fix: comprehensive boot warnings and exceptions — fixable silenced, unfixable diagnosed

Build system (5 gaps hardened):
- COOKBOOK_OFFLINE defaults to true (fork-mode)
- normalize_patch handles diff -ruN format
- New 'repo validate-patches' command (25/25 relibc patches)
- 14 patched Qt/Wayland/display recipes added to protected list
- relibc archive regenerated with current patch chain

Boot fixes (fixable):
- Full ISO EFI partition: 16 MiB → 1 MiB (matches mini, BIOS hardcoded 2 MiB offset)
- D-Bus system bus: absolute /usr/bin/dbus-daemon path (was skipped)
- redbear-sessiond: absolute /usr/bin/redbear-sessiond path (was skipped)
- daemon framework: silenced spurious INIT_NOTIFY warnings for oneshot_async services (P0-daemon-silence-init-notify.patch)
- udev-shim: demoted INIT_NOTIFY warning to INFO (expected for oneshot_async)
- relibc: comprehensive named semaphores (sem_open/close/unlink) replacing upstream todo!() stubs
- greeterd: Wayland socket timeout 15s → 30s (compositor DRM wait)
- greeter-ui: built and linked (header guard unification, sem_compat stubs removed)
- mc: un-ignored in both configs, fixed glib/libiconv/pcre2 transitive deps
- greeter config: removed stale keymapd dependency from display/greeter services
- prefix toolchain: relibc headers synced, _RELIBC_STDLIB_H guard unified

Unfixable (diagnosed, upstream):
- i2c-hidd: abort on no-I2C-hardware (QEMU) — process::exit → relibc abort
- kded6/greeter-ui: page fault 0x8 — Qt library null deref
- Thread panics fd != -1 — Rust std library on Redox
- DHCP timeout / eth0 MAC — QEMU user-mode networking
- hwrngd/thermald — no hardware RNG/thermal in VM
- live preload allocation — BIOS memory fragmentation, continues on demand
Commit f31522130f, parent a5f97b6632, 2026-05-05 20:20:37 +01:00
81834 changed files with 11051982 additions and 108 deletions
The V4 Garbage Collector {#v4-garbage-collector}
========================
ChangeLog
---------
- < 6.8: There was little documentation, and the gc was STW mark&sweep
- 6.8: The gc became incremental (with a stop-the-world sweep phase)
- 6.8: Sweep was made incremental, too
Glossary:
------------
- V4: The ECMAScript engine used in Qt (qtdeclarative)
- gc: abbreviation for garbage collector
- mutator: the actual user application (in contrast to the collector)
- roots: A set of pointers from which the gc process starts
- fast roots: A subset of the roots which are not protected by a barrier
- write barrier: A set of instructions executed on each write
- stop the world: not concurrent
- concurrent gc: gc and mutator can run "at the same time"; this can mean either
  + incremental: collector and mutator run in the same thread, but in certain time intervals the mutator is paused, a chunk of the collector is executed, and then the mutator resumes. This repeats until the gc cycle is finished
  + parallel: gc and mutator operations run in different threads
- precise: a gc is precise if for every value it can know whether it's a pointer to a heap object (a non-precise gc can't in general distinguish pointers from pointer-sized numbers)
- floating garbage: items that are not live, but nevertheless end up surviving the gc cycle
- generational: generational refers to dividing objects into different "generations" based on how many collection cycles they survived. This technique is used in garbage collection to improve performance by focusing on collecting the most recently created objects first.
- moving: A gc is moving if it can relocate objects in memory. Care must be taken to update pointers pointing to them.
Overview:
---------
Since Qt 6.8, V4 uses an incremental, precise mark-and-sweep gc algorithm. It is neither generational nor moving.
In the mark phase, each heap-item can be in one of three states:
1. unvisited ("white"): The gc has not seen this item at all
2. seen ("grey"): All grey items have been discovered by the gc, but items directly reachable from them have (potentially) not been visited.
3. finished ("black"): Not only has the item been seen, but also all items directly reachable from it have been seen.
Items are black if they have their corresponding bit set in the black-object bitmap. They are grey if they are stored at least once in the MarkStack, a stack data structure. Items are white if they are neither grey nor black. Note that black items must never be pushed to the MarkStack (otherwise we could easily end up with endless cycles), but items already _on_ the MarkStack can be black:
If an item has been pushed multiple times before it has been popped, this can happen. It causes some additional work to revisit its fields, but that is safe, as after popping the item will be black, and thus we won't keep on repeatedly pushing the same item on the mark stack.
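The tri-color bookkeeping above can be sketched in a few lines. This is an illustrative model, not Qt's actual types: "black" lives in a bitmap, "grey" means membership in the mark stack, and "white" is the absence of both.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal sketch of the tri-color state described above.
struct MiniHeap {
    std::vector<bool> blackBitmap;    // one bit per heap item
    std::vector<size_t> markStack;    // indices of grey items

    explicit MiniHeap(size_t nItems) : blackBitmap(nItems, false) {}

    bool isBlack(size_t item) const { return blackBitmap[item]; }

    // Push an item for later visiting, unless it is already black.
    // This is the invariant from the text: black items are never pushed,
    // but an item pushed twice may turn black while still on the stack.
    void shadeGrey(size_t item) {
        if (!isBlack(item))
            markStack.push_back(item);
    }

    // Pop one grey item and blacken it; revisiting a now-black duplicate
    // still on the stack is harmless, it just repeats some work.
    size_t popAndBlacken() {
        size_t item = markStack.back();
        markStack.pop_back();
        blackBitmap[item] = true;
        return item;
    }
};
```

Note how the duplicate-push case plays out: a white item pushed twice leaves a stale (now black) entry on the stack, which is safe to pop again.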
The roots consist of
- the engine-global objects (namely the internal classes for the JS globals)
- all strings (and symbols) stored in the identifier table and
- all actively linked compilation units.
- Moreover, the values on the JS stack are also treated as roots; more precisely as fast roots.
- Additionally, all persistent values (everything stored in a QJSValue as well as all bound functions of QQmlBindings) are added to the roots.
- Lastly, all QObjectWrapper of QObjects with C++ ownership, or which are rooted in or parented to a QObject with C++ ownership are added to the root set.
All roots are added to the MarkStack. Then, during the mark phase, entries are:
1. popped from the MarkStack, and
2. all heap-objects reachable from them are added to the MarkStack (unless they are already black).
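The pop-and-push loop above can be sketched over a toy object graph (adjacency lists; names are illustrative, not Qt's real heap layout):

```cpp
#include <cassert>
#include <vector>

struct ToyGraph {
    std::vector<std::vector<int>> edges; // edges[i] = objects reachable from i
};

// Mark everything reachable from the given roots, returning the final
// black set. Children are pushed only if not already black, so cycles
// terminate.
std::vector<bool> markFromRoots(const ToyGraph &g, const std::vector<int> &roots)
{
    std::vector<bool> black(g.edges.size(), false);
    std::vector<int> markStack(roots.begin(), roots.end());
    while (!markStack.empty()) {
        int obj = markStack.back();
        markStack.pop_back();
        black[obj] = true;
        for (int child : g.edges[obj])
            if (!black[child])
                markStack.push_back(child);
    }
    return black;
}
```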
To avoid freeing values that are still live, namely values that were on the heap at the start of the gc cycle but were moved to the stack before they could be visited, the stack is rescanned before the sweep phase.
To prevent unmarked heap-items from being moved from one heap item (or the stack) to an already marked heap-item (and consequently ending up hidden from the gc), we employ a Dijkstra-style write barrier: any item that becomes reachable from another heap-item is marked grey (unless it is already black).
While a gc cycle is ongoing, allocations are white. To ensure correct behavior,
and that newly allocated objects that need marking are correctly marked, we
employ the above-mentioned write barriers and make use of stack scanning
in-between phases.
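A Dijkstra-style barrier can be sketched as follows (illustrative names, not Qt's API): whenever a heap slot is overwritten with a reference to another item during an active gc cycle, the *target* is shaded grey unless it is already black.

```cpp
#include <cassert>
#include <vector>

// Toy heap: items are indices, slots hold references to items.
struct BarrierHeap {
    std::vector<bool> black;
    std::vector<int> markStack;       // the grey set
    std::vector<int> slots;           // slots[i] = item referenced by slot i
    bool gcActive = false;

    void writeSlot(int slot, int target) {
        // Dijkstra barrier: shade the newly referenced item grey, so it
        // cannot end up hidden behind an already-black item.
        if (gcActive && !black[target])
            markStack.push_back(target);
        slots[slot] = target;
    }
};
```

When no cycle is active, the barrier degenerates to a plain store.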
The gc state machine
--------------------
To facilitate incremental garbage collection, the gc algorithm is divided into the following stages:
1. markStart, the atomic initialization phase, in which the MarkStack is initialized, and a flag is set on the engine indicating that incremental gc is active
2. markGlobalObject, an atomic phase in which the global object, the engine's identifier table and the currently linked compilation units are marked
3. markJSStack, an atomic phase in which the JS stack is marked
4. initMarkPersistentValues: Atomic phase. If there are persistent values, some setup is done for the next phase.
5. markPersistentValues: An interruptible phase in which all persistent values are marked.
6. initMarkWeakValues: Atomic phase. If there are weak values, some setup is done for the next phase
7. markWeakValues: An interruptible phase which takes care of marking the QObjectWrappers
8. markDrain: An interruptible phase. While the MarkStack is not empty, the marking algorithm runs.
9. markReady: An atomic phase which currently does nothing, but could be used for e.g. logging statistics
10. initCallDestroyObjects: An atomic phase, in which the stack is rescanned, the MarkStack is drained once more. This ensures that all live objects are really marked.
Afterwards, the iteration over all the QObjectWrappers is prepared.
11. callDestroyObject: An interruptible phase, where we call destroyObject on all non-marked QObjectWrappers.
12. freeWeakMaps: An atomic phase in which we remove references to dead objects from live weak maps.
13. freeWeakSets: Same as the last phase, but for weak sets
14. handleQObjectWrappers: An atomic phase in which pending references to QObjectWrappers are cleared
15. multiple sweep phases: Atomic phases, in which we do the actual sweeping to free up memory. Note that this will also call destroy on objects marked with `V4_NEEDS_DESTROY`. There is one phase for each of the various allocators (identifier table, block allocator, huge item allocator, IC allocator)
16. updateMetaData: Updates the black bitmaps, the usage statistics, and marks the gc cycle as done.
17. invalid, the "not-running" stage of the state machine.
To avoid constantly having to query the timer, even interruptible phases run for a fixed number of steps before checking whether there's a timeout.
Most steps are straightforward; only the persistent and weak value phases require some explanation as to why it's safe to interrupt the process: the important thing to note is that we never remove elements from those structures while we're undergoing gc, and that we only ever append at the end. So we will see any new values that might be added.
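The batching idea can be sketched like this (illustrative, mirroring the idea rather than Qt's GCStateMachine): an interruptible phase processes work in fixed-size batches and only consults the deadline between batches, so the timer is not queried on every single step.

```cpp
#include <cassert>
#include <functional>

struct Phase {
    int remainingWork;

    // Returns true if the phase completed, false if it was interrupted
    // and must be resumed in a later gc increment.
    bool run(std::function<bool()> deadlineExpired, int batchSize = 1000) {
        while (remainingWork > 0) {
            int batch = remainingWork < batchSize ? remainingWork : batchSize;
            remainingWork -= batch;   // do one cheap chunk of work
            if (remainingWork > 0 && deadlineExpired())
                return false;         // check the timer only per batch
        }
        return true;
    }
};
```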
Persistent Values
-----------------
As shown in the state machine above, the handling of persistent values is interruptible (both for "real" persistent values, and also for weak values, which are likewise stored in a `PersistentValueStorage` data structure).
This is done by storing the `PersistentValueStorage::Iterator` in the gc state machine. That in turn raises two questions: Is the iterator safe against invalidation? And do we actually keep track of newly added persistent values?
The latter part is easy to answer: Any newly added weak value is marked when we are in a gc cycle, so the marking part is handled. Sweeping only cares about unmarked values, so that's safe too.
To answer the question about iterator validity, we have to look at the `PersistentValueStorage` data structure. Conceptually, it's a forward-list of `Page`s (arrays of `QV4::Value`). Pages are ref-counted, and only unlinked from the list when the ref-count reaches zero. Moreover, iterators also increase the ref-count.
Therefore, as long as we iterate over the list, we don't risk having the pointer point to a deleted `Page`, even if all values in it have been freed. Freeing values is unproblematic for the gc: it will simply encounter `undefined` values, something it is already prepared to handle.
Pages are also kept in the list while they are not deleted, so iteration works as expected. The only adjustment we need to make is to disable an optimization: when searching for a Page with an empty slot, we might have to traverse the whole `PersistentValueStorage`. To avoid that, the first Page with empty slots is normally moved to the front of the list. However, that would mean that we could potentially skip over it during the marking phase. We sidestep that issue by simply disabling the optimization. This area could easily be improved in the future by keeping track of the first page with free slots in a different way.
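The ref-counting argument can be condensed into a sketch (illustrative names, not the real `PersistentValueStorage`): an iterator holds a reference, so the page it points into is never unlinked under it, even if every value in it has been freed.

```cpp
#include <array>
#include <cassert>

struct Page {
    Page *next = nullptr;
    int refCount = 1;                 // the list itself holds one reference
    std::array<int, 4> values{};      // stand-in for the QV4::Value slots
    bool unlinked = false;
};

void deref(Page *p) {
    if (--p->refCount == 0)
        p->unlinked = true;           // a real impl would unlink and delete
}

// RAII iterator: pins its page for as long as it exists.
struct Iterator {
    Page *page;
    explicit Iterator(Page *p) : page(p) { if (p) ++p->refCount; }
    ~Iterator() { if (page) deref(page); }
};
```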
Custom marking
---------------
Some parts of the engine have to deviate from the general scheme described in the overview, as they don't integrate with the normal WriteBarrier. They are wrapped in the callback of the `QV4::WriteBarrier::markCustom` function, so that they can easily be found via "Find references".
1. `QJSValue::encode`. QJSValues act as roots, and don't act as normal heap-items. When the current value of a QJSValue is overwritten with another heap-item, we also mark the new object. That aligns nicely with the gc barrier.
2. The same applies to `PersistentValue::set`.
3. The identifier table is another root; if a new string is inserted there during gc, it is (conservatively) marked black.
4. PropertyKeys should for all intents and purposes use a write barrier (and have a deleted operator=). But since they are an ad-hoc union type of numbers, Strings and Symbols, with the additional requirement of being trivial, it turned out to be easier to simply mark them in `SharedInternalClassDataPrivate<PropertyKey>::set` (for PropertyKeys that had already been allocated), and to rely on the fact that we allocate black for newly created PropertyKeys.
5. `QV4::Heap::InternalClass` also requires special handling, as it uses plain pointers to Heap objects, notably to the prototype and to the parent internal class. As the class is somewhat special in any case (due to the usage of `SharedInternalClassData` and especially due to the usage of `SharedInternalClassData<PropertyKey>`, see the notes on PropertyKey above), we use some bespoke sections for now. This could probably be cleaned up.
Motivation for using a Dijkstra style barrier:
----------------------------------------------
- Deletion barriers are hard to support with the current PropertyKey design
- Steele style barriers cause more work (they have to revisit more
  objects). Furthermore, allocations were initially black (though that
  was later changed), so it wouldn't have made sense to optimize for
  a minimal amount of floating garbage.
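The contrast between the two barrier styles can be sketched side by side (simplified; neither is Qt's literal implementation). On a write `source.field = target` during marking:

```cpp
#include <cassert>
#include <vector>

struct TriColor {
    std::vector<bool> black;
    std::vector<int> grey;            // mark stack
};

// Dijkstra: shade the *target* grey, so it can never end up hidden
// behind a black object. The source stays black.
void dijkstraBarrier(TriColor &s, int /*source*/, int target) {
    if (!s.black[target])
        s.grey.push_back(target);
}

// Steele: revert the *source* back to grey, forcing the collector to
// revisit all of its fields later, which typically means more re-scanning.
void steeleBarrier(TriColor &s, int source, int /*target*/) {
    if (s.black[source]) {
        s.black[source] = false;
        s.grey.push_back(source);
    }
}
```

The Steele variant admits less floating garbage but re-greys whole objects, which matches the "more work" point above.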
Sweep Phase and finalizers:
---------------------------
A story for another day
Allocator design:
-----------------
Your explanation is in another castle.
// Copyright (C) 2016 The Qt Company Ltd.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR LGPL-3.0-only OR GPL-2.0-only OR GPL-3.0-only
// Qt-Security score:critical reason:low-level-memory-management
#ifndef QV4HEAP_P_H
#define QV4HEAP_P_H
//
// W A R N I N G
// -------------
//
// This file is not part of the Qt API. It exists purely as an
// implementation detail. This header file may change from version to
// version without notice, or even be removed.
//
// We mean it.
//
#include <private/qv4global_p.h>
#include <private/qv4mmdefs_p.h>
#include <private/qv4writebarrier_p.h>
#include <private/qv4vtable_p.h>
#include <QtCore/QSharedPointer>
// To check if Heap::Base::init is called (meaning, all subclasses did their init and called their
// parent's init all up the inheritance chain), define QML_CHECK_INIT_DESTROY_CALLS below.
#undef QML_CHECK_INIT_DESTROY_CALLS
QT_BEGIN_NAMESPACE
namespace QV4 {
namespace Heap {
template <typename T, size_t o>
struct Pointer {
static constexpr size_t offset = o;
T operator->() const { return get(); }
operator T () const { return get(); }
Base *base();
void set(EngineBase *e, T newVal) {
WriteBarrier::write(e, base(), &ptr, reinterpret_cast<Base *>(newVal));
}
T get() const { return reinterpret_cast<T>(ptr); }
template <typename Type>
Type *cast() { return static_cast<Type *>(ptr); }
Base *heapObject() const { return ptr; }
private:
Base *ptr;
};
typedef Pointer<char *, 0> V4PointerCheck;
Q_STATIC_ASSERT(std::is_trivial_v<V4PointerCheck>);
struct Q_QML_EXPORT Base {
void *operator new(size_t) = delete;
static void markObjects(Base *, MarkStack *);
Pointer<InternalClass *, 0> internalClass;
inline ReturnedValue asReturnedValue() const;
inline void mark(QV4::MarkStack *markStack);
inline bool isMarked() const {
const HeapItem *h = reinterpret_cast<const HeapItem *>(this);
Chunk *c = h->chunk();
Q_ASSERT(!Chunk::testBit(c->extendsBitmap, h - c->realBase()));
return Chunk::testBit(c->blackBitmap, h - c->realBase());
}
inline void setMarkBit() {
const HeapItem *h = reinterpret_cast<const HeapItem *>(this);
Chunk *c = h->chunk();
Q_ASSERT(!Chunk::testBit(c->extendsBitmap, h - c->realBase()));
return Chunk::setBit(c->blackBitmap, h - c->realBase());
}
inline bool inUse() const {
const HeapItem *h = reinterpret_cast<const HeapItem *>(this);
Chunk *c = h->chunk();
Q_ASSERT(!Chunk::testBit(c->extendsBitmap, h - c->realBase()));
return Chunk::testBit(c->objectBitmap, h - c->realBase());
}
void *operator new(size_t, Managed *m) { return m; }
void *operator new(size_t, Base *m) { return m; }
void operator delete(void *, Base *) {}
void init() { _setInitialized(); }
void destroy() { _setDestroyed(); }
#ifdef QML_CHECK_INIT_DESTROY_CALLS
enum { Uninitialized = 0, Initialized, Destroyed } _livenessStatus;
void _checkIsInitialized() {
if (_livenessStatus == Uninitialized)
fprintf(stderr, "ERROR: use of object '%s' before call to init() !!\n",
vtable()->className);
else if (_livenessStatus == Destroyed)
fprintf(stderr, "ERROR: use of object '%s' after call to destroy() !!\n",
vtable()->className);
Q_ASSERT(_livenessStatus == Initialized);
}
void _checkIsDestroyed() {
if (_livenessStatus == Initialized)
fprintf(stderr, "ERROR: object '%s' was never destroyed completely !!\n",
vtable()->className);
Q_ASSERT(_livenessStatus == Destroyed);
}
void _setInitialized() { Q_ASSERT(_livenessStatus == Uninitialized); _livenessStatus = Initialized; }
void _setDestroyed() {
if (_livenessStatus == Uninitialized)
fprintf(stderr, "ERROR: attempting to destroy an uninitialized object '%s' !!\n",
vtable()->className);
else if (_livenessStatus == Destroyed)
fprintf(stderr, "ERROR: attempting to destroy repeatedly object '%s' !!\n",
vtable()->className);
Q_ASSERT(_livenessStatus == Initialized);
_livenessStatus = Destroyed;
}
#else
Q_ALWAYS_INLINE void _checkIsInitialized() {}
Q_ALWAYS_INLINE void _checkIsDestroyed() {}
Q_ALWAYS_INLINE void _setInitialized() {}
Q_ALWAYS_INLINE void _setDestroyed() {}
#endif
};
Q_STATIC_ASSERT(std::is_trivial_v<Base>);
// This class needs to consist only of pointer sized members to allow
// for a size/offset translation when cross-compiling between 32- and
// 64-bit.
Q_STATIC_ASSERT(std::is_standard_layout<Base>::value);
Q_STATIC_ASSERT(offsetof(Base, internalClass) == 0);
Q_STATIC_ASSERT(sizeof(Base) == QT_POINTER_SIZE);
inline
void Base::mark(QV4::MarkStack *markStack)
{
Q_ASSERT(inUse());
const HeapItem *h = reinterpret_cast<const HeapItem *>(this);
Chunk *c = h->chunk();
size_t index = h - c->realBase();
Q_ASSERT(!Chunk::testBit(c->extendsBitmap, index));
quintptr *bitmap = c->blackBitmap + Chunk::bitmapIndex(index);
quintptr bit = Chunk::bitForIndex(index);
if (!(*bitmap & bit)) {
*bitmap |= bit;
markStack->push(this);
}
}
template<typename T, size_t o>
Base *Pointer<T, o>::base() {
Base *base = reinterpret_cast<Base *>(this) - (offset/sizeof(Base *));
Q_ASSERT(base->inUse());
return base;
}
}
#ifdef QT_NO_QOBJECT
template <class T>
struct QV4QPointer {
};
#else
template <class T>
struct QV4QPointer {
void init()
{
d = nullptr;
qObject = nullptr;
}
void init(T *o)
{
Q_ASSERT(d == nullptr);
Q_ASSERT(qObject == nullptr);
if (o) {
d = QtSharedPointer::ExternalRefCountData::getAndRef(o);
qObject = o;
}
}
void destroy()
{
if (d && !d->weakref.deref())
delete d;
d = nullptr;
qObject = nullptr;
}
T *data() const {
return d == nullptr || d->strongref.loadRelaxed() == 0 ? nullptr : qObject;
}
operator T*() const { return data(); }
inline T* operator->() const { return data(); }
QV4QPointer &operator=(T *o)
{
if (d)
destroy();
init(o);
return *this;
}
bool isNull() const noexcept
{
return !isValid() || d->strongref.loadRelaxed() == 0;
}
bool isValid() const noexcept
{
return d != nullptr && qObject != nullptr;
}
private:
QtSharedPointer::ExternalRefCountData *d;
T *qObject;
};
Q_STATIC_ASSERT(std::is_trivial_v<QV4QPointer<QObject>>);
#endif
}
QT_END_NAMESPACE
#endif
// Copyright (C) 2016 The Qt Company Ltd.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR LGPL-3.0-only OR GPL-2.0-only OR GPL-3.0-only
// Qt-Security score:critical reason:low-level-memory-management
#ifndef QV4GC_H
#define QV4GC_H
//
// W A R N I N G
// -------------
//
// This file is not part of the Qt API. It exists purely as an
// implementation detail. This header file may change from version to
// version without notice, or even be removed.
//
// We mean it.
//
#include <private/qv4global_p.h>
#include <private/qv4value_p.h>
#include <private/qv4scopedvalue_p.h>
#include <private/qv4object_p.h>
#include <private/qv4mmdefs_p.h>
#include <QList>
#define MM_DEBUG 0
QT_BEGIN_NAMESPACE
namespace QV4 {
// Iterate a potentially growing container without querying the size on each iteration.
// Return the first index where p doesn't hold, or the end of the container.
template<typename Container, typename UnaryPred>
typename Container::size_type reiterate(
Container &container, typename Container::size_type first, UnaryPred &&p)
{
auto last = container.size();
while (first < last) {
do {
if (!p(first))
return first;
} while (++first < last);
Q_ASSERT(first == last);
// Re-fetch the container size.
// If the container hasn't grown, we're done since last == first, still.
// Otherwise, do one more round via the "while (first < last)" above.
last = container.size();
}
// Returning last here allows input of an out of range first, saving us
// one size check ahead of running this.
return last;
}
// index based version of partition to handle potential growth at the end
template<typename Container, class UnaryPred>
typename Container::size_type partition(Container &container, UnaryPred &&p)
{
// Figure out the first entry where p doesn't hold.
auto first = reiterate(container, 0, p);
// Iterate the remaining entries and swap any entry where p holds
// to the (moving) end of the front section of the container.
// Any time we do that, the front section grows by 1. Therefore, the back
// section can only contain entries where p doesn't hold in the end.
reiterate(container, first + 1, [&p, &container, &first](const auto i) {
if (p(i)) {
// It's important to re-resolve container.at(i) for the std::swap.
// The container may have grown as result of determining p, thereby
// invalidating any reference to an entry taken before.
std::swap(container.at(i), container.at(first++));
}
return true;
});
return first;
}
struct GCData { virtual ~GCData(){};};
struct GCIteratorStorage {
PersistentValueStorage::Iterator it{nullptr, 0};
};
struct GCStateMachine {
Q_GADGET_EXPORT(Q_QML_EXPORT)
public:
enum GCState {
MarkStart = 0,
MarkGlobalObject,
MarkJSStack,
InitMarkPersistentValues,
MarkPersistentValues,
InitMarkWeakValues,
MarkWeakValues,
MarkDrain,
MarkReady,
InitCallDestroyObjects,
// The following needs to be after InitCallDestroyObjects,
// even if it normally would run before it, to ensure that in
// a normal incremental run the stack is redrained before this
// is run as we make use of that knowledge in a test.
CrossValidateIncrementalMarkPhase,
CallDestroyObjects,
FreeWeakMaps,
FreeWeakSets,
HandleQObjectWrappers,
DoSweep,
Invalid,
Count,
};
Q_ENUM(GCState)
struct StepTiming {
qint64 rolling_sum = 0;
qint64 count = 0;
};
struct GCStateInfo {
using ExtraData = std::variant<std::monostate, GCIteratorStorage>;
GCState (*execute)(GCStateMachine *, ExtraData &) = nullptr; // Function to execute for this state; returns the next state
bool breakAfter{false};
};
using ExtraData = GCStateInfo::ExtraData;
GCState state{GCState::Invalid};
std::chrono::microseconds timeLimit{};
QDeadlineTimer deadline;
std::array<GCStateInfo, GCState::Count> stateInfoMap;
std::array<StepTiming, GCState::Count> executionTiming{};
MemoryManager *mm = nullptr;
ExtraData stateData; // extra data for specific states
bool collectTimings = false;
#ifdef QT_BUILD_INTERNAL
// This is used only to simplify testing.
using BitmapError = std::tuple<std::size_t, std::size_t, quintptr>;
std::vector<BitmapError> bitmapErrors;
#endif
GCStateMachine();
inline void step() {
if (!inProgress()) {
reset();
}
transition();
}
inline bool inProgress() {
return state != GCState::Invalid;
}
inline void reset() {
state = GCState::MarkStart;
}
Q_QML_EXPORT void transition();
inline void handleTimeout(GCState state) {
Q_UNUSED(state);
}
};
using GCState = GCStateMachine::GCState;
using GCStateInfo = GCStateMachine::GCStateInfo;
struct ChunkAllocator;
struct MemorySegment;
struct BlockAllocator {
BlockAllocator(ChunkAllocator *chunkAllocator, ExecutionEngine *engine)
: chunkAllocator(chunkAllocator), engine(engine)
{
memset(freeBins, 0, sizeof(freeBins));
}
enum { NumBins = 8 };
static inline size_t binForSlots(size_t nSlots) {
return nSlots >= NumBins ? NumBins - 1 : nSlots;
}
HeapItem *allocate(size_t size, bool forceAllocation = false);
size_t totalSlots() const {
return Chunk::AvailableSlots*chunks.size();
}
size_t allocatedMem() const {
return chunks.size()*Chunk::DataSize;
}
size_t usedMem() const {
uint used = 0;
for (auto c : chunks)
used += c->nUsedSlots()*Chunk::SlotSize;
return used;
}
void sweep();
void freeAll();
void resetBlackBits();
// bump allocations
HeapItem *nextFree = nullptr;
size_t nFree = 0;
size_t usedSlotsAfterLastSweep = 0;
HeapItem *freeBins[NumBins];
ChunkAllocator *chunkAllocator;
ExecutionEngine *engine;
std::vector<Chunk *> chunks;
uint *allocationStats = nullptr;
};
struct HugeItemAllocator {
HugeItemAllocator(ChunkAllocator *chunkAllocator, ExecutionEngine *engine)
: chunkAllocator(chunkAllocator), engine(engine)
{}
HeapItem *allocate(size_t size);
void sweep(ClassDestroyStatsCallback classCountPtr);
void freeAll();
void resetBlackBits();
size_t usedMem() const {
size_t used = 0;
for (const auto &c : chunks)
used += c.size;
return used;
}
ChunkAllocator *chunkAllocator;
ExecutionEngine *engine;
struct HugeChunk {
MemorySegment *segment;
Chunk *chunk;
size_t size;
};
std::vector<HugeChunk> chunks;
};
class Q_QML_EXPORT MemoryManager
{
Q_DISABLE_COPY(MemoryManager);
public:
MemoryManager(ExecutionEngine *engine);
~MemoryManager();
template <typename ToBeMarked>
friend struct GCCriticalSection;
// TODO: this is only for 64bit (and x86 with SSE/AVX), so extend it for other architectures to be slightly more efficient (meaning, align on 8-byte boundaries).
// Note: all occurrences of "16" in alloc/dealloc are also due to the alignment.
constexpr static inline std::size_t align(std::size_t size)
{ return (size + Chunk::SlotSize - 1) & ~(Chunk::SlotSize - 1); }
/* NOTE: allocManaged comes in various overloads. If size is not passed explicitly
sizeof(ManagedType::Data) is used for size. However, there are quite a few cases
where we allocate more than sizeof(ManagedType::Data); that's generally the case
when the Object has a ValueArray member.
If no internal class pointer is provided, ManagedType::defaultInternalClass(engine)
will be used as the internal class.
*/
template<typename ManagedType>
inline typename ManagedType::Data *allocManaged(std::size_t size, Heap::InternalClass *ic)
{
Q_STATIC_ASSERT(std::is_trivial_v<typename ManagedType::Data>);
size = align(size);
typename ManagedType::Data *d = static_cast<typename ManagedType::Data *>(allocData(size));
d->internalClass.set(engine, ic);
Q_ASSERT(d->internalClass && d->internalClass->vtable);
Q_ASSERT(ic->vtable == ManagedType::staticVTable());
return d;
}
template<typename ManagedType>
inline typename ManagedType::Data *allocManaged(Heap::InternalClass *ic)
{
return allocManaged<ManagedType>(sizeof(typename ManagedType::Data), ic);
}
template<typename ManagedType>
inline typename ManagedType::Data *allocManaged(std::size_t size, InternalClass *ic)
{
return allocManaged<ManagedType>(size, ic->d());
}
template<typename ManagedType>
inline typename ManagedType::Data *allocManaged(InternalClass *ic)
{
return allocManaged<ManagedType>(sizeof(typename ManagedType::Data), ic);
}
template<typename ManagedType>
inline typename ManagedType::Data *allocManaged(std::size_t size)
{
Scope scope(engine);
Scoped<InternalClass> ic(scope, ManagedType::defaultInternalClass(engine));
return allocManaged<ManagedType>(size, ic);
}
template<typename ManagedType>
inline typename ManagedType::Data *allocManaged()
{
auto constexpr size = sizeof(typename ManagedType::Data);
Scope scope(engine);
Scoped<InternalClass> ic(scope, ManagedType::defaultInternalClass(engine));
return allocManaged<ManagedType>(size, ic);
}
template <typename ObjectType>
typename ObjectType::Data *allocateObject(Heap::InternalClass *ic)
{
Heap::Object *o = allocObjectWithMemberData(ObjectType::staticVTable(), ic->size);
o->internalClass.set(engine, ic);
Q_ASSERT(o->internalClass.get() && o->vtable());
Q_ASSERT(o->vtable() == ObjectType::staticVTable());
return static_cast<typename ObjectType::Data *>(o);
}
template <typename ObjectType>
typename ObjectType::Data *allocateObject(InternalClass *ic)
{
return allocateObject<ObjectType>(ic->d());
}
template <typename ObjectType>
typename ObjectType::Data *allocateObject()
{
Scope scope(engine);
Scoped<InternalClass> ic(scope, ObjectType::defaultInternalClass(engine));
ic = ic->changeVTable(ObjectType::staticVTable());
ic = ic->changePrototype(ObjectType::defaultPrototype(engine)->d());
return allocateObject<ObjectType>(ic);
}
template <typename ManagedType, typename Arg1>
typename ManagedType::Data *allocWithStringData(std::size_t unmanagedSize, Arg1 &&arg1)
{
typename ManagedType::Data *o = reinterpret_cast<typename ManagedType::Data *>(allocString(unmanagedSize));
o->internalClass.set(engine, ManagedType::defaultInternalClass(engine));
Q_ASSERT(o->internalClass && o->internalClass->vtable);
o->init(std::forward<Arg1>(arg1));
return o;
}
template <typename ObjectType, typename... Args>
typename ObjectType::Data *allocObject(Heap::InternalClass *ic, Args&&... args)
{
typename ObjectType::Data *d = allocateObject<ObjectType>(ic);
d->init(std::forward<Args>(args)...);
return d;
}
template <typename ObjectType, typename... Args>
typename ObjectType::Data *allocObject(InternalClass *ic, Args&&... args)
{
typename ObjectType::Data *d = allocateObject<ObjectType>(ic);
d->init(std::forward<Args>(args)...);
return d;
}
template <typename ObjectType, typename... Args>
typename ObjectType::Data *allocate(Args&&... args)
{
Scope scope(engine);
Scoped<ObjectType> t(scope, allocateObject<ObjectType>());
t->d_unchecked()->init(std::forward<Args>(args)...);
return t->d();
}
template <typename ManagedType, typename... Args>
typename ManagedType::Data *alloc(Args&&... args)
{
Scope scope(engine);
Scoped<ManagedType> t(scope, allocManaged<ManagedType>());
t->d_unchecked()->init(std::forward<Args>(args)...);
return t->d();
}
void runGC();
bool tryForceGCCompletion();
void runFullGC();
void dumpStats() const;
size_t getRegularItemsMem() const;
size_t getAllocatedMem() const;
size_t getLargeItemsMem() const;
// called when a JS object grows itself. Specifically: Heap::String::append
// and InternalClassDataPrivate<PropertyAttributes>.
void changeUnmanagedHeapSizeUsage(qptrdiff delta) { unmanagedHeapSize += delta; }
// called at the end of a gc cycle
void updateUnmanagedHeapSizeGCLimit();
template<typename ManagedType>
typename ManagedType::Data *allocIC()
{
Heap::Base *b = *allocate(&icAllocator, align(sizeof(typename ManagedType::Data)));
return static_cast<typename ManagedType::Data *>(b);
}
void registerWeakMap(Heap::MapObject *map);
void registerWeakSet(Heap::SetObject *set);
void onEventLoop();
//GC related methods
void setGCTimeLimit(int timeMs);
MarkStack* markStack() { return m_markStack.get(); }
protected:
/// expects size to be aligned
Heap::Base *allocString(std::size_t unmanagedSize);
Heap::Base *allocData(std::size_t size);
Heap::Object *allocObjectWithMemberData(const QV4::VTable *vtable, uint nMembers);
private:
enum {
MinUnmanagedHeapSizeGCLimit = 128 * 1024
};
public:
void collectFromJSStack(MarkStack *markStack) const;
void sweep(bool lastSweep = false, ClassDestroyStatsCallback classCountPtr = nullptr);
void cleanupDeletedQObjectWrappersInSweep();
bool isAboveUnmanagedHeapLimit()
{
const bool incrementalGCIsAlreadyRunning = m_markStack != nullptr;
const bool aboveUnmanagedHeapLimit = incrementalGCIsAlreadyRunning
? unmanagedHeapSize > 3 * unmanagedHeapSizeGCLimit / 2
: unmanagedHeapSize > unmanagedHeapSizeGCLimit;
return aboveUnmanagedHeapLimit;
}
private:
bool shouldRunGC() const;
HeapItem *allocate(BlockAllocator *allocator, std::size_t size)
{
const bool incrementalGCIsAlreadyRunning = m_markStack != nullptr;
bool didGCRun = false;
if (aggressiveGC) {
runFullGC();
didGCRun = true;
}
if (isAboveUnmanagedHeapLimit()) {
if (!didGCRun)
incrementalGCIsAlreadyRunning ? (void) tryForceGCCompletion() : runGC();
didGCRun = true;
}
if (size > Chunk::DataSize)
return hugeItemAllocator.allocate(size);
if (HeapItem *m = allocator->allocate(size))
return m;
if (!didGCRun && shouldRunGC())
runGC();
return allocator->allocate(size, true);
}
public:
QV4::ExecutionEngine *engine;
ChunkAllocator *chunkAllocator;
BlockAllocator blockAllocator;
BlockAllocator icAllocator;
HugeItemAllocator hugeItemAllocator;
PersistentValueStorage *m_persistentValues;
PersistentValueStorage *m_weakValues;
QList<Value *> m_pendingFreedObjectWrapperValue;
Heap::MapObject *weakMaps = nullptr;
Heap::SetObject *weakSets = nullptr;
std::unique_ptr<GCStateMachine> gcStateMachine{nullptr};
std::unique_ptr<MarkStack> m_markStack{nullptr};
std::size_t unmanagedHeapSize = 0; // the amount of bytes of heap that is not managed by the memory manager, but which is held onto by managed items.
std::size_t unmanagedHeapSizeGCLimit;
std::size_t usedSlotsAfterLastFullSweep = 0;
enum Blockness : quint8 {Unblocked, NormalBlocked, InCriticalSection };
Blockness gcBlocked = Unblocked;
bool aggressiveGC = false;
bool crossValidateIncrementalGC = false;
int allocationCount = 0;
size_t lastAllocRequestedSlots = 0;
struct Statistics {
size_t maxAllocatedMem = 0;
size_t maxUsedBeforeGC = 0;
size_t maxUsedAfterGC = 0;
uint allocations[BlockAllocator::NumBins];
};
struct CollectorStatistics
{
qint64 gcTime = 0;
size_t oldUnmanagedSize = 0;
size_t regularItemsBefore = 0;
size_t largeItemsBefore = 0;
size_t oldChunks = 0;
bool triggeredByUnmanagedHeap = false;
void start(MemoryManager *mm);
void step(MemoryManager *mm);
void end(MemoryManager *mm);
};
std::unique_ptr<Statistics> statistics;
std::unique_ptr<CollectorStatistics> collectorStatistics;
};
/*!
\internal
GCCriticalSection prevents the gc from running until it is destructed.
In its dtor, it runs a check whether we've reached the unmanaged heap limit,
and triggers a gc run if necessary.
Lastly, it can optionally mark an object passed to it before running the gc.
*/
template <typename ToBeMarked = void>
struct GCCriticalSection {
Q_DISABLE_COPY_MOVE(GCCriticalSection)
Q_NODISCARD_CTOR GCCriticalSection(QV4::ExecutionEngine *engine, ToBeMarked *toBeMarked = nullptr)
: m_engine(engine)
, m_oldState(std::exchange(engine->memoryManager->gcBlocked, MemoryManager::InCriticalSection))
, m_toBeMarked(toBeMarked)
{
// disallow nested critical sections
Q_ASSERT(m_oldState != MemoryManager::InCriticalSection);
}
~GCCriticalSection()
{
m_engine->memoryManager->gcBlocked = m_oldState;
if (m_oldState != MemoryManager::Unblocked)
if constexpr (!std::is_same_v<ToBeMarked, void>)
if (m_toBeMarked)
m_toBeMarked->markObjects(m_engine->memoryManager->markStack());
/* because we blocked the gc, we might be using too much memory on the unmanaged heap
and did not run the normal fixup logic. So recheck again, and trigger a gc run
if necessary */
if (!m_engine->memoryManager->isAboveUnmanagedHeapLimit())
return;
if (!m_engine->isGCOngoing) {
m_engine->memoryManager->runGC();
} else {
[[maybe_unused]] bool gcFinished = m_engine->memoryManager->tryForceGCCompletion();
Q_ASSERT(gcFinished);
}
}
private:
QV4::ExecutionEngine *m_engine;
MemoryManager::Blockness m_oldState;
ToBeMarked *m_toBeMarked;
};
}
QT_END_NAMESPACE
#endif // QV4GC_H
@@ -0,0 +1,346 @@
// Copyright (C) 2016 The Qt Company Ltd.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR LGPL-3.0-only OR GPL-2.0-only OR GPL-3.0-only
// Qt-Security score:critical reason:low-level-memory-management
#ifndef QV4MMDEFS_P_H
#define QV4MMDEFS_P_H
//
// W A R N I N G
// -------------
//
// This file is not part of the Qt API. It exists purely as an
// implementation detail. This header file may change from version to
// version without notice, or even be removed.
//
// We mean it.
//
#include <private/qv4global_p.h>
#include <private/qv4runtimeapi_p.h>
#include <QtCore/qalgorithms.h>
#include <QtCore/qmath.h>
QT_BEGIN_NAMESPACE
class QDeadlineTimer;
namespace QV4 {
struct MarkStack;
typedef void(*ClassDestroyStatsCallback)(const char *);
/*
* Chunks are the basic structure containing GC managed objects.
*
* Chunks are 64k aligned in memory, so that retrieving the Chunk pointer from a Heap object
* is a simple masking operation. Each Chunk has 4 bitmaps for managing purposes,
* and 32byte wide slots for the objects following afterwards.
*
* The black bitmaps are used for mark/sweep.
* The object bitmap has a bit set if this location represents the start of a Heap object.
 * The extends bitmap denotes the extent of an object. It has a cleared bit at the start of the object
* and a set bit for all following slots used by the object.
*
* Free memory has both used and extends bits set to 0.
*
* This gives the following operations when allocating an object of size s:
* Find s/Alignment consecutive free slots in the chunk. Set the object bit for the first
* slot to 1. Set the extends bits for all following slots to 1.
*
* All used slots can be found by object|extents.
*
* When sweeping, simply copy the black bits over to the object bits.
*
*/
struct HeapItem;
struct Chunk {
enum {
ChunkSize = 64*1024,
ChunkShift = 16,
SlotSize = 32,
SlotSizeShift = 5,
NumSlots = ChunkSize/SlotSize,
BitmapSize = NumSlots/8,
HeaderSize = 3*BitmapSize,
DataSize = ChunkSize - HeaderSize,
AvailableSlots = DataSize/SlotSize,
#if QT_POINTER_SIZE == 8
Bits = 64,
BitShift = 6,
#else
Bits = 32,
BitShift = 5,
#endif
EntriesInBitmap = BitmapSize/sizeof(quintptr)
};
quintptr blackBitmap[BitmapSize/sizeof(quintptr)];
quintptr objectBitmap[BitmapSize/sizeof(quintptr)];
quintptr extendsBitmap[BitmapSize/sizeof(quintptr)];
char data[ChunkSize - HeaderSize];
HeapItem *realBase();
HeapItem *first();
static Q_ALWAYS_INLINE size_t bitmapIndex(size_t index) {
return index >> BitShift;
}
static Q_ALWAYS_INLINE quintptr bitForIndex(size_t index) {
return static_cast<quintptr>(1) << (index & (Bits - 1));
}
static void setBit(quintptr *bitmap, size_t index) {
// Q_ASSERT(index >= HeaderSize/SlotSize && index < ChunkSize/SlotSize);
bitmap += bitmapIndex(index);
quintptr bit = bitForIndex(index);
*bitmap |= bit;
}
static void clearBit(quintptr *bitmap, size_t index) {
// Q_ASSERT(index >= HeaderSize/SlotSize && index < ChunkSize/SlotSize);
bitmap += bitmapIndex(index);
quintptr bit = bitForIndex(index);
*bitmap &= ~bit;
}
static bool testBit(quintptr *bitmap, size_t index) {
// Q_ASSERT(index >= HeaderSize/SlotSize && index < ChunkSize/SlotSize);
bitmap += bitmapIndex(index);
quintptr bit = bitForIndex(index);
return (*bitmap & bit);
}
static void setBits(quintptr *bitmap, size_t index, size_t nBits) {
// Q_ASSERT(index >= HeaderSize/SlotSize && index + nBits <= ChunkSize/SlotSize);
if (!nBits)
return;
bitmap += index >> BitShift;
index &= (Bits - 1);
while (1) {
size_t bitsToSet = qMin(nBits, Bits - index);
quintptr mask = static_cast<quintptr>(-1) >> (Bits - bitsToSet) << index;
*bitmap |= mask;
nBits -= bitsToSet;
if (!nBits)
return;
index = 0;
++bitmap;
}
}
static bool hasNonZeroBit(quintptr *bitmap) {
for (uint i = 0; i < EntriesInBitmap; ++i)
if (bitmap[i])
return true;
return false;
}
static uint lowestNonZeroBit(quintptr *bitmap) {
for (uint i = 0; i < EntriesInBitmap; ++i) {
if (bitmap[i]) {
quintptr b = bitmap[i];
return i*Bits + qCountTrailingZeroBits(b);
}
}
return 0;
}
uint nFreeSlots() const {
return AvailableSlots - nUsedSlots();
}
uint nUsedSlots() const {
uint usedSlots = 0;
for (uint i = 0; i < EntriesInBitmap; ++i) {
quintptr used = objectBitmap[i] | extendsBitmap[i];
usedSlots += qPopulationCount(used);
}
return usedSlots;
}
bool sweep(ClassDestroyStatsCallback classCountPtr);
void resetBlackBits();
bool sweep(ExecutionEngine *engine);
void freeAll(ExecutionEngine *engine);
void sortIntoBins(HeapItem **bins, uint nBins);
};
struct HeapItem {
union {
struct {
HeapItem *next;
size_t availableSlots;
} freeData;
quint64 payload[Chunk::SlotSize/sizeof(quint64)];
};
operator Heap::Base *() { return reinterpret_cast<Heap::Base *>(this); }
template<typename T>
T *as() { return static_cast<T *>(reinterpret_cast<Heap::Base *>(this)); }
Chunk *chunk() const {
return reinterpret_cast<Chunk *>(reinterpret_cast<quintptr>(this) >> Chunk::ChunkShift << Chunk::ChunkShift);
}
bool isBlack() const {
Chunk *c = chunk();
std::ptrdiff_t index = this - c->realBase();
return Chunk::testBit(c->blackBitmap, index);
}
bool isInUse() const {
Chunk *c = chunk();
std::ptrdiff_t index = this - c->realBase();
return Chunk::testBit(c->objectBitmap, index);
}
void setAllocatedSlots(size_t nSlots) {
// Q_ASSERT(size && !(size % sizeof(HeapItem)));
Chunk *c = chunk();
size_t index = this - c->realBase();
// Q_ASSERT(!Chunk::testBit(c->objectBitmap, index));
Chunk::setBit(c->objectBitmap, index);
Chunk::setBits(c->extendsBitmap, index + 1, nSlots - 1);
// for (uint i = index + 1; i < nBits - 1; ++i)
// Q_ASSERT(Chunk::testBit(c->extendsBitmap, i));
// Q_ASSERT(!Chunk::testBit(c->extendsBitmap, index));
}
// Doesn't report correctly for huge items
size_t size() const {
Chunk *c = chunk();
std::ptrdiff_t index = this - c->realBase();
Q_ASSERT(Chunk::testBit(c->objectBitmap, index));
// ### optimize me
std::ptrdiff_t end = index + 1;
while (end < Chunk::NumSlots && Chunk::testBit(c->extendsBitmap, end))
++end;
return (end - index)*sizeof(HeapItem);
}
};
inline HeapItem *Chunk::realBase()
{
return reinterpret_cast<HeapItem *>(this);
}
inline HeapItem *Chunk::first()
{
return reinterpret_cast<HeapItem *>(data);
}
Q_STATIC_ASSERT(sizeof(Chunk) == Chunk::ChunkSize);
Q_STATIC_ASSERT((1 << Chunk::ChunkShift) == Chunk::ChunkSize);
Q_STATIC_ASSERT(1 << Chunk::SlotSizeShift == Chunk::SlotSize);
Q_STATIC_ASSERT(sizeof(HeapItem) == Chunk::SlotSize);
Q_STATIC_ASSERT(QT_POINTER_SIZE*8 == Chunk::Bits);
Q_STATIC_ASSERT((1 << Chunk::BitShift) == Chunk::Bits);
struct Q_QML_EXPORT MarkStack {
MarkStack(ExecutionEngine *engine);
~MarkStack() { /* we drain manually */ }
void push(Heap::Base *m) {
*(m_top++) = m;
if (m_top < m_softLimit)
return;
// If at or above soft limit, partition the remaining space into at most 64 segments and
// allow one C++ recursion of drain() per segment, plus one for the fence post.
const quintptr segmentSize = qNextPowerOfTwo(quintptr(m_hardLimit - m_softLimit) / 64u);
if (m_drainRecursion * segmentSize <= quintptr(m_top - m_softLimit)) {
++m_drainRecursion;
drain();
--m_drainRecursion;
} else if (m_top == m_hardLimit) {
qFatal("GC mark stack overrun. Either simplify your application or "
"increase QV4_GC_MAX_STACK_SIZE");
}
}
bool isEmpty() const { return m_top == m_base; }
qptrdiff remainingBeforeSoftLimit() const
{
return m_softLimit - m_top;
}
ExecutionEngine *engine() const { return m_engine; }
void drain();
enum class DrainState { Ongoing, Complete };
DrainState drain(QDeadlineTimer deadline);
void setSoftLimit(size_t size);
private:
Heap::Base *pop() { return *(--m_top); }
Heap::Base **m_top = nullptr;
Heap::Base **m_base = nullptr;
Heap::Base **m_softLimit = nullptr;
Heap::Base **m_hardLimit = nullptr;
ExecutionEngine *m_engine = nullptr;
quintptr m_drainRecursion = 0;
};
// Some helper to automate the generation of our
// functions used for marking objects
#define HEAP_OBJECT_OFFSET_MEMBER_EXPANSION(c, gcType, type, name) \
HEAP_OBJECT_OFFSET_MEMBER_EXPANSION_##gcType(c, type, name)
#define HEAP_OBJECT_OFFSET_MEMBER_EXPANSION_Pointer(c, type, name) Pointer<type, 0> name;
#define HEAP_OBJECT_OFFSET_MEMBER_EXPANSION_NoMark(c, type, name) type name;
#define HEAP_OBJECT_OFFSET_MEMBER_EXPANSION_HeapValue(c, type, name) HeapValue<0> name;
#define HEAP_OBJECT_OFFSET_MEMBER_EXPANSION_ValueArray(c, type, name) type<0> name;
#define HEAP_OBJECT_MEMBER_EXPANSION(c, gcType, type, name) \
HEAP_OBJECT_MEMBER_EXPANSION_##gcType(c, type, name)
#define HEAP_OBJECT_MEMBER_EXPANSION_Pointer(c, type, name) \
Pointer<type, offsetof(c##OffsetStruct, name) + baseOffset> name;
#define HEAP_OBJECT_MEMBER_EXPANSION_NoMark(c, type, name) \
type name;
#define HEAP_OBJECT_MEMBER_EXPANSION_HeapValue(c, type, name) \
HeapValue<offsetof(c##OffsetStruct, name) + baseOffset> name;
#define HEAP_OBJECT_MEMBER_EXPANSION_ValueArray(c, type, name) \
type<offsetof(c##OffsetStruct, name) + baseOffset> name;
#define HEAP_OBJECT_MARKOBJECTS_EXPANSION(c, gcType, type, name) \
HEAP_OBJECT_MARKOBJECTS_EXPANSION_##gcType(c, type, name)
#define HEAP_OBJECT_MARKOBJECTS_EXPANSION_Pointer(c, type, name) \
if (o->name) o->name.heapObject()->mark(stack);
#define HEAP_OBJECT_MARKOBJECTS_EXPANSION_NoMark(c, type, name)
#define HEAP_OBJECT_MARKOBJECTS_EXPANSION_HeapValue(c, type, name) \
o->name.mark(stack);
#define HEAP_OBJECT_MARKOBJECTS_EXPANSION_ValueArray(c, type, name) \
o->name.mark(stack);
#define DECLARE_HEAP_OBJECT_BASE(name, base) \
struct name##OffsetStruct { \
name##Members(name, HEAP_OBJECT_OFFSET_MEMBER_EXPANSION) \
}; \
struct name##SizeStruct : base, name##OffsetStruct {}; \
struct name##Data { \
typedef base SuperClass; \
static constexpr size_t baseOffset = sizeof(name##SizeStruct) - sizeof(name##OffsetStruct); \
name##Members(name, HEAP_OBJECT_MEMBER_EXPANSION) \
}; \
Q_STATIC_ASSERT(sizeof(name##SizeStruct) == sizeof(name##Data) + name##Data::baseOffset); \
#define DECLARE_HEAP_OBJECT(name, base) \
DECLARE_HEAP_OBJECT_BASE(name, base) \
struct name : base, name##Data
#define DECLARE_EXPORTED_HEAP_OBJECT(name, base) \
DECLARE_HEAP_OBJECT_BASE(name, base) \
struct Q_QML_EXPORT name : base, name##Data
#define DECLARE_MARKOBJECTS(class) \
static void markObjects(Heap::Base *b, MarkStack *stack) { \
class *o = static_cast<class *>(b); \
class##Data::SuperClass::markObjects(o, stack); \
class##Members(class, HEAP_OBJECT_MARKOBJECTS_EXPANSION) \
}
}
QT_END_NAMESPACE
#endif
@@ -0,0 +1,361 @@
// Copyright (C) 2022 The Qt Company Ltd.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR LGPL-3.0-only OR GPL-2.0-only OR GPL-3.0-only
// Qt-Security score:critical reason:low-level-memory-management
#include <private/qv4stacklimits_p.h>
#include <private/qobject_p.h>
#include <private/qthread_p.h>
#include <QtCore/qfile.h>
#if defined(Q_OS_UNIX)
# include <pthread.h>
#endif
#ifdef Q_OS_WIN
# include <QtCore/qt_windows.h>
#elif defined(Q_OS_FREEBSD_KERNEL) || defined(Q_OS_OPENBSD)
# include <pthread_np.h>
#elif defined(Q_OS_LINUX)
# include <unistd.h>
# include <sys/resource.h> // for getrlimit()
# include <sys/syscall.h> // for SYS_gettid
# if defined(__GLIBC__) && QT_CONFIG(dlopen)
# include <dlfcn.h>
# endif
#elif defined(Q_OS_DARWIN)
# include <sys/resource.h> // for getrlimit()
#elif defined(Q_OS_QNX)
# include <devctl.h>
# include <sys/procfs.h>
# include <sys/types.h>
# include <unistd.h>
# include <fcntl.h>
#elif defined(Q_OS_INTEGRITY)
# include <INTEGRITY.h>
#elif defined(Q_OS_VXWORKS)
# include <taskLib.h>
#elif defined(Q_OS_WASM)
# include <emscripten/stack.h>
#endif
QT_BEGIN_NAMESPACE
namespace QV4 {
enum StackDefaults : qsizetype {
// Default safety margin at the end of the usable stack.
// Since we don't check the stack on every instruction, we might overrun our soft limit.
DefaultSafetyMargin = 128 * 1024,
#if defined(Q_OS_IOS)
PlatformStackSize = 1024 * 1024,
PlatformSafetyMargin = DefaultSafetyMargin,
#elif defined(Q_OS_MACOS)
PlatformStackSize = 8 * 1024 * 1024,
PlatformSafetyMargin = DefaultSafetyMargin,
#elif defined(Q_OS_ANDROID)
// Android appears to have 1MB stacks.
PlatformStackSize = 1024 * 1024,
PlatformSafetyMargin = DefaultSafetyMargin,
#elif defined(Q_OS_LINUX)
// On linux, we assume 8MB stacks if rlimit doesn't work.
PlatformStackSize = 8 * 1024 * 1024,
PlatformSafetyMargin = DefaultSafetyMargin,
#elif defined(Q_OS_QNX)
// QNX's stack is only 512k by default
PlatformStackSize = 512 * 1024,
PlatformSafetyMargin = DefaultSafetyMargin,
#else
// We try to claim 512k if we don't know anything else.
PlatformStackSize = 512 * 1024,
PlatformSafetyMargin = DefaultSafetyMargin,
#endif
};
static StackProperties createStackProperties(void *base, qsizetype size = PlatformStackSize, qsizetype margin = PlatformSafetyMargin)
{
return StackProperties {
base,
incrementStackPointer(base, size - margin),
incrementStackPointer(base, size),
};
}
#if defined(Q_OS_DARWIN) || defined(Q_OS_LINUX)
// On linux and darwin, on the main thread, the pthread functions
// may not return the true stack size since the main thread stack
// may grow. Use rlimit instead. rlimit does not work for secondary
// threads, though. If getrlimit fails, we assume the platform
// stack size.
static qsizetype getMainStackSizeFromRlimit()
{
rlimit limit;
return (getrlimit(RLIMIT_STACK, &limit) == 0 && limit.rlim_cur != RLIM_INFINITY)
? qsizetype(limit.rlim_cur)
: qsizetype(PlatformStackSize);
}
#endif
#if defined(Q_OS_INTEGRITY)
StackProperties stackProperties()
{
Address stackLow, stackHigh;
CheckSuccess(GetTaskStackLimits(CurrentTask(), &stackLow, &stackHigh));
# if Q_STACK_GROWTH_DIRECTION < 0
return createStackProperties(reinterpret_cast<void *>(stackHigh), stackHigh - stackLow);
# else
return createStackProperties(reinterpret_cast<void *>(stackLow), stackHigh - stackLow);
# endif
}
#elif defined(Q_OS_DARWIN)
StackProperties stackProperties()
{
pthread_t thread = pthread_self();
return createStackProperties(
pthread_get_stackaddr_np(thread),
pthread_main_np()
? getMainStackSizeFromRlimit()
: qsizetype(pthread_get_stacksize_np(thread)));
}
#elif defined(Q_OS_WIN)
static_assert(Q_STACK_GROWTH_DIRECTION < 0);
StackProperties stackProperties()
{
// MinGW complains about out of bounds array access in compiler headers
QT_WARNING_PUSH
QT_WARNING_DISABLE_GCC("-Warray-bounds")
// Get the stack base.
# ifdef _WIN64
PNT_TIB64 pTib = reinterpret_cast<PNT_TIB64>(NtCurrentTeb());
# else
PNT_TIB pTib = reinterpret_cast<PNT_TIB>(NtCurrentTeb());
# endif
QT_WARNING_POP
quint8 *stackBase = reinterpret_cast<quint8 *>(pTib->StackBase);
// Get the stack limit. tib->StackLimit is the size of the
// currently mapped stack. The address space is larger.
MEMORY_BASIC_INFORMATION mbi = {};
if (!VirtualQuery(&mbi, &mbi, sizeof(mbi)))
qFatal("Could not retrieve memory information for stack.");
quint8 *stackLimit = reinterpret_cast<quint8 *>(mbi.AllocationBase);
return createStackProperties(stackBase, qsizetype(stackBase - stackLimit));
}
#elif defined(Q_OS_OPENBSD)
StackProperties stackProperties()
{
// From the OpenBSD docs:
//
// The pthread_stackseg_np() function returns information about the given thread's stack.
// A stack_t is the same as a struct sigaltstack (see sigaltstack(2)) except the ss_sp
// variable points to the top of the stack instead of the base.
//
// Since the example in the sigaltstack(2) documentation shows ss_sp being assigned the result
// of a malloc() call, we can assume that "top of the stack" means "the highest address", not
// the logical top of the stack.
stack_t ss;
if (pthread_stackseg_np(pthread_self(), &ss) != 0)
qFatal("Could not retrieve stack segment for current thread");
#if Q_STACK_GROWTH_DIRECTION < 0
return createStackProperties(ss.ss_sp);
#else
return createStackProperties(decrementStackPointer(ss.ss_sp, ss.ss_size));
#endif
}
#elif defined(Q_OS_QNX)
StackProperties stackProperties()
{
const auto tid = pthread_self();
procfs_status status;
status.tid = tid;
const int fd = open("/proc/self/ctl", O_RDONLY);
if (fd == -1)
qFatal("Could not open /proc/self/ctl");
const auto guard = qScopeGuard([fd]() { close(fd); });
if (devctl(fd, DCMD_PROC_TIDSTATUS, &status, sizeof(status), 0) != EOK)
qFatal("Could not query thread status for current thread");
if (status.tid != tid)
qFatal("Thread status query returned garbage");
#if Q_STACK_GROWTH_DIRECTION < 0
return createStackProperties(
decrementStackPointer(reinterpret_cast<void *>(status.stkbase), status.stksize),
status.stksize);
#else
return createStackProperties(reinterpret_cast<void *>(status.stkbase), status.stksize);
#endif
}
#elif defined(Q_OS_WASM)
StackProperties stackProperties()
{
const uintptr_t base = emscripten_stack_get_base();
const uintptr_t end = emscripten_stack_get_end();
const size_t size = base - end;
return createStackProperties(reinterpret_cast<void *>(base), size);
}
#elif defined(Q_OS_VXWORKS)
StackProperties stackProperties()
{
TASK_DESC taskDescription;
taskInfoGet(taskIdSelf(), &taskDescription);
return createStackProperties(taskDescription.td_pStackBase, taskDescription.td_stackSize,
taskDescription.td_stackSize / 8);
}
#else
StackProperties stackPropertiesGeneric(qsizetype stackSize = 0)
{
// If stackSize is given, do not trust the stack size returned by pthread_attr_getstack
pthread_t thread = pthread_self();
pthread_attr_t sattr;
# if defined(PTHREAD_NP_H) || defined(_PTHREAD_NP_H_) || defined(Q_OS_NETBSD)
pthread_attr_init(&sattr);
pthread_attr_get_np(thread, &sattr);
# else
pthread_getattr_np(thread, &sattr);
# endif
// pthread_attr_getstack returns the address of the memory region, which is the physical
// base of the stack, not the logical one.
void *stackBase;
size_t regionSize;
int rc = pthread_attr_getstack(&sattr, &stackBase, &regionSize);
pthread_attr_destroy(&sattr);
if (rc)
qFatal("Cannot find stack base");
# if Q_STACK_GROWTH_DIRECTION < 0
stackBase = decrementStackPointer(stackBase, regionSize);
# endif
return createStackProperties(stackBase, stackSize ? stackSize : regionSize);
}
#if defined(Q_OS_LINUX)
static void *stackBaseFromLibc()
{
#if defined(__GLIBC__) && QT_CONFIG(dlopen)
void **libcStackEnd = static_cast<void **>(dlsym(RTLD_DEFAULT, "__libc_stack_end"));
if (!libcStackEnd)
return nullptr;
if (void *stackBase = *libcStackEnd)
return stackBase;
#endif
return nullptr;
}
struct StackSegment {
quintptr base;
quintptr limit;
};
static StackSegment stackSegmentFromProc()
{
QFile maps(QStringLiteral("/proc/self/maps"));
if (!maps.open(QIODevice::ReadOnly))
return {0, 0};
const quintptr stackAddr = reinterpret_cast<quintptr>(&maps);
char buffer[1024];
while (true) {
const qint64 length = maps.readLine(buffer, 1024);
if (length <= 0)
break;
const QByteArrayView line(buffer, length);
bool ok = false;
const qsizetype boundary = line.indexOf('-');
if (boundary < 0)
continue;
const quintptr base = line.sliced(0, boundary).toULongLong(&ok, 16);
if (!ok || base > stackAddr)
continue;
const qsizetype end = line.indexOf(' ', boundary);
if (end < 0)
continue;
const quintptr limit = line.sliced(boundary + 1, end - boundary - 1).toULongLong(&ok, 16);
if (!ok || limit <= stackAddr)
continue;
return {base, limit};
}
return {0, 0};
}
StackProperties stackProperties()
{
if (getpid() != static_cast<pid_t>(syscall(SYS_gettid)))
return stackPropertiesGeneric();
// On linux (including android), the pthread functions are expensive
// and unreliable on the main thread.
// First get the stack size from rlimit
const qsizetype stackSize = getMainStackSizeFromRlimit();
// If we have glibc and libdl, we can query a special symbol in glibc to find the base.
// That is extremely cheap, compared to all other options.
if (stackSize) {
if (void *base = stackBaseFromLibc())
return createStackProperties(base, stackSize);
}
// Try to read the stack segment from /proc/self/maps if possible.
const StackSegment segment = stackSegmentFromProc();
if (segment.base) {
# if Q_STACK_GROWTH_DIRECTION > 0
void *stackBase = reinterpret_cast<void *>(segment.base);
# else
void *stackBase = reinterpret_cast<void *>(segment.limit);
# endif
return createStackProperties(
stackBase, stackSize ? stackSize : segment.limit - segment.base);
}
// If we can't read /proc/self/maps, use the pthread functions after all, but
// override the stackSize. The main thread can grow its stack, and the pthread
// functions typically return the currently allocated stack size.
return stackPropertiesGeneric(stackSize);
}
#else // Q_OS_LINUX
StackProperties stackProperties() { return stackPropertiesGeneric(); }
#endif // Q_OS_LINUX
#endif
} // namespace QV4
QT_END_NAMESPACE
@@ -0,0 +1,99 @@
// Copyright (C) 2022 The Qt Company Ltd.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR LGPL-3.0-only OR GPL-2.0-only OR GPL-3.0-only
// Qt-Security score:critical reason:low-level-memory-management
#ifndef QV4STACKLIMITS_P_H
#define QV4STACKLIMITS_P_H
//
// W A R N I N G
// -------------
//
// This file is not part of the Qt API. It exists purely as an
// implementation detail. This header file may change from version to
// version without notice, or even be removed.
//
// We mean it.
//
#include <private/qtqmlglobal_p.h>
#ifndef Q_STACK_GROWTH_DIRECTION
# ifdef Q_PROCESSOR_HPPA
# define Q_STACK_GROWTH_DIRECTION (1)
# else
# define Q_STACK_GROWTH_DIRECTION (-1)
# endif
#endif
QT_BEGIN_NAMESPACE
namespace QV4 {
// We may not be able to take the negative of the type
// used to represent stack size, but we can always add
// or subtract it to/from a quint8 pointer.
template<typename Size>
static const void *incrementStackPointer(const void *base, Size amount)
{
#if Q_STACK_GROWTH_DIRECTION > 0
return static_cast<const quint8 *>(base) + amount;
#else
return static_cast<const quint8 *>(base) - amount;
#endif
}
template<typename Size>
static void *decrementStackPointer(void *base, Size amount)
{
#if Q_STACK_GROWTH_DIRECTION > 0
return static_cast<quint8 *>(base) - amount;
#else
return static_cast<quint8 *>(base) + amount;
#endif
}
// Note: This does not return a completely accurate stack pointer.
// Depending on whether this function is inlined or not, we may get the address of
// this function's stack frame or the caller's stack frame.
// Always use a safety margin when determining stack limits.
inline const void *currentStackPointer()
{
// TODO: How often do we actually need the assembler mess below? Is that worth it?
void *stackPointer;
#if defined(Q_CC_GNU) || __has_builtin(__builtin_frame_address)
stackPointer = __builtin_frame_address(0);
#elif defined(Q_CC_MSVC)
stackPointer = &stackPointer;
#elif defined(Q_PROCESSOR_X86_64)
__asm__ __volatile__("movq %%rsp, %0" : "=r"(stackPointer) : :);
#elif defined(Q_PROCESSOR_X86)
__asm__ __volatile__("movl %%esp, %0" : "=r"(stackPointer) : :);
#elif defined(Q_PROCESSOR_ARM_64) && defined(__ILP32__)
quint64 stackPointerRegister = 0;
__asm__ __volatile__("mov %0, sp" : "=r"(stackPointerRegister) : :);
stackPointer = reinterpret_cast<void *>(stackPointerRegister);
#elif defined(Q_PROCESSOR_ARM_64) || defined(Q_PROCESSOR_ARM_32)
__asm__ __volatile__("mov %0, sp" : "=r"(stackPointer) : :);
#else
stackPointer = &stackPointer;
#endif
return stackPointer;
}
struct StackProperties
{
const void *base = nullptr;
const void *softLimit = nullptr;
const void *hardLimit = nullptr;
};
StackProperties stackProperties();
} // namespace QV4
QT_END_NAMESPACE
#endif // QV4STACKLIMITS_P_H
@@ -0,0 +1,37 @@
// Copyright (C) 2023 The Qt Company Ltd.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR LGPL-3.0-only OR GPL-2.0-only OR GPL-3.0-only
// Qt-Security score:critical reason:low-level-memory-management
#include <private/qv4value_p.h>
#include <private/qv4mm_p.h>
QT_BEGIN_NAMESPACE
namespace {
void markHeapBase(QV4::MarkStack *markStack, QV4::Heap::Base *base)
{
if (!base)
return;
base->mark(markStack);
}
}
namespace QV4 {
void WriteBarrier::write_slowpath(EngineBase *engine, Heap::Base *base, ReturnedValue *slot, ReturnedValue value)
{
Q_UNUSED(base);
Q_UNUSED(slot);
MarkStack * markStack = engine->memoryManager->markStack();
if constexpr (isInsertionBarrier)
markHeapBase(markStack, Value::fromReturnedValue(value).heapObject());
}
void WriteBarrier::write_slowpath(EngineBase *engine, Heap::Base *base, Heap::Base **slot, Heap::Base *value)
{
Q_UNUSED(base);
Q_UNUSED(slot);
MarkStack * markStack = engine->memoryManager->markStack();
if constexpr (isInsertionBarrier)
markHeapBase(markStack, value);
}
}
QT_END_NAMESPACE
@@ -0,0 +1,132 @@
// Copyright (C) 2016 The Qt Company Ltd.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR LGPL-3.0-only OR GPL-2.0-only OR GPL-3.0-only
// Qt-Security score:critical reason:low-level-memory-management
#ifndef QV4WRITEBARRIER_P_H
#define QV4WRITEBARRIER_P_H
//
// W A R N I N G
// -------------
//
// This file is not part of the Qt API. It exists purely as an
// implementation detail. This header file may change from version to
// version without notice, or even be removed.
//
// We mean it.
//
#include <private/qv4global_p.h>
#include <private/qv4enginebase_p.h>
QT_BEGIN_NAMESPACE
namespace QV4 {
struct EngineBase;
typedef quint64 ReturnedValue;
struct WriteBarrier {
static constexpr bool isInsertionBarrier = true;
Q_ALWAYS_INLINE static void write(EngineBase *engine, Heap::Base *base, ReturnedValue *slot, ReturnedValue value)
{
if (engine->isGCOngoing)
write_slowpath(engine, base, slot, value);
*slot = value;
}
Q_QML_EXPORT Q_NEVER_INLINE static void write_slowpath(
EngineBase *engine, Heap::Base *base,
ReturnedValue *slot, ReturnedValue value);
Q_ALWAYS_INLINE static void write(EngineBase *engine, Heap::Base *base, Heap::Base **slot, Heap::Base *value)
{
if (engine->isGCOngoing)
write_slowpath(engine, base, slot, value);
*slot = value;
}
Q_QML_EXPORT Q_NEVER_INLINE static void write_slowpath(
EngineBase *engine, Heap::Base *base,
Heap::Base **slot, Heap::Base *value);
// MemoryManager isn't a complete type here, so make Engine a template argument
// so that we can still call engine->memoryManager->markStack()
template<typename F, typename Engine = EngineBase>
static void markCustom(Engine *engine, F &&markFunction) {
if (engine->isGCOngoing)
(std::forward<F>(markFunction))(engine->memoryManager->markStack());
}
// HeapObjectWrapper(Base) are helper classes to ensure that
// we always use a WriteBarrier when setting heap-objects
// they are also trivial; if triviality is not required, use Pointer instead
struct HeapObjectWrapperBase
{
// enum class avoids accidental construction via brace-init
enum class PointerWrapper : quintptr {};
PointerWrapper wrapped;
void clear() { wrapped = PointerWrapper(quintptr(0)); }
};
template<typename HeapType>
struct HeapObjectWrapperCommon : HeapObjectWrapperBase
{
HeapType *get() const { return reinterpret_cast<HeapType *>(wrapped); }
operator HeapType *() const { return get(); }
HeapType * operator->() const { return get(); }
template <typename ConvertibleToHeapType>
void set(QV4::EngineBase *engine, ConvertibleToHeapType *heapObject)
{
WriteBarrier::markCustom(engine, [heapObject](QV4::MarkStack *ms){
if (heapObject)
heapObject->mark(ms);
});
wrapped = static_cast<HeapObjectWrapperBase::PointerWrapper>(quintptr(heapObject));
}
};
// all types are trivial; we however want to block copies bypassing the write barrier.
// Therefore, all members use a PhantomTag to reduce the likelihood of
// accidental assignment between unrelated wrappers.
template<typename HeapType, int PhantomTag>
struct HeapObjectWrapper : HeapObjectWrapperCommon<HeapType> {};
/* Similar to Heap::Pointer, but without the Base conversion (and its inUse assert),
and for storing references in engine classes stored on the native heap.
Stores a "non-owning" reference to a heap-item (in the C++ sense), but should
generally mark the heap-item; therefore set goes through a write-barrier.
*/
template<typename T>
struct Pointer
{
Pointer() = default;
~Pointer() = default;
Q_DISABLE_COPY_MOVE(Pointer)
T* operator->() const { return get(); }
operator T* () const { return get(); }
void set(EngineBase *e, T *newVal) {
WriteBarrier::markCustom(e, [newVal](QV4::MarkStack *ms) {
if (newVal)
newVal->mark(ms);
});
ptr = newVal;
}
T* get() const { return ptr; }
private:
T *ptr = nullptr;
};
};
// ### this needs to be filled with a real memory fence once marking is concurrent
Q_ALWAYS_INLINE void fence() {}
}
QT_END_NAMESPACE
#endif