Advance Wayland and KDE package bring-up

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-04-14 10:51:06 +01:00
parent 51f3c21121
commit cf12defd28
15214 changed files with 20594243 additions and 269 deletions
@@ -0,0 +1 @@
add_subdirectory(kioworker6)
@@ -0,0 +1,31 @@
### KApiDox Project-specific Overrides File
# workers are plugins and not part of the API
EXCLUDE_PATTERNS += \
*/kioworkers/*
# define so that deprecated API is not skipped
PREDEFINED += \
"KIOCORE_ENABLE_DEPRECATED_SINCE(x, y)=1" \
"KIOCORE_BUILD_DEPRECATED_SINCE(x, y)=1" \
"KIOCORE_DEPRECATED_VERSION(x, y, t)=" \
"KIOCORE_DEPRECATED_VERSION_BELATED(x, y, xt, yt, t)=" \
"KIOCORE_ENUMERATOR_DEPRECATED_VERSION(x, y, t)=" \
"KIOCORE_ENUMERATOR_DEPRECATED_VERSION_BELATED(x, y, xt, yt, t)=" \
"KIOGUI_ENABLE_DEPRECATED_SINCE(x, y)=1" \
"KIOGUI_BUILD_DEPRECATED_SINCE(x, y)=1" \
"KIOGUI_DEPRECATED_VERSION(x, y, t)=" \
"KIOGUI_DEPRECATED_VERSION_BELATED(x, y, xt, yt, t)=" \
"KIOGUI_ENUMERATOR_DEPRECATED_VERSION(x, y, t)=" \
"KIOGUI_ENUMERATOR_DEPRECATED_VERSION_BELATED(x, y, xt, yt, t)=" \
"KIOWIDGETS_ENABLE_DEPRECATED_SINCE(x, y)=1" \
"KIOWIDGETS_BUILD_DEPRECATED_SINCE(x, y)=1" \
"KIOWIDGETS_DEPRECATED_VERSION(x, y, t)=" \
"KIOWIDGETS_DEPRECATED_VERSION_BELATED(x, y, xt, yt, t)=" \
"KIOWIDGETS_ENUMERATOR_DEPRECATED_VERSION(x, y, t)=" \
"KIOWIDGETS_ENUMERATOR_DEPRECATED_VERSION_BELATED(x, y, xt, yt, t)=" \
"KIOFILEWIDGETS_ENABLE_DEPRECATED_SINCE(x, y)=1" \
"KIOFILEWIDGETS_BUILD_DEPRECATED_SINCE(x, y)=1" \
"KIOFILEWIDGETS_DEPRECATED_VERSION(x, y, t)=" \
"KIOFILEWIDGETS_DEPRECATED_VERSION_BELATED(x, y, xt, yt, t)=" \
"KIOFILEWIDGETS_ENUMERATOR_DEPRECATED_VERSION(x, y, t)=" \
"KIOFILEWIDGETS_ENUMERATOR_DEPRECATED_VERSION_BELATED(x, y, xt, yt, t)="
@@ -0,0 +1,203 @@
DESIGN:
=======
The KIO framework uses workers (separate processes) that handle a given protocol.
Launching those workers is taken care of by the kdeinit/klauncher tandem,
which are notified via D-Bus. (TODO: update now that klauncher has been removed, also below)
Connection is the most low-level class, the one that encapsulates the pipe.
WorkerInterface is the main class for transferring anything to the worker,
and Worker, which inherits WorkerInterface, is the subclass that Job should handle.
A worker inherits WorkerBase, which is the other half of WorkerInterface.
The scheduling is supposed to work on two levels. One is in the daemon
and one is in the application. The daemon one (as opposed to the holy one? :)
will determine how many workers this app may have open, and it will
also assign tasks to the existing workers.
The application will still have some kind of a scheduler, but it should be
a lot simpler as it doesn't have to decide anything besides which
task goes to which pool of workers (grouped by protocol/host/user/port)
and moving tasks around.
A design study (to give it a cool name) currently lives in scheduler.cpp,
on the application side. This is just to test other things like recursive jobs
and signals/slots within WorkerInterface. If someone feels brave, the scheduler
is yours!
On second thought: at the daemon side there is no real scheduler, but a
pool of workers. So what we need is some kind of load calculation of the
scheduler in the application and load balancing in the daemon.
A third thought: Maybe the daemon can just take care of a number of 'unused'
workers. When an application needs a worker, it can request it from the daemon.
The application will get one, either from the pool of unused workers,
or a new one will be created. This keeps things simple at the daemon level.
It is up to the application to give the workers back to the daemon.
The scheduler in the application must take care not to request too many
workers and could implement priorities.
Thought on usage:
* Typically a single worker type is used exclusively in one application. E.g.
http workers are used in a web browser, POP3 workers in a mail program.
* Sometimes a single program can have multiple roles. E.g. konqueror is
both a web browser and a file manager. As a web browser it primarily uses
http workers; as a file manager, file workers.
* Selecting a link in konqueror: konqueror does a partial download of
the file to check the MIME type (right??); then the application is
started, which downloads the complete file. In this case it should
be possible to pass the worker that did the partial download from konqueror
to the application, where it can do the complete download.
Do we need to have a hard limit on the number of workers per host?
It seems so, because some protocols fail if you
have two workers running in parallel (e.g. POP3).
This has to be implemented in the daemon, because only at daemon
level are all the workers known. As a consequence, workers must
be returned to the daemon before connecting to another host.
(Returning the workers to the daemon after every job is not
strictly needed and only causes extra overhead.)
Instead of actually returning the worker to the daemon, it could
be enough to ask 'recycling permission' from the daemon: the
application asks the daemon whether it is ok to use a worker for
another host. The daemon can then update its administration of
which worker is connected to which host.
The above does of course not apply to hostless protocols (like file).
(They will never change host).
Apart from a 'hard limit' on the number of workers per host we can have
a 'soft limit'. E.g. upon connection to an HTTP 1.1 server, the web
server tells the worker the number of parallel connections allowed.
The simplest solution seems to be to treat 'soft limits' the same
as 'hard limits'. This means that the worker has to communicate the
'soft limit' to the daemon.
Jobs using multiple workers.
If a job needs multiple workers in parallel (e.g. copying a file from
a web server to an FTP server, or browsing a tar file on an FTP site)
we must request all the workers from the daemon together, since
otherwise there is a risk of deadlock.
(If two applications both need a 'pop3' and a 'ftp' worker for a single
job and only a single worker/host is allowed for pop3 and ftp, we must
prevent giving the single pop3 worker to application #1 and the single
ftp worker to application #2. Both applications would then wait until the
end of time for the other worker so that they can start the
job. (This is a quite unlikely situation, but nevertheless possible.))
File Operations:
listRecursive is implemented as listDir plus checking whether the result
contains a directory. If it does, another listDir job is issued. As listDir
is a read-only operation, it fails when a directory isn't readable
... but the main job goes on and discards the error, because
bIgnoreSubJobsError is true, which is what we want. (David)
del is implemented as listRecursive, removing all files and then removing all
empty directories. This basically means that if one directory isn't readable
we don't remove it, as listRecursive didn't find it. But del will later
try to remove its parent directory and fail. There are cases where
it would be possible to delete the dir by chmod'ing it first. On the
other hand, del("/") shouldn't list the whole file system and remove all
user-owned files just to find out it can't remove everything else (this
basically means we have to check what we can remove before we try).
... Well, rm -rf / refuses to do anything, so we should just do the same:
use a listRecursive with bIgnoreSubJobsError = false. If anything can't
be removed, we just abort. (David)
... My concern was more that the fact that we can list / doesn't mean we can
remove it. So we shouldn't remove everything we could list without checking
that we can. But then the question arises: how do we check whether we can remove it?
(Stephan)
... I was wrong, rm -rf /, even as a user, lists everything and removes
everything it can (don't try this at home!). I don't think we can do
better, unless we add a protocol-dependent "canDelete(path)", which is
_really_ not easy to implement, whatever the protocol. (David)
Lib docu
========
mkdir: ...
rmdir: ...
chmod: ...
special: ...
stat: ...
get is implemented as TransferJob. Clients get 'data' signals with the data.
A data block of zero size indicates end of data (EOD)
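The zero-size-block convention for 'get' can be sketched as a consumer loop. This is a hypothetical illustration using plain standard-library types, not the actual KIO signal/slot plumbing:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the 'get' convention above: the client keeps
// appending data blocks and treats an empty block as end of data (EOD).
std::string collectGet(const std::vector<std::string> &blocks) {
    std::string document;
    for (const auto &block : blocks) {
        if (block.empty())      // zero-size block signals EOD
            break;
        document += block;
    }
    return document;
}
```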
put is implemented as TransferJob. Clients have to connect to the
'dataReq' signal. The worker will call you when it needs your data.
mimetype: ...
file_copy: copies a single file, either using CMD_COPY if the worker
supports that or get & put otherwise.
file_move: moves a single file, either using CMD_RENAME if the worker
           supports that, CMD_COPY + del otherwise, or, failing that,
           get & put & del.
file_delete: delete a single file.
copy: copies a file or directory, recursively if the latter
move: moves a file or directory, recursively if the latter
del: deletes a file or directory, recursively if the latter
Resuming
--------
If a .part file exists, KIO offers to resume the download.
This requires negotiation between the worker that reads
(handled by the get job) and the worker that writes
(handled by the put job).
Here's how the negotiation goes.
(PJ=put-job, GJ=get-job)
PJ can't resume:
PJ-->app: canResume(0) (emitted by dataReq)
GJ-->app: data()
PJ-->app: dataReq()
app->PJ: data()
PJ can resume but GJ can't resume:
PJ-->app: canResume(xx)
app->GJ: start job with "resume=xxx" metadata.
GJ-->app: data()
PJ-->app: dataReq()
app->PJ: data()
PJ can resume and GJ can resume:
PJ-->app: canResume(xx)
app->GJ: start job with "resume=xxx" metadata.
GJ-->app: canResume(xx)
GJ-->app: data()
PJ-->app: dataReq()
app->PJ: canResume(xx)
app->PJ: data()
So when the worker supports resume for "put", it has to check after the first
dataReq() whether it got a canResume() back from the app. If it did,
it must resume. Otherwise it must start from 0.
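The rule in the last paragraph boils down to a small decision. The sketch below is hypothetical (the function name and parameters are invented for illustration; the real negotiation happens via the signals shown above):

```cpp
#include <cstdint>

// Hypothetical sketch: after the first dataReq(), a resume-capable
// "put" worker checks whether the app answered with canResume(offset).
// If it did, writing continues at that offset; otherwise from 0.
std::uint64_t putStartOffset(bool workerSupportsResume,
                             bool gotCanResumeFromApp,
                             std::uint64_t offeredOffset) {
    if (workerSupportsResume && gotCanResumeFromApp)
        return offeredOffset;   // resume where the .part file ends
    return 0;                   // start from scratch
}
```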
Protocols
=========
Most KIO workers (but not all) implement internet protocols.
In this case, the worker name matches the URI scheme for the protocol.
A list of registered URI schemes, as per RFC 4395, can be found at:
https://www.iana.org/assignments/uri-schemes/uri-schemes.xhtml
@@ -0,0 +1,7 @@
add_subdirectory(data)
add_subdirectory(file)
add_subdirectory(ftp)
add_subdirectory(help)
add_subdirectory(http)
add_subdirectory(webdav)
@@ -0,0 +1,2 @@
########### install files ###############
kdoctools_create_handbook(index.docbook INSTALL_DESTINATION ${KDE_INSTALL_DOCBUNDLEDIR}/en SUBDIR kioworker6/data)
@@ -0,0 +1,54 @@
<?xml version="1.0" ?>
<!DOCTYPE article PUBLIC "-//KDE//DTD DocBook XML V4.5-Based Variant V1.1//EN"
"dtd/kdedbx45.dtd" [
<!ENTITY % addindex "IGNORE">
<!ENTITY % English "INCLUDE" > <!-- change language only here -->
]>
<article lang="&language;" id="data">
<title>Data &URL;s</title>
<articleinfo>
<authorgroup>
<author><personname><firstname>Leo</firstname><surname>Savernik</surname></personname>
<address><email>l.savernik@aon.at</email></address>
</author>
<!-- TRANS:ROLES_OF_TRANSLATORS -->
</authorgroup>
<date>2003-02-06</date>
<!--releaseinfo>2.20.00</releaseinfo-->
</articleinfo>
<para>Data URLs allow small document data to be included in the &URL; itself.
This is useful for very small &HTML; testcases or other occasions that do not
justify a document of their own.</para>
<para><userinput>data:,foobar</userinput>
(note the comma after the colon) will deliver a text document that contains
nothing but <literal>foobar</literal>.
</para>
<para>The last example delivered a text document. For &HTML; documents one
has to specify the &MIME; type <literal>text/html</literal>:
<quote><userinput>data:text/html,&lt;title&gt;Testcase&lt;/title&gt;&lt;p&gt;This
is a testcase&lt;/p&gt;</userinput></quote>. This will produce exactly the same
output as if the content had been loaded from a document of its own.
</para>
<para>Specifying alternate character sets is also possible. Note that 8-bit
characters have to be escaped with a percent sign followed by their two-digit
hexadecimal code:
<quote><userinput>data:;charset=iso-8859-1,Gr%FC%DFe aus Schl%E4gl</userinput></quote>
results in
<quote><literal>Gr&uuml;&szlig;e aus Schl&auml;gl</literal></quote>
whereas omitting the charset attribute might lead to something like
<quote><literal>Gr??e aus Schl?gl</literal></quote>.
</para>
<para><ulink url="https://www.ietf.org/rfc/rfc2397.txt">IETF
RFC2397</ulink> provides more information.</para>
</article>
@@ -0,0 +1,2 @@
########### install files ###############
kdoctools_create_handbook(index.docbook INSTALL_DESTINATION ${KDE_INSTALL_DOCBUNDLEDIR}/en SUBDIR kioworker6/file)
@@ -0,0 +1,27 @@
<?xml version="1.0" ?>
<!DOCTYPE article PUBLIC "-//KDE//DTD DocBook XML V4.5-Based Variant V1.1//EN"
"dtd/kdedbx45.dtd" [
<!ENTITY % addindex "IGNORE">
<!ENTITY % English "INCLUDE" > <!-- change language only here -->
]>
<article lang="&language;" id="file">
<title>file</title>
<articleinfo>
<authorgroup>
<author>&Ferdinand.Gassauer; &Ferdinand.Gassauer.mail;</author>
<!-- TRANS:ROLES_OF_TRANSLATORS -->
</authorgroup>
</articleinfo>
<para>
The <emphasis>file</emphasis> protocol is used by all &kde; applications to
display locally available files.
</para>
<para>
Entering
<userinput><command>file:/directoryname</command></userinput> in &konqueror;
lists the files of this folder.
</para>
</article>
@@ -0,0 +1,2 @@
########### install files ###############
kdoctools_create_handbook(index.docbook INSTALL_DESTINATION ${KDE_INSTALL_DOCBUNDLEDIR}/en SUBDIR kioworker6/ftp)
@@ -0,0 +1,50 @@
<?xml version="1.0" ?>
<!DOCTYPE article PUBLIC "-//KDE//DTD DocBook XML V4.5-Based Variant V1.1//EN"
"dtd/kdedbx45.dtd" [
<!ENTITY % addindex "IGNORE">
<!ENTITY % English "INCLUDE" > <!-- change language only here -->
]>
<article lang="&language;" id="ftp">
<title>&FTP;</title>
<articleinfo>
<authorgroup>
<author>&Lauri.Watts; &Lauri.Watts.mail;</author>
<!-- TRANS:ROLES_OF_TRANSLATORS -->
</authorgroup>
</articleinfo>
<para>
&FTP; is the Internet service used to transfer a data file from the disk of
one computer to the disk of another, regardless of the operating system type.
</para>
<para> Similar to other Internet applications, &FTP; uses the
client-server approach &mdash; a user invokes an &FTP; program on the
computer, instructs it to contact a remote computer, and then requests
the transfer of one or more files. The local &FTP; program becomes a
client that uses <acronym>TCP</acronym> to contact an &FTP; server
program on the remote computer. Each time the user requests a file
transfer, the client and the server programs cooperate to send a copy
of the data across the Internet.</para>
<para> &FTP; servers which allow <quote>anonymous &FTP;</quote> permit
any user, not only users with accounts on the host, to browse the
<quote>ftp</quote> archives and download files. Some &FTP; servers are
configured to allow users to upload files.</para>
<para>
&FTP; is commonly used to retrieve information and obtain software stored in
files at &FTP; archive sites throughout the world.
</para>
<para>
Source: Paraphrased from <ulink
url="http://tlc.nlm.nih.gov/resources/tutorials/internetdistlrn/ftpdef.htm">
http://tlc.nlm.nih.gov/resources/tutorials/internetdistlrn/ftpdef.htm</ulink>
</para>
<para> See the manual: <ulink url="man:/ftp">ftp</ulink>.</para>
</article>
@@ -0,0 +1,4 @@
########### install files ###############
kdoctools_create_handbook(index.docbook INSTALL_DESTINATION ${KDE_INSTALL_DOCBUNDLEDIR}/en SUBDIR kioworker6/help)
add_subdirectory(documentationnotfound)
@@ -0,0 +1,2 @@
########### install files ###############
kdoctools_create_handbook(index.docbook INSTALL_DESTINATION ${KDE_INSTALL_DOCBUNDLEDIR}/en SUBDIR kioworker6/help/documentationnotfound)
@@ -0,0 +1,77 @@
<?xml version="1.0" ?>
<!DOCTYPE article PUBLIC "-//KDE//DTD DocBook XML V4.5-Based Variant V1.1//EN"
"dtd/kdedbx45.dtd" [
<!ENTITY % addindex "IGNORE">
<!ENTITY % English "INCLUDE">
]>
<article id="documentationnotfound" lang="&language;">
<title>Documentation not Found</title>
<articleinfo>
<authorgroup>
<author><firstname>Jack</firstname>
<surname>Ostroff</surname>
<affiliation>
<address><email>ostroffjh@users.sourceforge.net</email></address>
</affiliation>
</author>
<!-- TRANS:ROLES_OF_TRANSLATORS -->
</authorgroup>
<date>2020-09-08</date>
<releaseinfo>Frameworks 5.73</releaseinfo>
</articleinfo>
<para>The requested documentation was not found on your computer.</para>
<para>The documentation may not exist, or it may not have been installed
with the application.</para>
<tip><para>
Please do not email the author of this page to find the missing document.
He will just tell you to follow the instructions on this page.
</para></tip>
<simplesect>
<title>How to solve this issue</title>
<para>If the application is &kde; software, first use the search function in
&khelpcenter;. In some cases, the documentation has a different name than the
one the software used to look for it. If that doesn't work, try searching the
<ulink url="https://docs.kde.org/">&kde; Documentation site</ulink> for the
requested documentation. If you find the documentation on that site, your
distribution might ship a separate package for documentation (&eg; called
plasma-doc for documentation related to &plasma;). Please use the package
manager of your distribution to find and install the missing
documentation.</para>
<para>If you use a source based distribution, such as Gentoo, be sure that
there are not any configuration settings (USE flags in Gentoo) that
might have disabled the installation of the documentation.
</para>
<para>If you have done that, but still get this page displayed instead of the
application handbook, you probably found a bug in the help
system. In this case, please report this on the <ulink
url="https://bugs.kde.org/">&kde; bug tracker</ulink> under the KIO product.
</para>
<para>If you do not find any documentation on the <ulink
url="https://docs.kde.org/">&kde; Documentation site</ulink>, the
application may not have offline documentation. Please report this on
the <ulink url="https://bugs.kde.org/">&kde; bug tracker</ulink> under the
product for the application.
</para>
<para>In case the application does not have offline documentation, you should
use the online resources <ulink
url="https://userbase.kde.org/">UserBase Documentation</ulink> and
<ulink url="https://forum.kde.org/">&kde; Community Forums</ulink> to get
help.
</para>
<para>For non-&kde; applications, please contact the application author to
determine whether there should be offline documentation available.</para>
</simplesect>
</article>
@@ -0,0 +1,24 @@
<?xml version="1.0" ?>
<!DOCTYPE article PUBLIC "-//KDE//DTD DocBook XML V4.5-Based Variant V1.1//EN"
"dtd/kdedbx45.dtd" [
<!ENTITY % addindex "IGNORE">
<!ENTITY % English "INCLUDE" > <!-- change language only here -->
]>
<article lang="&language;" id="help">
<title>help</title>
<articleinfo>
<authorgroup>
<author>&Ferdinand.Gassauer;&Ferdinand.Gassauer.mail;</author>
<!-- TRANS:ROLES_OF_TRANSLATORS -->
</authorgroup>
</articleinfo>
<para>
The help system of &kde;
</para>
<para>
See <ulink url="help:/khelpcenter/index.html">The &khelpcenter;</ulink>.
</para>
</article>
@@ -0,0 +1,2 @@
########### install files ###############
kdoctools_create_handbook(index.docbook INSTALL_DESTINATION ${KDE_INSTALL_DOCBUNDLEDIR}/en SUBDIR kioworker6/http)
@@ -0,0 +1,34 @@
<?xml version="1.0" ?>
<!DOCTYPE article PUBLIC "-//KDE//DTD DocBook XML V4.5-Based Variant V1.1//EN"
"dtd/kdedbx45.dtd" [
<!ENTITY % addindex "IGNORE">
<!ENTITY % English "INCLUDE" > <!-- change language only here -->
]>
<article lang="&language;" id="http">
<title>http / https</title>
<articleinfo>
<authorgroup>
<author>&Lauri.Watts; &Lauri.Watts.mail;</author>
<!-- TRANS:ROLES_OF_TRANSLATORS -->
</authorgroup>
</articleinfo>
<para>&HTTP; is the
<emphasis>H</emphasis>yper<emphasis>T</emphasis>ext
<emphasis>T</emphasis>ransfer <emphasis>P</emphasis>rotocol.</para>
<para>The http KIO worker is used by all &kde; applications to handle
connections to &HTTP; servers, that is, web servers. The most common
usage is to view web pages in the &konqueror; web browser.</para>
<para>You can use the https KIO worker in &konqueror; by giving it a &URL; like
<userinput>https://<replaceable>www.kde.org</replaceable></userinput>.</para>
<para>https is http encapsulated in an SSL/TLS stream.</para>
<para>
SSL is the Secure Sockets Layer protocol, a security protocol that provides communications privacy over the Internet. The protocol allows client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery.
</para>
<para>TLS stands for Transport Layer Security.</para>
</article>
@@ -0,0 +1,2 @@
########### install files ###############
kdoctools_create_handbook(index.docbook INSTALL_DESTINATION ${KDE_INSTALL_DOCBUNDLEDIR}/en SUBDIR kioworker6/webdav)
@@ -0,0 +1,72 @@
<?xml version="1.0" ?>
<!DOCTYPE article PUBLIC "-//KDE//DTD DocBook XML V4.5-Based Variant V1.1//EN"
"dtd/kdedbx45.dtd" [
<!ENTITY % addindex "IGNORE">
<!ENTITY % English "INCLUDE" > <!-- change language only here -->
]>
<article lang="&language;" id="webdav">
<title>webdav / webdavs</title>
<articleinfo>
<authorgroup>
<author>&Hamish.Rodda; &Hamish.Rodda.mail;</author>
<!-- TRANS:ROLES_OF_TRANSLATORS -->
</authorgroup>
<date>2002-01-21</date>
</articleinfo>
<para><acronym>WebDAV</acronym> is a <emphasis>D</emphasis>istributed
<emphasis>A</emphasis>uthoring and <emphasis>V</emphasis>ersioning
protocol for the World Wide Web. It allows for easy management of
documents and scripts on a &HTTP; server, and has
additional features designed to simplify version management amongst
multiple authors.</para>
<para>Usage of this protocol is simple. Type the location you want to
view, just like an &HTTP; &URL;, except with the
webdav:// protocol name at the start. An example is
<userinput>webdav://<replaceable>www.hostname.com/path/</replaceable></userinput>.
If you specify a folder name, a list of files and folders will be
displayed, and you can manipulate these folders and files just as you
would with any other filesystem.</para>
<variablelist>
<title>WebDAV Features</title>
<varlistentry>
<term>Locking</term>
<listitem>
<para>File locking allows users to lock a file, informing others that they
are currently working on this file. This way, editing can be done without
fear that the changes may be overwritten by another person who is also
editing the same document.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Source file access</term>
<listitem>
<para><acronym>WebDAV</acronym> allows access to the script which is called
to produce a specific page, so changes can be made to the script
itself.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Per-document property support</term>
<listitem>
<para>Arbitrary properties may be set to assist identification of a
document, such as the author.</para>
</listitem>
</varlistentry>
</variablelist>
<para>To take advantage of these additional capabilities, you will need an
application which supports them. No application currently supports them
through this KIO worker.</para>
<para><acronym>WebDAVS</acronym> is the <acronym>WebDAV</acronym> protocol encrypted via SSL.</para>
</article>
@@ -0,0 +1,28 @@
konq_run / krun should determine the mimetype by actually
getting the contents of the URL. It should then put the slave
on hold and tell the job-scheduler which request the
slave is currently handling.
Now krun/konq_run should determine which client should process the
result of the request.
* When the client belongs to the same process, no action needs to be
taken. When a new job is created for the request which is on hold the
existing slave will be re-used and the request resumed.
* When the client is an external process, the on-hold-slave should be
removed from the job-scheduler and should connect itself with
klauncher. This is hard because it must ensure that the external
program does not request the slave before it has been transferred to
klauncher.
* When a slave is on hold but not used for a certain period of time,
or, when another slave is put on hold, the slave should be killed.
=====
The slave must emit "mimetype" during a GET before the first data is sent.
It may delay sending "mimetype" until it has enough data to
determine the MIME type, but it should not pass any data along before it has
sent "mimetype".
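The buffer-until-type-is-known rule above can be sketched roughly as follows. This is a hypothetical illustration (the `GetBuffer` class and the 1024-byte sniff limit are invented for this sketch):

```cpp
#include <cstddef>
#include <string>

// Hypothetical sketch: a GET implementation buffers incoming bytes
// until it can determine the MIME type, signals the caller once to
// emit "mimetype", and only then starts passing data along.
class GetBuffer {
public:
    // Returns true exactly once, when enough data has been buffered to
    // sniff the type; the caller must then emit "mimetype" before any data.
    bool feed(const std::string &chunk, std::size_t sniffLimit = 1024) {
        if (m_typeSent)
            return false;
        m_buffer += chunk;
        if (m_buffer.size() >= sniffLimit) {
            m_typeSent = true;  // caller emits "mimetype" now
            return true;
        }
        return false;
    }

    bool typeSent() const { return m_typeSent; }
    const std::string &pending() const { return m_buffer; }

private:
    std::string m_buffer;       // data withheld until the type is known
    bool m_typeSent = false;
};
```

Once `feed()` returns true, the buffered data in `pending()` can finally be forwarded, preserving the rule that no data precedes "mimetype".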
@@ -0,0 +1,96 @@
METADATA
========
Applications can provide "metadata" to the workers. Metadata can influence
the behavior of a worker and is usually protocol dependent. MetaData consists
of two strings: a "key" and a "value".
Any metadata whose "key" starts with the keyword "{internal~currenthost}" or
"{internal~allhosts}" will be treated as internal metadata and will not be made
available to client applications. Instead, all such metadata will be stored and
sent back to the appropriate KIO workers along with the other regular metadata values.
Use "{internal~currenthost}" to make the internal metadata available to all
KIO workers of the same protocol and host as the workers that generated it. If
you do not want to restrict the availability of the internal metadata to only
the current host, then use "{internal~allhosts}". In either case the internal
metadata follows the rules of the regular metadata and therefore cannot be sent
from one protocol such as "http" to a completely different one like "ftp".
Please note that when internal metadata values are sent back to KIO workers, the
keyword used to mark them internal is stripped from the key name.
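The stripping rule described above can be sketched as a small helper. This is a hypothetical illustration (the function name is invented; the real handling lives inside KIO):

```cpp
#include <array>
#include <string>

// Hypothetical sketch: keys carrying the {internal~currenthost} or
// {internal~allhosts} marker are internal metadata; when such values
// are sent back to a worker, the marker is stripped from the key name.
std::string stripInternalMarker(const std::string &key) {
    static const std::array<std::string, 2> markers = {
        "{internal~currenthost}", "{internal~allhosts}"};
    for (const auto &marker : markers) {
        if (key.rfind(marker, 0) == 0)      // key starts with marker
            return key.substr(marker.size());
    }
    return key;  // regular metadata key: unchanged
}
```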
The following keys are currently in use:
Key Value(s) Description
---- -------- -----------
referrer string The URL from which the request originates. (read by http)
accept string List of MIME types to accept separated by a ", ". (read by http)
responsecode string Original response code of the web server. (set by http)
UserAgent string The user agent name to send to remote host (read by http)
content-type string The content type of the data to be uploaded (read and set by http)
window-id number winId() of the window the request is associated with.
range-start number Try to get the file starting at the given offset (set by file_copy when finding a .part file,
but can also be set by apps.)
range-end number Try to get the file up to the given offset (not set in kdelibs; handled by kio_http).
resume number Deprecated compatibility name for range-start
resume_until number Deprecated compatibility name for range-end
content-disposition-type string Type of Content-Disposition from an HTTP response header.
content-disposition-* any other valid value sent in a Content-Disposition header (e.g. filename)
cookies "manual" Cookies set in "setcookies" are sent; received cookies are reported
via "setcookies".
"none" No cookies are sent, received cookies are discarded (default).
setcookies string Used to send/receive HTTP cookies when "cookies" is set to "manual".
no-www-auth bool Flag that indicates that no HTTP WWW authentication attempts should be made.
no-proxy-auth bool Flag that indicates that no HTTP proxy authentication attempts should be made.
no-auth-prompt bool Flag that indicates that only cached authentication tokens should be used.
ssl_no_ui bool Flag to tell TCPWorkerBase that no user interaction should take place. Instead of asking security questions the connection will silently fail. This is of particular use to favicon code. (default: false)
PropagateHttpHeader bool Whether HTTP headers should be sent back (read by http)
HTTP-Headers string The HTTP headers, concatenated, \n delimited (set by http)
Requires PropagateHttpHeader to be set.
customHTTPHeader string Custom HTTP headers to add to the request (read by http)
textmode bool When true, switches FTP up/downloads to ASCII transfer mode (read by ftp)
recurse bool When true, del() will be able to delete non-empty directories. (read by file)
Otherwise, del() is supposed to give an error on non-empty directories.
DefaultRemoteProtocol string Protocol to redirect file://<hostname>/ URLs to, default is "smb" (read by file)
redirect-to-get bool If "true", changes a redirection request to a GET operation regardless of the original operation.
** NOTE: Anything in quotes ("") under Value(s) indicates a literal value.
Examples:
E.g. the following disables cookies:
job = KIO::get( QUrl("http://www.kde.org") );
job->addMetaData("cookies", "none");
If you want to handle cookies yourself, you can do:
job = KIO::get( QUrl("http://www.kde.org") );
job->addMetaData("cookies", "manual");
job->addMetaData("setcookies", "Cookie: foo=bar; gnat=gnork");
The above sends two cookies along with the request; any cookies sent back by
the server can be retrieved with job->queryMetaData("cookies") after
receiving the mimetype() signal or when the job is finished.
The cookiejar is not used in this case.
