diff --git a/developer/general/doc-guidelines.md b/developer/general/doc-guidelines.md
index c3ae2e7a..bea00fec 100644
--- a/developer/general/doc-guidelines.md
+++ b/developer/general/doc-guidelines.md
@@ -246,7 +246,7 @@ When making contributions, please try to observe the following style conventions
 * Use hanging indentations where appropriate.
 * Use underline headings (`=====` and `-----`) if possible.
-  If this is not possible, use Atx-style headings on both the left and right sides (`### H3 ###`).
+  If this is not possible, use Atx-style headings (`### H3 ###`).
 * When writing code blocks, use [syntax highlighting](https://github.github.com/gfm/#info-string) where [possible](https://github.com/jneen/rouge/wiki/List-of-supported-languages-and-lexers) and use `[...]` for anything omitted.
 * When providing command line examples:
   * Tell the reader where to open a terminal (dom0 or a specific domU), and show the command along with its output (if any) in a code block, e.g.:
diff --git a/developer/releases/3.2/release-notes.md b/developer/releases/3.2/release-notes.md
index f7b66dee..3fc0ef02 100644
--- a/developer/releases/3.2/release-notes.md
+++ b/developer/releases/3.2/release-notes.md
@@ -73,5 +73,5 @@ the instructions above. This will be time consuming process.
 [upgrade-r3.1]: /doc/releases/3.1/release-notes/#upgrading
 [backup]: /doc/backup-restore/
 [qrexec-argument]: https://github.com/QubesOS/qubes-issues/issues/1876
-[qrexec-doc]: /doc/qrexec3/#service-argument-in-policy
+[qrexec-doc]: /doc/qrexec/#service-argument-in-policy
 [github-release-notes]: https://github.com/QubesOS/qubes-issues/issues?q=is%3Aissue+sort%3Aupdated-desc+milestone%3A%22Release+3.2%22+label%3Arelease-notes+is%3Aclosed
diff --git a/developer/releases/4.0/release-notes.md b/developer/releases/4.0/release-notes.md
index 38a47501..30336924 100644
--- a/developer/releases/4.0/release-notes.md
+++ b/developer/releases/4.0/release-notes.md
@@ -115,7 +115,7 @@ We also provide [detailed instruction][upgrade-to-r4.0] for this procedure.
 [qrexec-proxy]: https://github.com/QubesOS/qubes-issues/issues/1854
 [qrexec-policy-keywords]: https://github.com/QubesOS/qubes-issues/issues/865
 [qrexec-confirm]: https://github.com/QubesOS/qubes-issues/issues/910
-[qrexec-doc]: /doc/qrexec3/#extra-keywords-available-in-qubes-40-and-later
+[qrexec-doc]: /doc/qrexec/#specifying-vms-tags-types-targets-etc
 [storage]: https://github.com/QubesOS/qubes-issues/issues/1842
 [vm-interface]: /doc/vm-interface/
 [admin-api]: /news/2017/06/27/qubes-admin-api/
diff --git a/developer/services/qrexec-internals.md b/developer/services/qrexec-internals.md
new file mode 100644
index 00000000..e13be781
--- /dev/null
+++ b/developer/services/qrexec-internals.md
@@ -0,0 +1,162 @@
+---
+layout: doc
+title: Qubes RPC internals
+permalink: /doc/qrexec-internals/
+redirect_from:
+- /doc/qrexec3-implementation/
+- /en/doc/qrexec3-implementation/
+- /doc/Qrexec3Implementation/
+- /wiki/Qrexec3Implementation/
+---
+
+# Qubes RPC internals
+
+(*This page details the current implementation of qrexec (qrexec3).
+A [general introduction](/doc/qrexec/) to qrexec is also available.
+For the implementation of qrexec2, see [here](/doc/qrexec2/#qubes-rpc-internals).*)
+
+The qrexec framework consists of a number of processes communicating with each other using a common IPC protocol (described in detail below).
+Components residing in the same domain (`qrexec-client-vm` to `qrexec-agent`, `qrexec-client` to `qrexec-daemon`) use pipes as the underlying transport medium, while components in separate domains (`qrexec-daemon` to `qrexec-agent`, data channel between `qrexec-agent`s) use a vchan link.
+Because of a [vchan limitation](https://github.com/qubesos/qubes-issues/issues/951), it is not possible to establish a qrexec connection back to the source domain.
+
+## Dom0 tools implementation
+
+* `/usr/lib/qubes/qrexec-daemon`: One instance is required for every active domain. Responsible for:
+  * Handling execution and service requests from **dom0** (source: `qrexec-client`).
+  * Handling service requests from the associated domain (source: `qrexec-client-vm`, then `qrexec-agent`).
+* Command line: `qrexec-daemon domain-id domain-name [default user]`
+* `domain-id`: Numeric Qubes ID assigned to the associated domain.
+* `domain-name`: Associated domain name.
+* `default user`: Optional. If passed, `qrexec-daemon` uses this user as the default for all execution requests that don't specify one.
+* `/usr/lib/qubes/qrexec-policy`: Internal program used to evaluate the RPC policy and decide whether an RPC call should be allowed.
+* `/usr/lib/qubes/qrexec-client`: Used to pass execution and service requests to `qrexec-daemon`. Command line parameters:
+  * `-d target-domain-name`: Specifies the target for the execution/service request.
+  * `-l local-program`: Optional. If present, `local-program` is executed and its stdout/stdin are used when sending/receiving data to/from the remote peer.
+  * `-e`: Optional. If present, stdout/stdin are not connected to the remote peer. Only the process creation status code is received.
+  * `-c`: Used by `qrexec-policy` to connect a VM-VM service request. Details are described below in the service example.
+  * `cmdline`: Command line to pass to `qrexec-daemon` as the execution/service request. The service request format is described below in the service example.
+
+**Note:** None of the above tools are designed to be used by users directly.
+
+## VM tools implementation
+
+* `qrexec-agent`: One instance runs in each active domain. Responsible for:
+  * Handling service requests from `qrexec-client-vm` and passing them to the connected `qrexec-daemon` in dom0.
+  * Executing associated `qrexec-daemon` execution/service requests.
+* Command line parameters: none.
+* `qrexec-client-vm`: Runs in an active domain. Used to pass service requests to `qrexec-agent`.
+* Command line: `qrexec-client-vm target-domain-name service-name local-program [local program arguments]`
+* `target-domain-name`: Target domain for the service request. The source is the current domain.
+* `service-name`: Requested service name.
+* `local-program`: `local-program` is executed locally and its stdin/stdout are connected to the remote service endpoint.
+
+## Qrexec protocol details
+
+The qrexec protocol is message-based.
+All messages share a common header followed by an optional data packet.
+
+    /* uniform for all peers, data type depends on message type */
+    struct msg_header {
+        uint32_t type;  /* message type */
+        uint32_t len;   /* data length */
+    };
+
+When two peers establish a connection, the server sends `MSG_HELLO` followed by a `peer_info` struct:
+
+    struct peer_info {
+        uint32_t version;   /* qrexec protocol version */
+    };
+
+The client should then reply with its own `MSG_HELLO` and `peer_info`.
+The lower of the two versions defines the protocol used for this connection.
+If either side does not support this version, the connection is closed.
+
+Details of all possible use cases and the messages involved are described below.
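The header framing and version negotiation described above can be sketched in a few lines. This is an illustrative model only, not the actual qrexec code: the `MSG_HELLO` constant value, the version numbers, and the little-endian packing below are assumptions made for the demonstration.

```python
import struct

MSG_HELLO = 100   # illustrative constant; real values live in the qrexec headers
QREXEC_VER = 2    # assumed local protocol version for this sketch

def pack_msg(msg_type: int, payload: bytes) -> bytes:
    """Serialize a msg_header (two uint32s: type, len) followed by the payload."""
    return struct.pack("<II", msg_type, len(payload)) + payload

def unpack_msg(data: bytes):
    """Split a serialized message back into (type, payload)."""
    msg_type, length = struct.unpack_from("<II", data)
    return msg_type, data[8:8 + length]

def negotiate(local_version: int, remote_hello: bytes) -> int:
    """Given the peer's MSG_HELLO, pick the lower of the two versions."""
    msg_type, payload = unpack_msg(remote_hello)
    assert msg_type == MSG_HELLO
    (remote_version,) = struct.unpack("<I", payload)
    return min(local_version, remote_version)

# The peer claims version 3; the lower version (2) wins.
hello = pack_msg(MSG_HELLO, struct.pack("<I", 3))
print(negotiate(QREXEC_VER, hello))  # -> 2
```

In the real implementation both sides would then close the connection if the negotiated version is one they do not support.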
+
+### dom0: request execution of `some_command` in domX and pass stdin/stdout
+
+- **dom0**: `qrexec-client` is invoked in **dom0** as follows:
+
+      qrexec-client -d domX [-l local_program] user:some_command
+
+  `user` may be substituted with the literal `DEFAULT`. In that case, the default Qubes user will be used to execute `some_command`.
+- **dom0**: `qrexec-client` sets the `QREXEC_REMOTE_DOMAIN` environment variable to **domX**.
+- **dom0**: If `local_program` is set, `qrexec-client` executes it and uses that child's stdin/stdout in place of its own when exchanging data with `qrexec-agent` later.
+- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
+- **dom0**: `qrexec-daemon` sends a `MSG_HELLO` header followed by `peer_info` to `qrexec-client`.
+- **dom0**: `qrexec-client` replies with a `MSG_HELLO` header followed by `peer_info` to `qrexec-daemon`.
+- **dom0**: `qrexec-client` sends a `MSG_EXEC_CMDLINE` header followed by `exec_params` to `qrexec-daemon`.
+
+      /* variable size */
+      struct exec_params {
+          uint32_t connect_domain;  /* target domain id */
+          uint32_t connect_port;    /* target vchan port for i/o exchange */
+          char cmdline[0];          /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
+      };
+
+  In this case, `connect_domain` and `connect_port` are set to 0.
+
+- **dom0**: `qrexec-daemon` replies to `qrexec-client` with a `MSG_EXEC_CMDLINE` header followed by `exec_params`, but with an empty `cmdline` field. `connect_domain` is set to the Qubes ID of **domX** and `connect_port` is set to a vchan port allocated by `qrexec-daemon`.
+- **dom0**: `qrexec-daemon` sends a `MSG_EXEC_CMDLINE` header followed by `exec_params` to the associated **domX** `qrexec-agent` over vchan. `connect_domain` is set to 0 (**dom0**), `connect_port` is the same as sent to `qrexec-client`. `cmdline` is unchanged except that the literal `DEFAULT` is replaced with the actual user name, if present.
+- **dom0**: `qrexec-client` disconnects from `qrexec-daemon`.
+- **dom0**: `qrexec-client` starts a vchan server using the details received from `qrexec-daemon` and waits for a connection from **domX**'s `qrexec-agent`.
+- **domX**: `qrexec-agent` receives the `MSG_EXEC_CMDLINE` header followed by `exec_params` from `qrexec-daemon` over vchan.
+- **domX**: `qrexec-agent` connects to `qrexec-client` over vchan using the details from `exec_params`.
+- **domX**: `qrexec-agent` executes `some_command` in **domX** and connects the child's stdin/stdout to the data vchan. If the process creation fails, `qrexec-agent` sends `MSG_DATA_EXIT_CODE` to `qrexec-client` followed by the status code (**int**) and disconnects from the data vchan.
+- Data read from `some_command`'s stdout is sent to the data vchan using `MSG_DATA_STDOUT` by `qrexec-agent`. `qrexec-client` passes data received as `MSG_DATA_STDOUT` to its own stdout (or to `local_program`'s stdin if used).
+- `qrexec-client` sends data read from local stdin (or `local_program`'s stdout if used) to `qrexec-agent` over the data vchan using `MSG_DATA_STDIN`. `qrexec-agent` passes data received as `MSG_DATA_STDIN` to `some_command`'s stdin.
+- A `MSG_DATA_STDOUT` or `MSG_DATA_STDIN` message with the data `len` field set to 0 in `msg_header` is an EOF marker. A peer receiving such a message should close the associated input/output pipe.
+- When `some_command` terminates, **domX**'s `qrexec-agent` sends a `MSG_DATA_EXIT_CODE` header to `qrexec-client` followed by the exit code (**int**). `qrexec-agent` then disconnects from the data vchan.
+
+### domY: invoke execution of qubes service `qubes.SomeRpc` in domX and pass stdin/stdout
+
+- **domY**: `qrexec-client-vm` is invoked as follows:
+
+      qrexec-client-vm domX qubes.SomeRpc local_program [params]
+
+- **domY**: `qrexec-client-vm` connects to `qrexec-agent` (via local socket/named pipe).
+- **domY**: `qrexec-client-vm` sends `trigger_service_params` data to `qrexec-agent` (without filling the `request_id` field):
+
+      struct trigger_service_params {
+          char service_name[64];
+          char target_domain[32];
+          struct service_params request_id;  /* service request id */
+      };
+
+      struct service_params {
+          char ident[32];
+      };
+
+- **domY**: `qrexec-agent` allocates a locally-unique (for this domain) `request_id` (let's say `13`) and fills it in the `trigger_service_params` struct received from `qrexec-client-vm`.
+- **domY**: `qrexec-agent` sends a `MSG_TRIGGER_SERVICE` header followed by `trigger_service_params` to `qrexec-daemon` in **dom0** via vchan.
+- **dom0**: **domY**'s `qrexec-daemon` executes `qrexec-policy`: `qrexec-policy domY_id domY domX qubes.SomeRpc 13`.
+- **dom0**: `qrexec-policy` evaluates whether the RPC should be allowed or denied. If the action is allowed, it returns `0`; if the action is denied, it returns `1`.
+- **dom0**: **domY**'s `qrexec-daemon` checks the exit code of `qrexec-policy`.
+  - If `qrexec-policy` returned **not** `0`: **domY**'s `qrexec-daemon` sends a `MSG_SERVICE_REFUSED` header followed by `service_params` to **domY**'s `qrexec-agent`. `service_params.ident` is identical to the one received. **domY**'s `qrexec-agent` disconnects its `qrexec-client-vm` and RPC processing is finished.
+  - If `qrexec-policy` returned `0`, RPC processing continues.
+- **dom0**: If `qrexec-policy` allowed the RPC, it executes `qrexec-client -d domX -c 13,domY,domY_id user:QUBESRPC qubes.SomeRpc domY`.
+- **dom0**: `qrexec-client` sets the `QREXEC_REMOTE_DOMAIN` environment variable to **domX**.
+- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
+- **dom0**: **domX**'s `qrexec-daemon` sends a `MSG_HELLO` header followed by `peer_info` to `qrexec-client`.
+- **dom0**: `qrexec-client` replies with a `MSG_HELLO` header followed by `peer_info` to **domX**'s `qrexec-daemon`.
+- **dom0**: `qrexec-client` sends a `MSG_EXEC_CMDLINE` header followed by `exec_params` to **domX**'s `qrexec-daemon`.
+
+      /* variable size */
+      struct exec_params {
+          uint32_t connect_domain;  /* target domain id */
+          uint32_t connect_port;    /* target vchan port for i/o exchange */
+          char cmdline[0];          /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
+      };
+
+  In this case, `connect_domain` is set to the ID of **domY** (from the `-c` parameter) and `connect_port` is set to 0. The `cmdline` field contains the RPC to execute, in this case `user:QUBESRPC qubes.SomeRpc domY`.
+
+- **dom0**: **domX**'s `qrexec-daemon` replies to `qrexec-client` with a `MSG_EXEC_CMDLINE` header followed by `exec_params`, but with an empty `cmdline` field. `connect_domain` is set to the Qubes ID of **domX** and `connect_port` is set to a vchan port allocated by **domX**'s `qrexec-daemon`.
+- **dom0**: **domX**'s `qrexec-daemon` sends a `MSG_EXEC_CMDLINE` header followed by `exec_params` to **domX**'s `qrexec-agent`. The `connect_domain` and `connect_port` fields are the same as in the step above. `cmdline` is set to the one received from `qrexec-client`, in this case `user:QUBESRPC qubes.SomeRpc domY`.
+- **dom0**: `qrexec-client` disconnects from **domX**'s `qrexec-daemon` after receiving connection details.
+- **dom0**: `qrexec-client` connects to **domY**'s `qrexec-daemon` and exchanges `MSG_HELLO` as usual.
+- **dom0**: `qrexec-client` sends a `MSG_SERVICE_CONNECT` header followed by `exec_params` to **domY**'s `qrexec-daemon`. `connect_domain` is set to the ID of **domX** (received from **domX**'s `qrexec-daemon`) and `connect_port` is the one received as well. `cmdline` is set to the request ID (`13` in this case).
+- **dom0**: **domY**'s `qrexec-daemon` sends a `MSG_SERVICE_CONNECT` header followed by `exec_params` to **domY**'s `qrexec-agent`. Data fields are unchanged from the step above.
+- **domY**: `qrexec-agent` starts a vchan server on the port received in the step above.
+It acts as a `qrexec-client` in this case because this is a VM-VM connection.
+- **domX**: `qrexec-agent` connects to the vchan server of **domY**'s `qrexec-agent` (connection details were received earlier from **domX**'s `qrexec-daemon`).
+- After that, the connection follows the flow of the previous example (dom0-VM).
+
diff --git a/developer/services/qrexec.md b/developer/services/qrexec.md
new file mode 100644
index 00000000..86452c7b
--- /dev/null
+++ b/developer/services/qrexec.md
@@ -0,0 +1,298 @@
+---
+layout: doc
+title: Qrexec
+permalink: /doc/qrexec/
+redirect_from:
+- /en/doc/qrexec3/
+- /doc/Qrexec3/
+- /doc/qrexec3/
+- /wiki/Qrexec3/
+- /doc/qrexec/
+- /en/doc/qrexec/
+- /doc/Qrexec/
+- /wiki/Qrexec/
+---
+
+# Qrexec: secure communication across domains
+
+(*This page is about qrexec v3. For qrexec v2, see [here](/doc/qrexec2/).*)
+
+The **qrexec framework** is used by core Qubes components to implement communication between domains.
+Qubes domains are strictly isolated by design.
+However, the OS needs a mechanism to allow the administrative domain (dom0) to force command execution in another domain (VM).
+For instance, when a user selects an application from the KDE menu, it should start in the selected VM.
+Also, it is often useful to be able to pass stdin/stdout/stderr from an application running in a VM to dom0 (and the other way around) -- for example, so that a VM can notify dom0 that there are updates available for it.
+In specific circumstances, Qubes allows VMs to initiate such communications.
+The qrexec framework generalizes this process by providing a remote procedure call (RPC) protocol for the Qubes architecture.
+It allows users and developers to use and design secure inter-VM tools.
+
+## Qrexec basics: architecture and examples
+
+Qrexec is built on top of *vchan*, a Xen library providing data links between VMs.
+During domain startup, a process named `qrexec-daemon` is started in dom0, and a process named `qrexec-agent` is started in the VM.
+They are connected over a **vchan** channel.
+`qrexec-daemon` listens for connections from a dom0 utility named `qrexec-client`.
+Let's say we want to start a process (call it `VMprocess`) in a VM (`someVM`).
+Typically, the first thing that a `qrexec-client` instance does is to send a request to the `qrexec-daemon`, which in turn relays it to the `qrexec-agent` running in `someVM`.
+`qrexec-daemon` assigns unique vchan connection details and sends them to both `qrexec-client` (in dom0) and `qrexec-agent` (in `someVM`).
+`qrexec-client` starts a vchan server, which `qrexec-agent` then connects to.
+Once this channel is established, stdin/stdout/stderr from the `VMprocess` is passed between `qrexec-agent` and the `qrexec-client` process.
+
+![qrexec basics diagram](/attachment/wiki/qrexec3/qrexec3-basics.png)
+
+The `qrexec-client` command is used to make connections to VMs from dom0.
+For example, the following command
+
+    $ qrexec-client -e -d someVM user:'touch hello-world.txt'
+
+creates an empty file called `hello-world.txt` in the home folder of `someVM`.
+
+The string before the colon specifies what user to run the command as.
+The `-e` flag tells `qrexec-client` to exit immediately after sending the execution request and receiving a status code from `qrexec-agent` (whether the process creation succeeded).
+With this option, no further data is passed between the domains.
+By contrast, the following command demonstrates an open channel between dom0 and someVM (in this case, a remote shell):
+
+    $ qrexec-client -d someVM user:bash
+
+The `qvm-run` command is heavily based on `qrexec-client`.
+It also takes care of additional activities, e.g., starting the domain if it is not up yet and starting the GUI daemon.
+Thus, it is usually more convenient to use `qvm-run`.
+
+There can be an almost arbitrary number of `qrexec-client` processes for a given domain.
+The limiting factor is the number of available vchan channels, which depends on the underlying hypervisor, as well as the domain's OS.
+
+## Qubes RPC services
+
+Some common tasks (like copying files between VMs) have an RPC-like structure: a process in one VM (say, the file sender) needs to invoke and send/receive data to some process in another VM (say, the file receiver).
+The Qubes RPC framework was created to securely facilitate a range of such actions.
+
+Obviously, inter-VM communication must be tightly controlled to prevent one VM from taking control of another, possibly more privileged, VM.
+Therefore, the design decision was made to pass all control communication via dom0, which can enforce proper authorization.
+Then, it is natural to reuse the already-existing qrexec framework.
+
+Also, note that bare qrexec provides `VM <-> dom0` connectivity, but the command execution is always initiated by dom0.
+There are cases when a VM needs to invoke and send data to a command in dom0 (e.g., to pass information on newly installed `.desktop` files).
+Thus, the framework allows dom0 to be the RPC target as well.
+
+Thanks to the framework, RPC programs are very simple -- both the RPC client and server just use their stdin/stdout to pass data.
+The framework does all the inner work to connect these processes to each other via `qrexec-daemon` and `qrexec-agent`.
+Additionally, disposable VMs are tightly integrated -- RPC to a DisposableVM is identical to RPC to a normal domain; all one needs to do is pass `@dispvm` as the remote domain name.
+
+## Qubes RPC administration
+
+
+
+### Policy files
+
+The dom0 directory `/etc/qubes-rpc/policy/` contains a file for each available RPC action that a VM might call.
+Together, the contents of these files make up the RPC access policy database.
+Policies are defined in lines with the following format:
+
+    srcvm destvm (allow|deny|ask[,default_target=default_target_VM])[,user=user_to_run_as][,target=VM_to_redirect_to]
+
+You can specify srcvm and destvm by name or by one of the reserved keywords such as `@anyvm`, `@dispvm`, or `dom0`.
+(Of these three, only the `@anyvm` keyword makes sense in the srcvm field.
+Service calls from dom0 are currently always allowed, and `@dispvm` means "new VM created for this particular request," so it is never the source of a request.)
+Other methods using *tags* and *types* are also available (and discussed below).
+
+Whenever an RPC request for an action is received, the domain checks the first matching line of the relevant file in `/etc/qubes-rpc/policy/` to determine access:
+whether to allow the request, what VM to redirect the execution to, and what user account the program should run under.
+Note that if the request is redirected (`target=` parameter), the policy action remains the same -- even if there is another rule which would otherwise deny such a request.
+If no policy rule is matched, the action is denied.
+If the policy file does not exist, the user is prompted to create one.
+If there is still no policy file after prompting, the action is denied.
+
+In the target VM, the file `/etc/qubes-rpc/RPC_ACTION_NAME` must exist, containing the file name of the program that will be invoked, or being that program itself -- in which case it must have the executable permission set (`chmod +x`).
+
+### Making an RPC call
+
+From outside of dom0, RPC calls take the following form:
+
+    $ qrexec-client-vm target_vm_name RPC_ACTION_NAME rpc_client_path client arguments
+
+For example:
+
+    $ qrexec-client-vm work qubes.StartApp+firefox
+
+Note that only stdin/stdout is passed between the RPC server and client -- notably, no command line arguments are passed.
+By default, the stderr of the client and server is logged in the syslog/journald of the VM where the process is running.
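Returning to the policy files described earlier, the first-match evaluation can be sketched as a toy parser. This is an illustration only, not the real `qrexec-policy` code: the `@tag:`/`@type:` keywords, redirection, and other options are deliberately omitted, and only the bare action of the first matching line is returned.

```python
# Toy first-match policy evaluation over lines of the form
# "srcvm destvm action[,option=value...]", as described above.

def match_token(token: str, vm_name: str) -> bool:
    """Match a policy token against a concrete VM name (simplified)."""
    return token == "@anyvm" or token == vm_name

def evaluate(policy_lines, src: str, dst: str) -> str:
    """Return the action of the first matching line; deny if none match."""
    for line in policy_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        srcvm, destvm, action = line.split()[:3]
        if match_token(srcvm, src) and match_token(destvm, dst):
            return action.split(",")[0]  # strip options like default_target=...
    return "deny"  # no rule matched -> the action is denied

policy = [
    "work-mail work-archive allow",
    "work-mail @anyvm ask,default_target=work-files",
    "@anyvm @anyvm deny",
]
print(evaluate(policy, "work-mail", "work-archive"))  # -> allow
print(evaluate(policy, "work-mail", "personal"))      # -> ask
print(evaluate(policy, "untrusted", "work-archive"))  # -> deny
```

The key point the sketch captures is that ordering matters: a later `deny` rule cannot override an earlier matching `allow`.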
+
+It is also possible to call a service without a specific client program -- in which case the server's stdin/stdout will be connected to the terminal:
+
+    $ qrexec-client-vm target_vm_name RPC_ACTION_NAME
+
+### Specifying VMs: tags, types, targets, etc.
+
+There are several methods for specifying source/target VMs in RPC policies.
+
+ * `@tag:some-tag` - meaning a VM with tag `some-tag`
+ * `@type:type` - meaning a VM of `type` (like `AppVM`, `TemplateVM`, etc.)
+
+The target VM can also be specified as `@default`, which matches the case when the calling VM didn't specify any particular target (either by using the `@default` target or an empty target).
+For DisposableVMs, `@dispvm:DISP_VM` is very similar to `@dispvm`, but forces the use of a particular VM (`DISP_VM`) as the base VM to be started as a DisposableVM.
+For example:
+
+    anon-whonix @dispvm:anon-whonix-dvm allow
+
+Adding such a policy by itself will not force usage of this particular `DISP_VM` -- it will only allow it when specified by the caller.
+But `@dispvm:DISP_VM` can also be used as a target in request redirection, so _it is possible_ to force usage of a particular `DISP_VM` when the caller didn't specify one:
+
+    anon-whonix @dispvm allow,target=@dispvm:anon-whonix-dvm
+
+Note that without redirection, this rule would allow using the default DisposableVM (the `default_dispvm` VM property, which itself defaults to the global `default_dispvm` property).
+Also note that the request will be allowed (`allow` action) even if there is no second rule allowing calls to `@dispvm:anon-whonix-dvm`, or even if there is a rule explicitly denying it.
+This is because the redirection happens _after_ considering the action.
+
+The policy confirmation dialog (`ask` action) allows the user to specify the target VM.
+The user can choose from the VMs that, according to policy, would lead to `ask` or `allow` actions.
+It is not possible to select a VM that the policy would deny.
+By default, no VM is selected, even if the caller provided one, but the policy can specify a default value using the `default_target=` parameter.
+For example:
+
+    work-mail work-archive allow
+    work-mail @tag:work ask,default_target=work-files
+    work-mail @default ask,default_target=work-files
+
+The first rule allows calls from `work-mail` to `work-archive`, without any confirmation.
+The second rule will ask the user about calls from the `work-mail` VM to any VM with the tag `work`.
+The confirmation dialog will have the `work-files` VM chosen by default, regardless of the VM specified by the caller (the `work-mail` VM).
+The third rule allows the caller to not specify a target VM at all and lets the user choose, still from VMs with the tag `work` (and `work-archive`, regardless of tag), and with `work-files` as the default.
+
+### RPC services and security
+
+Be very careful when coding and adding a new RPC service.
+Unless the offered functionality equals full control over the target (as is the case with, e.g., the `qubes.VMShell` action), any vulnerability in an RPC server can be fatal to Qubes security.
+On the other hand, this mechanism allows delegating the processing of untrusted input to less privileged (or disposable) AppVMs, so wise use of it increases security.
+
+For example, this command will run the `firefox` command in a DisposableVM based on `work`:
+
+```
+$ qvm-run --dispvm=work firefox
+```
+
+By contrast, consider this command:
+
+```
+$ qvm-run --dispvm=work --service qubes.StartApp+firefox
+```
+
+This will look for a `firefox.desktop` file in a standard location in a DisposableVM based on `work`, then launch the application described by that file.
+The practical difference is that the bare `qvm-run` command uses the `qubes.VMShell` service, which allows you to run an arbitrary command with arbitrary arguments, essentially providing full control over the target VM.
+By contrast, the `qubes.StartApp` service allows you to run only applications that are advertised in `/usr/share/applications` (or other standard locations) *without* control over the arguments, so giving a VM access to `qubes.StartApp` is much safer.
+While there isn't much practical difference between the two commands above when starting an application from dom0 in Qubes 4.0, there is a significant security risk when launching applications from a domU (e.g., from a separate GUI domain).
+This is why `qubes.StartApp` uses our standard `qrexec` argument grammar to strictly filter the permissible grammar of the `Exec=` lines in `.desktop` files that are passed from untrusted domUs to dom0, thereby protecting dom0 from command injection by maliciously-crafted `.desktop` files.
+
+
+### Service argument in policy
+
+Sometimes the service name alone isn't enough to write a reasonable qrexec policy.
+One example of such a situation is [qrexec-based USB passthrough](https://github.com/qubesos/qubes-issues/issues/531): using just the service name, it isn't possible to express the policy "allow access to device X and deny access to others".
+It also isn't feasible to create a separate service for every device...
+
+For this reason, starting with Qubes 3.2, it is possible to specify a service argument, which will be subject to policy.
+Besides the above example of USB passthrough, a service argument can make many service policies more fine-grained and make it easier to write a precise policy with "allow" and "deny" actions, instead of "ask" (offloading additional decisions to the user).
+And generally, the fewer choices the user must make, the lower the chance of a mistake.
+
+The syntax is simple: when calling a service, add an argument to the service name separated with the `+` sign, for example:
+
+    $ qrexec-client-vm target_vm_name RPC_ACTION_NAME+ARGUMENT
+
+Then create a policy as usual, including the argument (`/etc/qubes-rpc/policy/RPC_ACTION_NAME+ARGUMENT`).
+If the policy for the specific argument is not set (the file does not exist), then the default policy for this service is loaded (`/etc/qubes-rpc/policy/RPC_ACTION_NAME`).
+
+In the target VM (when the call is allowed), the service file will be searched for as:
+
+ - `/etc/qubes-rpc/RPC_ACTION_NAME+ARGUMENT`
+ - `/etc/qubes-rpc/RPC_ACTION_NAME`
+
+In any case, the script will receive `ARGUMENT` as its argument and additionally in the `QREXEC_SERVICE_ARGUMENT` environment variable.
+This means it is also possible to install a different script for a particular service argument.
+
+See below for an example service using an argument.
+
+
+
+### Qubes RPC example
+
+As a demonstration, we can create an RPC service that adds two integers in a target domain (the server, call it "anotherVM") and returns the result to the invoker (the client, "someVM").
+In someVM, create a file with the following contents and save it with the path `/usr/bin/our_test_add_client`:
+
+    #!/bin/sh
+    echo $1 $2    # pass data to RPC server
+    exec cat >&$SAVED_FD_1    # print result to the original stdout, not to the other RPC endpoint
+
+Our server will be anotherVM at `/usr/bin/our_test_add_server`.
+The code for this file is:
+
+    #!/bin/sh
+    read arg1 arg2    # read from stdin, which is received from the RPC client
+    echo $(($arg1+$arg2))    # print to stdout, which is passed to the RPC client
+
+We'll need to create a service called `test.Add` with its own definition and policy file in dom0.
+Now we need to define what the service does.
+In this case, it should call our addition script.
+We define the service with another one-line file, `/etc/qubes-rpc/test.Add`:
+
+    /usr/bin/our_test_add_server
+
+The administrative domain will direct traffic based on the current RPC policies.
+In dom0, create a file at `/etc/qubes-rpc/policy/test.Add` containing the following:
+
+    @anyvm @anyvm ask
+
+This will allow our client and server to communicate.
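The data flow of this example can also be sketched entirely outside Qubes. The toy simulation below only mirrors the plumbing described above (the framework pipes the client's stdout into the server's stdin and the result back); it is not part of the actual service, and the two functions merely stand in for the shell scripts:

```python
# Toy simulation of the test.Add data flow: plain functions stand in for the
# two VM-side shell scripts, and a direct call stands in for the qrexec pipe.

def add_client(a: str, b: str) -> str:
    """Mimics our_test_add_client: emit the two numbers for the server."""
    return f"{a} {b}\n"

def add_server(stdin_data: str) -> str:
    """Mimics our_test_add_server: read two ints from stdin, print their sum."""
    arg1, arg2 = stdin_data.split()
    return f"{int(arg1) + int(arg2)}\n"

# The framework connects client stdout to server stdin and returns the result.
print(add_server(add_client("1", "2")).strip())  # -> 3
```

This highlights the design point made earlier: RPC endpoints never see each other's command lines, only a byte stream on stdin/stdout.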
+
+Before we make the call, ensure that the client and server scripts have executable permissions.
+Finally, invoke the RPC service:
+
+    $ qrexec-client-vm anotherVM test.Add /usr/bin/our_test_add_client 1 2
+
+We should get "3" as the answer.
+(dom0 will ask for confirmation first.)
+
+**Note:** For a real-world example of writing a qrexec service, see this [blog post](https://blog.invisiblethings.org/2013/02/21/converting-untrusted-pdfs-into-trusted.html).
+
+### Qubes RPC example - with argument usage
+
+We will show the necessary files to create an RPC call that reads a specific file from a predefined directory on the target.
+Besides being a really naive storage scheme, this could serve as a very simple password manager.
+Additionally, this example uses a simplified workflow: the server code is placed directly in the service definition file (in the `/etc/qubes-rpc` directory), and no separate client script is used.
+
+ * RPC server code (*/etc/qubes-rpc/test.File*)
+
+       #!/bin/sh
+       argument="$1"    # service argument, also available as $QREXEC_SERVICE_ARGUMENT
+       if [ -z "$argument" ]; then
+           echo "ERROR: No argument given!"
+           exit 1
+       fi
+       # service argument is already sanitized by qrexec framework and it is
+       # guaranteed to not contain any space or /, so no need for additional path
+       # sanitization
+       cat "/home/user/rpc-file-storage/$argument"
+
+ * specific policy file in dom0 (*/etc/qubes-rpc/policy/test.File+testfile1*)
+
+       source_vm1 target_vm allow
+
+ * another specific policy file in dom0 (*/etc/qubes-rpc/policy/test.File+testfile2*)
+
+       source_vm2 target_vm allow
+
+ * default policy file in dom0 (*/etc/qubes-rpc/policy/test.File*)
+
+       @anyvm @anyvm deny
+
+ * invoke the RPC from `source_vm1` via
+
+       /usr/lib/qubes/qrexec-client-vm target_vm test.File+testfile1
+
+   and we should get the content of `/home/user/rpc-file-storage/testfile1` as the answer.
+
+ * it is also possible to invoke the RPC from `source_vm2` via
+
+       /usr/lib/qubes/qrexec-client-vm target_vm test.File+testfile2
+
+   But when invoked with any other argument or from a different VM, it will be denied.
diff --git a/developer/services/qrexec2.md b/developer/services/qrexec2.md
index 6fc85e9c..1f5f3c48 100644
--- a/developer/services/qrexec2.md
+++ b/developer/services/qrexec2.md
@@ -11,8 +11,7 @@ redirect_from:
 # Command execution in VMs #
 
-(*This page is about qrexec v2. For qrexec v3, see
-[here](/doc/qrexec3/).*)
+(*This page is about qrexec v2. For qrexec v3, see [here](/doc/qrexec3/).*)
 
 Qubes **qrexec** is a framework for implementing inter-VM (incl. Dom0-VM)
 services. It offers a mechanism to start programs in VMs, redirect their
@@ -232,7 +231,7 @@ surfaces that are exposed to untrusted or less trusted VMs in that case.
 # Qubes RPC internals #
 
 (*This is about the implementation of qrexec v2. For the implementation of
-qrexec v3, see [here](/doc/qrexec3/#qubes-rpc-internals). Note that the user
+qrexec v3, see [here](/doc/qrexec-internals/). Note that the user
 API in v3 is backward compatible: qrexec apps written for Qubes R2 should run
 without modification on Qubes R3.*)
 
diff --git a/developer/services/qrexec3.md b/developer/services/qrexec3.md
deleted file mode 100644
index f46227ac..00000000
--- a/developer/services/qrexec3.md
+++ /dev/null
@@ -1,628 +0,0 @@
----
-layout: doc
-title: Qrexec3
-permalink: /doc/qrexec3/
-redirect_from:
-- /en/doc/qrexec3/
-- /doc/Qrexec3/
-- /wiki/Qrexec3/
-- /doc/qrexec/
-- /en/doc/qrexec/
-- /doc/Qrexec/
-- /wiki/Qrexec/
-- /doc/qrexec3-implementation/
-- /en/doc/qrexec3-implementation/
-- /doc/Qrexec3Implementation/
-- /wiki/Qrexec3Implementation/
----
-
-# Command execution in VMs #
-
-(*This page is about qrexec v3. For qrexec v2, see
-[here](/doc/qrexec2/).*)
-
-The **qrexec** framework is used by core Qubes components to implement
-communication between domains.
Qubes domains are isolated by design, but -there is a need for a mechanism to allow the administrative domain (dom0) to -force command execution in another domain (VM). For instance, when user -selects an application from the KDE menu, it should be started in the selected -VM. Also, it is often useful to be able to pass stdin/stdout/stderr from an -application running in a VM to dom0 (and the other way around). In specific -circumstances, Qubes allows VMs to be initiators of such communications (so, -for example, a VM can notify dom0 that there are updates available for it). - - -## Qrexec basics ## - -Qrexec is built on top of vchan (a library providing data links between -VMs). During domain creation a process named `qrexec-daemon` is started -in dom0, and a process named `qrexec-agent` is started in the VM. They are -connected over **vchan** channel. `qrexec-daemon` listens for connections -from dom0 utility named `qrexec-client`. Typically, the first thing that a -`qrexec-client` instance does is to send a request to `qrexec-daemon` to -start a process (let's name it `VMprocess`) with a given command line in -a specified VM (`someVM`). `qrexec-daemon` assigns unique vchan connection -details and sends them both to `qrexec-client` (in dom0) and `qrexec-agent` -(in `someVM`). `qrexec-client` starts a vchan server which `qrexec-agent` -connects to. Since then, stdin/stdout/stderr from the VMprocess is passed -via vchan between `qrexec-agent` and the `qrexec-client` process. - -So, for example, executing in dom0: - - qrexec-client -d someVM user:bash - -allows to work with the remote shell. The string before the first -semicolon specifies what user to run the command as. Adding `-e` on the -`qrexec-client` command line results in mere command execution (no data -passing), and `qrexec-client` exits immediately after sending the execution -request and receiving status code from `qrexec-agent` (whether the process -creation succeeded). 
There is also the `-l local_program` flag -- with it, -`qrexec-client` passes stdin/stdout of the remote process to the (spawned -for this purpose) `local_program`, not to its own stdin/stdout. - -The `qvm-run` command is heavily based on `qrexec-client`. It also takes care -of additional activities, e.g. starting the domain if it is not up yet and -starting the GUI daemon. Thus, it is usually more convenient to use `qvm-run`. - -There can be almost arbitrary number of `qrexec-client` processes for a -domain (so, connected to the same `qrexec-daemon`, same domain) -- their -data is multiplexed independently. Number of available vchan channels is -the limiting factor here, it depends on the underlying hypervisor. - - -## Qubes RPC services ## - -Some tasks (like inter-vm file copy) share the same rpc-like structure: -a process in one VM (say, file sender) needs to invoke and send/receive -data to some process in other VM (say, file receiver). Thus, the Qubes RPC -framework was created, facilitating such actions. - -Obviously, inter-VM communication must be tightly controlled to prevent one -VM from taking control over other, possibly more privileged, VM. Therefore -the design decision was made to pass all control communication via dom0, -that can enforce proper authorization. Then, it is natural to reuse the -already-existing qrexec framework. - -Also, note that bare qrexec provides `VM <-> dom0` connectivity, but the -command execution is always initiated by dom0. There are cases when VM needs -to invoke and send data to a command in dom0 (e.g. to pass information on -newly installed `.desktop` files). Thus, the framework allows dom0 to be -the rpc target as well. - -Thanks to the framework, RPC programs are very simple -- both rpc client -and server just use their stdin/stdout to pass data. The framework does all -the inner work to connect these processes to each other via `qrexec-daemon` -and `qrexec-agent`. 
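As a rough local sketch of this pattern (the service body and the pipe are purely illustrative — qrexec would normally connect stdin/stdout across VMs via vchan, which we emulate here with a plain shell pipe):

```shell
# Hypothetical sketch: both "client" and "server" are plain filters.
# Under qrexec the framework wires their stdin/stdout together across
# domains; locally we can emulate that with a pipe.
server() { tr 'a-z' 'A-Z'; }        # would live in an /etc/qubes-rpc service file
client() { printf '%s' "hello"; }   # would be passed to qrexec-client-vm
client | server                     # prints: HELLO
```

Because neither side needs to know anything about vchan, the same scripts work unchanged when wired together by `qrexec-agent` and `qrexec-daemon`.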
Additionally, disposable VMs are tightly integrated -- -rpc to a DisposableVM is identical to rpc to a normal domain, all one needs -is to pass `$dispvm` as the remote domain name. - - -## Qubes RPC administration ## - -(*TODO: fix for non-linux dom0*) - -In dom0, there is a bunch of files in `/etc/qubes-rpc/policy` directory, -whose names describe the available rpc actions. Their content is the rpc -access policy database. Currently defined actions are: - - qubes.ClipboardPaste - qubes.Filecopy - qubes.GetImageRGBA - qubes.GetRandomizedTime - qubes.Gpg - qubes.GpgImportKey - qubes.InputKeyboard - qubes.InputMouse - qubes.NotifyTools - qubes.NotifyUpdates - qubes.OpenInVM - qubes.OpenURL - qubes.PdfConvert - qubes.ReceiveUpdates - qubes.SyncAppMenus - qubes.USB - qubes.VMShell - qubes.WindowIconUpdater - -These files contain lines with the following format: - - srcvm destvm (allow|deny|ask)[,user=user_to_run_as][,target=VM_to_redirect_to] - -You can specify srcvm and destvm by name, or by one of `$anyvm`, `$dispvm`, -`dom0` reserved keywords (note string `dom0` does not match the `$anyvm` -pattern; all other names do). Only `$anyvm` keyword makes sense in srcvm -field (service calls from dom0 are currently always allowed, `$dispvm` -means "new VM created for this particular request," so it is never a -source of request). Currently there is no way to specify source VM by -type. Whenever a rpc request for action X is received, the first line in -`/etc/qubes-rpc/policy/X` that match srcvm/destvm is consulted to determine -whether to allow rpc, what user account the program should run in target VM -under, and what VM to redirect the execution to. Note that if the request is -redirected (`target=` parameter), policy action remains the same - even if -there is another rule which would otherwise deny such request. If the policy -file does not exist, user is prompted to create one; if still there is no -policy file after prompting, the action is denied. 
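For illustration only, first-match evaluation can be sketched like this (a simplified toy that matches exact names and ignores keywords such as `$anyvm`, as well as the `user=`/`target=` options; the VM names are made up):

```shell
# Toy first-match policy lookup. Real qrexec policy parsing also
# handles $anyvm/$dispvm keywords and the user=/target= options.
policy='work vault deny
work other allow'
src=work dst=vault
action=$(printf '%s\n' "$policy" |
    awk -v s="$src" -v d="$dst" '$1==s && $2==d { print $3; exit }')
echo "$action"   # prints: deny
```

The `exit` after the first match mirrors the rule that only the first matching line is consulted, even if a later line would give a different answer.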
- -In the target VM, the `/etc/qubes-rpc/RPC_ACTION_NAME` must exist, containing -the file name of the program that will be invoked, or being that program itself -- in which case it must have executable permission set (`chmod +x`). - -In the src VM, one should invoke the client via: - - /usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME rpc_client_path client arguments - -Note that only stdin/stdout is passed between rpc server and client -- -notably, no command line arguments are passed. Source VM name is specified by -`QREXEC_REMOTE_DOMAIN` environment variable. By default, stderr of client -and server is logged to respective `/var/log/qubes/qrexec.XID` files. -It is also possible to call service without specific client program - in which -case server stdin/out will be connected with the terminal: - - /usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME - -Be very careful when coding and adding a new rpc service. Unless the -offered functionality equals full control over the target (it is the case -with e.g. `qubes.VMShell` action), any vulnerability in an rpc server can -be fatal to Qubes security. On the other hand, this mechanism allows to -delegate processing of untrusted input to less privileged (or disposable) -AppVMs, thus wise usage of it increases security. - -For example, this command will run the `firefox` command in a DisposableVM based -on `work`: - -``` -$ qvm-run --dispvm=work firefox -``` - -By contrast, consider this command: - -``` -$ qvm-run --dispvm=work --service qubes.StartApp+firefox -``` - -This will look for a `firefox.desktop` file in a standard location in a -DisposableVM based on `work`, then launch the application described by that -file. The practical difference is that the bare `qvm-run` command uses the -`qubes.VMShell` service, which allows you to run an arbitrary command with -arbitrary arguments, essentially providing full control over the target VM. 
By -contrast, the `qubes.StartApp` service allows you to run only applications that -are advertised in `/usr/share/applications` (or other standard locations) -*without* control over the arguments, so giving a VM access to `qubes.StartApp` -is much safer. While there isn't much practical difference between the two -commands above when starting an application from dom0 in Qubes 4.0, there is a -significant security risk when launching applications from a domU (e.g., from -a separate GUI domain). This is why `qubes.StartApp` uses our standard `qrexec` -argument grammar to strictly filter the permissible grammar of the `Exec=` lines -in `.desktop` files that are passed from untrusted domUs to dom0, thereby -protecting dom0 from command injection by maliciously-crafted `.desktop` files. - -### Extra keywords available in Qubes 4.0 and later - -**This section is about a not-yet-released version, some details may change** - -In Qubes 4.0, target VM can be specified also as `$dispvm:DISP_VM`, which is -very similar to `$dispvm` but forces using a particular VM (`DISP_VM`) as a base -VM to be started as DisposableVM. For example: - - anon-whonix $dispvm:anon-whonix-dvm allow - -Adding such policy itself will not force usage of this particular `DISP_VM` - -it will only allow it when specified by the caller. But `$dispvm:DISP_VM` can -also be used as target in request redirection, so _it is possible_ to force -particular `DISP_VM` usage, when caller didn't specify it: - - anon-whonix $dispvm allow,target=$dispvm:anon-whonix-dvm - -Note that without redirection, this rule would allow using default Disposable -VM (`default_dispvm` VM property, which itself defaults to global -`default_dispvm` property). -Also note that the request will be allowed (`allow` action) even if there is no -second rule allowing calls to `$dispvm:anon-whonix-dvm`, or even if -there is a rule explicitly denying it. This is because the redirection happens -_after_ considering the action. 
- -In Qubes 4.0 there are also additional methods to specify source/target VM: - - * `$tag:some-tag` - meaning a VM with tag `some-tag` - * `$type:type` - meaning a VM of `type` (like `AppVM`, `TemplateVM` etc) - -Target VM can be also specified as `$default`, which matches the case when -calling VM didn't specified any particular target (either by using `$default` -target, or empty target). - -In Qubes 4.0 policy confirmation dialog (`ask` action) allow the user to -specify target VM. User can choose from VMs that, according to policy, would -lead to `ask` or `allow` actions. It is not possible to select VM that policy -would deny. By default no VM is selected, even if the caller provided some, but -policy can specify default value using `default_target=` parameter. For -example: - - work-mail work-archive allow - work-mail $tag:work ask,default_target=work-files - work-mail $default ask,default_target=work-files - -The first rule allow call from `work-mail` to `work-archive`, without any -confirmation. -The second rule will ask the user about calls from `work-mail` VM to any VM with -tag `work`. And the confirmation dialog will have `work-files` VM chosen by -default, regardless of the VM specified by the caller (`work-mail` VM). The -third rule allow the caller to not specify target VM at all and let the user -choose, still - from VMs with tag `work` (and `work-archive`, regardless of -tag), and with `work-files` as default. - -### Service argument in policy - -Sometimes just service name isn't enough to make reasonable qrexec policy. One -example of such a situation is [qrexec-based USB -passthrough](https://github.com/qubesos/qubes-issues/issues/531) - using just -service name isn't possible to express the policy "allow access to device X and -deny to others". It also isn't feasible to create a separate service for every -device... - -For this reason, starting with Qubes 3.2, it is possible to specify a service -argument, which will be subject to policy. 
Besides the above example of USB -passthrough, a service argument can make many service policies more fine-grained -and easier to write precise policy with "allow" and "deny" actions, instead of -"ask" (offloading additional decisions to the user). And generally the less -choices the user must make, the lower the chance to make a mistake. - -The syntax is simple: when calling a service, add an argument to the service name -separated with `+` sign, for example: - - /usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME+ARGUMENT - -Then create a policy as usual, including the argument -(`/etc/qubes-rpc/policy/RPC_ACTION_NAME+ARGUMENT`). If the policy for the specific -argument is not set (file does not exist), then the default policy for this service -is loaded (`/etc/qubes-rpc/policy/RPC_ACTION_NAME`). - -In target VM (when the call is allowed) the service file will searched as: - - - `/etc/qubes-rpc/RPC_ACTION_NAME+ARGUMENT` - - `/etc/qubes-rpc/RPC_ACTION_NAME` - -In any case, the script will receive `ARGUMENT` as its argument and additionally as -`QREXEC_SERVICE_ARGUMENT` environment variable. This means it is also possible -to install a different script for a particular service argument. - -See below for an example service using an argument. - -### Revoking "Yes to All" authorization ### - -Qubes RPC policy supports "ask" action. This will prompt the user whether given -RPC call should be allowed. That prompt window also has a "Yes to All" option, -which will allow the action and add a new entry to the policy file, which will -unconditionally allow further calls for the given service-srcVM-dstVM tuple. - -In order to remove such authorization, issue this command from a dom0 terminal -(for `qubes.Filecopy` service): - - sudo nano /etc/qubes-rpc/policy/qubes.Filecopy - -and then remove the first line(s) (before the first `##` comment) which are -the "Yes to All" results. 
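As a sketch (shown on a mock file rather than a real dom0 policy, since the exact contents vary per system), deleting everything above the first `##` comment reverts such auto-added entries:

```shell
# Mock policy file: one auto-added "Yes to All" line above the '##' marker.
printf 'work vault allow\n## Note: shipped defaults below\n$anyvm $anyvm ask\n' > mock-policy
# Keep only the range from the first '##' line to the end of the file.
sed -i '/^##/,$!d' mock-policy
cat mock-policy
```

In dom0 the same edit would be applied (with care, and as root) to the real file under `/etc/qubes-rpc/policy/`.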
- - -### Qubes RPC example ### - -We will show the necessary files to create an rpc call that adds two integers -on the target and returns back the result to the invoker. - - * rpc client code (`/usr/bin/our_test_add_client`): - - #!/bin/sh - echo $1 $2 # pass data to rpc server - exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other rpc endpoint - - * rpc server code (*/usr/bin/our\_test\_add\_server*) - - #!/bin/sh - read arg1 arg2 # read from stdin, which is received from the rpc client - echo $(($arg1+$arg2)) # print to stdout - so, pass to the rpc client - - * policy file in dom0 (*/etc/qubes-rpc/policy/test.Add* ) - - $anyvm $anyvm ask - - * server path definition ( */etc/qubes-rpc/test.Add*) - - /usr/bin/our_test_add_server - - * invoke rpc via - - /usr/lib/qubes/qrexec-client-vm target_vm test.Add /usr/bin/our_test_add_client 1 2 - -and we should get "3" as answer, after dom0 allows it. - -**Note:** For a real world example of writing a qrexec service, see this -[blog post](https://blog.invisiblethings.org/2013/02/21/converting-untrusted-pdfs-into-trusted.html). - -### Qubes RPC example - with argument usage ### - -We will show the necessary files to create an rpc call that reads a specific file -from a predefined directory on the target. Besides really naive storage, it may -be a very simple password manager. -Additionally, in this example a simplified workflow will be used - server code -placed directly in the service definition file (in `/etc/qubes-rpc` directory). And -no separate client script will be used. - - * rpc server code (*/etc/qubes-rpc/test.File*) - - #!/bin/sh - argument="$1" # service argument, also available as $QREXEC_SERVICE_ARGUMENT - if [ -z "$argument" ]; then - echo "ERROR: No argument given!" 
- exit 1 - fi - # service argument is already sanitized by qrexec framework and it is - # guaranteed to not contain any space or /, so no need for additional path - # sanitization - cat "/home/user/rpc-file-storage/$argument" - - * specific policy file in dom0 (*/etc/qubes-rpc/policy/test.File+testfile1* ) - - source_vm1 target_vm allow - - * another specific policy file in dom0 (*/etc/qubes-rpc/policy/test.File+testfile2* ) - - source_vm2 target_vm allow - - * default policy file in dom0 (*/etc/qubes-rpc/policy/test.File* ) - - $anyvm $anyvm deny - - * invoke rpc from `source_vm1` via - - /usr/lib/qubes/qrexec-client-vm target_vm test.File+testfile1 - - and we should get content of `/home/user/rpc-file-storage/testfile1` as answer. - - * also possible to invoke rpc from `source_vm2` via - - /usr/lib/qubes/qrexec-client-vm target_vm test.File+testfile2 - - But when invoked with other argument or from different VM, it should be denied. - -# Qubes RPC internals # - -(*This is about the implementation of qrexec v3. For the implementation of -qrexec v2, see [here](/doc/qrexec2/#qubes-rpc-internals).*) - -Qrexec framework consists of a number of processes communicating with each -other using common IPC protocol (described in detail below). Components -residing in the same domain (`qrexec-client-vm` to `qrexec-agent`, `qrexec-client` to `qrexec-daemon`) use pipes as the underlying transport medium, -while components in separate domains (`qrexec-daemon` to `qrexec-agent`, data channel between `qrexec-agent`s) use vchan link. -Because of [vchan limitation](https://github.com/qubesos/qubes-issues/issues/951), it is not possible to establish qrexec connection back to the source domain. - - -## Dom0 tools implementation ## - - * `/usr/lib/qubes/qrexec-daemon`: One instance is required for every active - domain. Responsible for: - * Handling execution and service requests from **dom0** (source: - `qrexec-client`). 
- * Handling service requests from the associated domain (source: - `qrexec-client-vm`, then `qrexec-agent`). - * Command line: `qrexec-daemon domain-id domain-name [default user]` - * `domain-id`: Numeric Qubes ID assigned to the associated domain. - * `domain-name`: Associated domain name. - * `default user`: Optional. If passed, `qrexec-daemon` uses this user as - default for all execution requests that don't specify one. - * `/usr/lib/qubes/qrexec-policy`: Internal program used to evaluate the - RPC policy and deciding whether a RPC call should be allowed. - * `/usr/lib/qubes/qrexec-client`: Used to pass execution and service requests - to `qrexec-daemon`. Command line parameters: - * `-d target-domain-name`: Specifies the target for the execution/service - request. - * `-l local-program`: Optional. If present, `local-program` is executed - and its stdout/stdin are used when sending/receiving data to/from the - remote peer. - * `-e`: Optional. If present, stdout/stdin are not connected to the remote - peer. Only process creation status code is received. - * `-c `: used for connecting - a VM-VM service request by `qrexec-policy`. Details described below in - the service example. - * `cmdline`: Command line to pass to `qrexec-daemon` as the - execution/service request. Service request format is described below in - the service example. - -**Note:** None of the above tools are designed to be used by users directly. - - -## VM tools implementation ## - - * `qrexec-agent`: One instance runs in each active domain. Responsible for: - * Handling service requests from `qrexec-client-vm` and passing them to - connected `qrexec-daemon` in dom0. - * Executing associated `qrexec-daemon` execution/service requests. - * Command line parameters: none. - * `qrexec-client-vm`: Runs in an active domain. Used to pass service requests - to `qrexec-agent`. 
- * Command line: `qrexec-client-vm target-domain-name service-name local-program [local program arguments]` - * `target-domain-name`: Target domain for the service request. Source is - the current domain. - * `service-name`: Requested service name. - * `local-program`: `local-program` is executed locally and its stdin/stdout - are connected to the remote service endpoint. - - -## Qrexec protocol details ## - -Qrexec protocol is message-based. All messages share a common header followed -by an optional data packet. - - /* uniform for all peers, data type depends on message type */ - struct msg_header { - uint32_t type; /* message type */ - uint32_t len; /* data length */ - }; - -When two peers establish connection, the server sends `MSG_HELLO` followed by -`peer_info` struct: - - struct peer_info { - uint32_t version; /* qrexec protocol version */ - }; - -The client then should reply with its own `MSG_HELLO` and `peer_info`. The -lower of two versions define protocol used for this connection. If either side -does not support this version, the connection is closed. - -Details of all possible use cases and the messages involved are described below. - - -### dom0: request execution of `some_command` in domX and pass stdin/stdout ### - -- **dom0**: `qrexec-client` is invoked in **dom0** as follows: - - `qrexec-client -d domX [-l local_program] user:some_command` - - - `user` may be substituted with the literal `DEFAULT`. In that case, - default Qubes user will be used to execute `some_command`. - -- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable -to **domX**. -- **dom0**: If `local_program` is set, `qrexec-client` executes it and uses -that child's stdin/stdout in place of its own when exchanging data with -`qrexec-agent` later. -- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`. -- **dom0**: `qrexec-daemon` sends `MSG_HELLO` header followed by `peer_info` -to `qrexec-client`. 
-- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by -`peer_info` to `qrexec-daemon`. -- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by -`exec_params` to `qrexec-daemon`. - - /* variable size */ - struct exec_params { - uint32_t connect_domain; /* target domain id */ - uint32_t connect_port; /* target vchan port for i/o exchange */ - char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */ - }; - - In this case, `connect_domain` and `connect_port` are set to 0. - -- **dom0**: `qrexec-daemon` replies to `qrexec-client` with -`MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline` -field. `connect_domain` is set to Qubes ID of **domX** and `connect_port` -is set to a vchan port allocated by `qrexec-daemon`. -- **dom0**: `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header followed -by `exec_params` to the associated **domX** `qrexec-agent` over -vchan. `connect_domain` is set to 0 (**dom0**), `connect_port` is the same -as sent to `qrexec-client`. `cmdline` is unchanged except that the literal -`DEFAULT` is replaced with actual user name, if present. -- **dom0**: `qrexec-client` disconnects from `qrexec-daemon`. -- **dom0**: `qrexec-client` starts a vchan server using the details received -from `qrexec-daemon` and waits for connection from **domX**'s `qrexec-agent`. -- **domX**: `qrexec-agent` receives `MSG_EXEC_CMDLINE` header followed by -`exec_params` from `qrexec-daemon` over vchan. -- **domX**: `qrexec-agent` connects to `qrexec-client` over vchan using the -details from `exec_params`. -- **domX**: `qrexec-agent` executes `some_command` in **domX** and connects -the child's stdin/stdout to the data vchan. If the process creation fails, -`qrexec-agent` sends `MSG_DATA_EXIT_CODE` to `qrexec-client` followed by -the status code (**int**) and disconnects from the data vchan. 
-- Data read from `some_command`'s stdout is sent to the data vchan using -`MSG_DATA_STDOUT` by `qrexec-agent`. `qrexec-client` passes data received as -`MSG_DATA_STDOUT` to its own stdout (or to `local_program`'s stdin if used). -- `qrexec-client` sends data read from local stdin (or `local_program`'s -stdout if used) to `qrexec-agent` over the data vchan using -`MSG_DATA_STDIN`. `qrexec-agent` passes data received as `MSG_DATA_STDIN` -to `some_command`'s stdin. -- `MSG_DATA_STDOUT` or `MSG_DATA_STDIN` with data `len` field set to 0 in -`msg_header` is an EOF marker. Peer receiving such message should close the -associated input/output pipe. -- When `some_command` terminates, **domX**'s `qrexec-agent` sends -`MSG_DATA_EXIT_CODE` header to `qrexec-client` followed by the exit code -(**int**). `qrexec-agent` then disconnects from the data vchan. - - -### domY: invoke execution of qubes service `qubes.SomeRpc` in domX and pass stdin/stdout ### - -- **domY**: `qrexec-client-vm` is invoked as follows: - - `qrexec-client-vm domX qubes.SomeRpc local_program [params]` - -- **domY**: `qrexec-client-vm` connects to `qrexec-agent` (via local -socket/named pipe). -- **domY**: `qrexec-client-vm` sends `trigger_service_params` data to -`qrexec-agent` (without filling the `request_id` field): - - struct trigger_service_params { - char service_name[64]; - char target_domain[32]; - struct service_params request_id; /* service request id */ - }; - - struct service_params { - char ident[32]; - }; - -- **domY**: `qrexec-agent` allocates a locally-unique (for this domain) -`request_id` (let's say `13`) and fills it in the `trigger_service_params` -struct received from `qrexec-client-vm`. -- **domY**: `qrexec-agent` sends `MSG_TRIGGER_SERVICE` header followed by -`trigger_service_params` to `qrexec-daemon` in **dom0** via vchan. -- **dom0**: **domY**'s `qrexec-daemon` executes `qrexec-policy`: `qrexec-policy -domY_id domY domX qubes.SomeRpc 13`. 
-- **dom0**: `qrexec-policy` evaluates if the RPC should be allowed or -denied. If the action is allowed it returns `0`, if the action is denied it -returns `1`. -- **dom0**: **domY**'s `qrexec-daemon` checks the exit code of `qrexec-policy`. - - If `qrexec-policy` returned **not** `0`: **domY**'s `qrexec-daemon` - sends `MSG_SERVICE_REFUSED` header followed by `service_params` to - **domY**'s `qrexec-agent`. `service_params.ident` is identical to the one - received. **domY**'s `qrexec-agent` disconnects its `qrexec-client-vm` - and RPC processing is finished. - - If `qrexec-policy` returned `0`, RPC processing continues. -- **dom0**: if `qrexec-policy` allowed the RPC, it executed `qrexec-client --d domX -c 13,domY,domY_id user:QUBESRPC qubes.SomeRpc domY`. -- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable -to **domX**. -- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`. -- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_HELLO` header followed by -`peer_info` to `qrexec-client`. -- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by -`peer_info` to **domX**'s`qrexec-daemon`. -- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by -`exec_params` to **domX**'s`qrexec-daemon` - - /* variable size */ - struct exec_params { - uint32_t connect_domain; /* target domain id */ - uint32_t connect_port; /* target vchan port for i/o exchange */ - char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */ - }; - - In this case, `connect_domain` is set to id of **domY** (from the `-c` - parameter) and `connect_port` is set to 0. `cmdline` field contains the - RPC to execute, in this case `user:QUBESRPC qubes.SomeRpc domY`. - -- **dom0**: **domX**'s `qrexec-daemon` replies to `qrexec-client` with -`MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline` -field. 
`connect_domain` is set to Qubes ID of **domX** and `connect_port` -is set to a vchan port allocated by **domX**'s `qrexec-daemon`. -- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header -followed by `exec_params` to **domX**'s `qrexec-agent`. `connect_domain` -and `connect_port` fields are the same as in the step above. `cmdline` is -set to the one received from `qrexec-client`, in this case `user:QUBESRPC -qubes.SomeRpc domY`. -- **dom0**: `qrexec-client` disconnects from **domX**'s `qrexec-daemon` -after receiving connection details. -- **dom0**: `qrexec-client` connects to **domY**'s `qrexec-daemon` and -exchanges `MSG_HELLO` as usual. -- **dom0**: `qrexec-client` sends `MSG_SERVICE_CONNECT` header followed by -`exec_params` to **domY**'s `qrexec-daemon`. `connect_domain` is set to ID -of **domX** (received from **domX**'s `qrexec-daemon`) and `connect_port` is -the one received as well. `cmdline` is set to request ID (`13` in this case). -- **dom0**: **domY**'s `qrexec-daemon` sends `MSG_SERVICE_CONNECT` header -followed by `exec_params` to **domY**'s `qrexec-agent`. Data fields are -unchanged from the step above. -- **domY**: `qrexec-agent` starts a vchan server on the port received in -the step above. It acts as a `qrexec-client` in this case because this is -a VM-VM connection. -- **domX**: `qrexec-agent` connects to the vchan server of **domY**'s -`qrexec-agent` (connection details were received before from **domX**'s -`qrexec-daemon`). -- After that, connection follows the flow of the previous example (dom0-VM). - diff --git a/doc.md b/doc.md index ef225308..be853c48 100644 --- a/doc.md +++ b/doc.md @@ -45,6 +45,7 @@ Core documentation for Qubes users. 
* [System Requirements](/doc/system-requirements/) * [Certified Hardware](/doc/certified-hardware/) * [Hardware Compatibility List (HCL)](/hcl/) + * [Hardware Testing](/doc/hardware-testing/) ### Downloading, Installing, and Upgrading Qubes @@ -63,7 +64,7 @@ Core documentation for Qubes users. * [Copying from (and to) Dom0](/doc/copy-from-dom0/) * [Updating Qubes OS](/doc/updating-qubes-os/) * [Installing and Updating Software in Dom0](/doc/software-update-dom0/) - * [Installing and Updating Software in VMs](/doc/software-update-vm/) + * [Installing and Updating Software in DomUs](/doc/software-update-domu/) * [Backup, Restoration, and Migration](/doc/backup-restore/) * [DisposableVMs](/doc/disposablevm/) * [Block (or Storage) Devices](/doc/block-devices/) @@ -77,12 +78,11 @@ Core documentation for Qubes users. ### Managing Operating Systems within Qubes * [TemplateVMs](/doc/templates/) - * [Template: Fedora](/doc/templates/fedora/) - * [Template: Fedora Minimal](/doc/templates/fedora-minimal/) - * [Template: Debian](/doc/templates/debian/) - * [Template: Debian Minimal](/doc/templates/debian-minimal/) + * [Fedora](/doc/templates/fedora/) + * [Debian](/doc/templates/debian/) + * [Minimal TemplateVMs](/doc/templates/minimal/) * [Windows](/doc/windows/) - * [HVM Domains](/doc/hvm/) + * [StandaloneVMs and HVMs](/doc/standalone-and-hvm/) ### Security in Qubes @@ -154,7 +154,6 @@ Core documentation for Qubes developers and advanced users. * [Qubes Core Admin Client](https://dev.qubes-os.org/projects/core-admin-client/en/latest/) * [Qubes Admin API](/news/2017/06/27/qubes-admin-api/) * [Qubes Core Stack](/news/2017/10/03/core3/) - * [Qrexec: command execution in VMs](/doc/qrexec3/) * [Qubes GUI virtualization protocol](/doc/gui/) * [Networking in Qubes](/doc/networking/) * [Implementation of template sharing and updating](/doc/template-implementation/) @@ -167,6 +166,8 @@ Core documentation for Qubes developers and advanced users. 
* [Dynamic memory management in Qubes](/doc/qmemman/) * [Implementation of DisposableVMs](/doc/dvm-impl/) * [Dom0 secure update mechanism](/doc/dom0-secure-updates/) + * [Qrexec: secure communication across domains](/doc/qrexec/) + * [Qubes RPC internals](/doc/qrexec-internals/) ### Debugging diff --git a/external/configuration-guides/tips-and-tricks.md b/external/configuration-guides/tips-and-tricks.md index f3419984..f937b32d 100644 --- a/external/configuration-guides/tips-and-tricks.md +++ b/external/configuration-guides/tips-and-tricks.md @@ -53,10 +53,3 @@ This applies also to any TemplateBasedVM relative to its parent TemplateVM, but Credit: [Joanna Rutkovska](https://twitter.com/rootkovska/status/832571372085850112) - -Trim for standalone AppVMs ---------------------- -The `qvm-trim-template` command is not available for a standalone AppVM. - -It is still possible to trim the AppVM disks by using the `fstrim --all` command from the appvm. -You can also add the `discard` option to the mount line in `/etc/fstab` inside the standalone AppVM if you want trimming to be performed automatically, but there may be a performance impact on writes and deletes. diff --git a/external/os-guides/pentesting/kali.md b/external/os-guides/pentesting/kali.md index 08e03362..5cda6f74 100644 --- a/external/os-guides/pentesting/kali.md +++ b/external/os-guides/pentesting/kali.md @@ -148,15 +148,9 @@ There are multiple ways to create a Kali Linux VM: [user@kali ~]$ sudo apt-get dist-upgrade [user@kali ~]$ sudo apt-get autoremove -8. Shutdown and trim `kali` template +8. Shut down `kali` template - - Shutdown `kali` template - - [user@kali ~]$ sudo shutdown -h now - - - In `dom0` console: - - [user@dom0 ~]$ qvm-trim-template kali + [user@kali ~]$ sudo shutdown -h now 9. Start image @@ -285,10 +279,9 @@ These instructions will show you how to upgrade a Debian TemplateVM to Kali Linu [user@kali-rolling ~]$ sudo apt-get dist-upgrade [user@kali-rolling ~]$ sudo apt-get autoremove -9. 
Shut down and trim the new template. +9. Shut down the new template. [user@dom0 ~]$ qvm-shutdown kali-rolling - [user@dom0 ~]$ qvm-trim-template kali-rolling 10. Ensure a terminal can be opened in the new template. diff --git a/external/os-guides/windows/windows-vm.md b/external/os-guides/windows/windows-vm.md index c90fe015..3c8be14c 100644 --- a/external/os-guides/windows/windows-vm.md +++ b/external/os-guides/windows/windows-vm.md @@ -162,7 +162,7 @@ To avoid that error we temporarily have to switch the video adapter to 'cirrus': qvm-features win7new video-model cirrus ~~~ -The VM is now ready to be started; the best practice is to use an installation ISO [located in a VM](/doc/hvm/#installing-an-os-in-an-hvm-qube): +The VM is now ready to be started; the best practice is to use an installation ISO [located in a VM](/doc/standalone-and-hvm/#installing-an-os-in-an-hvm): ~~~ qvm-start --cdrom=untrusted:/home/user/windows_install.iso win7new diff --git a/external/troubleshooting/remove-vm-manually.md b/external/troubleshooting/remove-vm-manually.md index 7d8d52b0..fbc1c145 100644 --- a/external/troubleshooting/remove-vm-manually.md +++ b/external/troubleshooting/remove-vm-manually.md @@ -32,5 +32,5 @@ When a template is marked as 'installed by package manager', but cannot be unins - If `installed_by_rpm` remains `True`, reboot your computer to bring qubes.xml in sync with qubesd, and try again to remove the template. -[normal method]: /doc/templates/#how-to-install-uninstall-reinstall-and-switch +[normal method]: /doc/templates/#uninstalling diff --git a/introduction/experts.md b/introduction/experts.md index efb79623..29ec32da 100644 --- a/introduction/experts.md +++ b/introduction/experts.md @@ -58,15 +58,15 @@ permalink: /experts/ + {% include footer.html %} diff --git a/introduction/faq.md b/introduction/faq.md index b0024a92..d0469729 100644 --- a/introduction/faq.md +++ b/introduction/faq.md @@ -118,7 +118,7 @@ Please refer to [this page](/doc/vm-sudo/). 
### Why is dom0 so old? Please see: -- [Why would one want to update software in dom0?](/doc/software-update-dom0/#why-would-one-want-to-install-or-update-software-in-dom0) +- [Installing and updating software in dom0](/doc/software-update-dom0/) - [Note on dom0 and EOL](/doc/supported-versions/#note-on-dom0-and-eol) ### Do you recommend coreboot as an alternative to vendor BIOS? @@ -421,7 +421,7 @@ For Debian: For Fedora: 1. (Recommended) Clone an existing Fedora TemplateVM -2. [Enable the appropriate RPMFusion repos in the desired Fedora TemplateVM.](/doc/software-update-vm/#rpmfusion-for-a-fedora-templatevm) +2. [Enable the appropriate RPMFusion repos in the desired Fedora TemplateVM.](/doc/software-update-domu/#rpmfusion-for-fedora-templatevms) 3. Install VLC in that TemplateVM: $ sudo dnf install vlc diff --git a/project-security/security.md b/project-security/security.md index a3ab976e..25eb0c82 100644 --- a/project-security/security.md +++ b/project-security/security.md @@ -33,6 +33,10 @@ Reporting Security Issues in Qubes OS If you believe you have found a security issue affecting Qubes OS, either directly or indirectly (e.g. the issue affects Xen in a configuration that is used in Qubes OS), then we would be more than happy to hear from you! We promise to treat any reported issue seriously and, if the investigation confirms that it affects Qubes, to patch it within a reasonable time and release a public [Qubes Security Bulletin][Security Bulletins] that describes the issue, discusses the potential impact of the vulnerability, references applicable patches or workarounds, and credits the discoverer. +Security Updates +---------------- + +Qubes security updates are obtained by [Updating Qubes OS]. 
The Qubes Security Team ----------------------- @@ -82,4 +86,6 @@ Please see [Why and How to Verify Signatures] for information about how to verif [Simon Gaiser (aka HW42)]: /team/#simon-gaiser-aka-hw42 [Joanna Rutkowska]: /team/#joanna-rutkowska [emeritus, canaries only]: /news/2018/11/05/qubes-security-team-update/ +[Updating Qubes OS]: /doc/updating-qubes-os/ + diff --git a/project-security/verifying-signatures.md b/project-security/verifying-signatures.md index e7677568..3b9ad1f8 100644 --- a/project-security/verifying-signatures.md +++ b/project-security/verifying-signatures.md @@ -62,6 +62,10 @@ This Qubes Master Signing Key was generated on and is kept only on a dedicated, There are several ways to get the Qubes Master Signing Key. + - If you have access to an existing Qubes installation, it's available in every VM ([except dom0]): + + $ gpg --import /usr/share/qubes/qubes-master-key.asc + - Fetch it with GPG: $ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc @@ -149,6 +153,10 @@ Now, when you import any of the legitimate Qubes developer keys and Release Sign The filename of the Release Signing Key for your version is `qubes-release-X-signing-key.asc`, where `X` is the major version number of your Qubes release. There are several ways to get the Release Signing Key for your Qubes release. + - If you have access to an existing Qubes installation, the release keys are available in dom0 in `/etc/pki/rpm-gpg/`. + These can be [copied][copy-from-dom0] into other VMs for further use. + In addition, every other VM contains the release key corresponding to that installation's release in `/etc/pki/rpm-gpg/`. 
+ - Fetch it with GPG: $ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-release-X-signing-key.asc @@ -443,9 +451,11 @@ If you still have a question, please address it to the [qubes-users mailing list [Troubleshooting FAQ]: #troubleshooting-faq [QMSK]: #1-get-the-qubes-master-signing-key-and-verify-its-authenticity [RSK]: #2-get-the-release-signing-key +[copy-from-dom0]: /doc/copy-from-dom0/#copying-from-dom0 [signature file]: #3-verify-your-qubes-iso [digest file]: #how-to-verify-qubes-iso-digests [Qubes repositories]: https://github.com/QubesOS [GPG documentation]: https://www.gnupg.org/documentation/ [qubes-users mailing list]: /support/#qubes-users +[except dom0]: https://github.com/QubesOS/qubes-issues/issues/2544 diff --git a/user/advanced-configuration/config-files.md b/user/advanced-configuration/config-files.md index ca1fc3f0..aea1f614 100644 --- a/user/advanced-configuration/config-files.md +++ b/user/advanced-configuration/config-files.md @@ -95,6 +95,9 @@ global: { #secure_copy_sequence = "Ctrl-Shift-v"; #windows_count_limit = 500; #audio_low_latency = false; + #log_level = 1; + #trayicon_mode = "border1"; + #startup_timeout = 91; }; # most of setting can be set per-VM basis @@ -122,8 +125,22 @@ Currently supported settings: - `secure_copy_sequence` and `secure_paste_sequence` - key sequences used to trigger secure copy and paste. -- `windows_count_limit` - limit on concurrent windows. - - `audio_low_latency` - force low-latency audio mode (about 40ms compared to 200-500ms by default). - Note that this will cause much higher CPU usage in dom0. + Note that this will cause much higher CPU usage in dom0. It's enabled by + default; disabling it may save CPU in dom0. +- `trayicon_mode` - defines the trayicon coloring mode.
Options are + - `bg` - color full icon background to the VM color + - `border1` - add 1px border at the icon edges + - `border2` - add 1px border 1px from the icon edges + - `tint` - tint icon to the VM color, can be used with additional + modifiers (you can enable multiple of them) + - `tint+border1,tint+border2` - same as tint, but also add a border + - `tint+saturation50` - same as tint, but reduce icon saturation by 50% + - `tint+whitehack` - same as tint, but change white pixels (0xffffff) to + almost-white (0xfefefe) + +- `log_level` - defines the logging verbosity. It can + have a value of 0 (only errors), 1 (some basic messages), or 2 (debug). + +- `startup_timeout` - the timeout (in seconds) for VM startup. diff --git a/user/advanced-configuration/disposablevm-customization.md b/user/advanced-configuration/disposablevm-customization.md index bd06c585..60fd6bea 100644 --- a/user/advanced-configuration/disposablevm-customization.md +++ b/user/advanced-configuration/disposablevm-customization.md @@ -48,9 +48,9 @@ Additionally you may want to set it as default DisposableVM Template: [user@dom0 ~]$ qubes-prefs default_dispvm custom-disposablevm-template -The above default is used whenever a qube request starting a new DisposableVM and do not specify which one (for example `qvm-open-in-dvm` tool). This can be also set in qube settings and will affect service calls from that qube. See [qrexec documentation](/doc/qrexec3/#extra-keywords-available-in-qubes-40-and-later) for details. +The above default is used whenever a qube requests starting a new DisposableVM and does not specify which one (for example, the `qvm-open-in-dvm` tool). This can also be set in qube settings and will affect service calls from that qube. See [qrexec documentation](/doc/qrexec/#specifying-vms-tags-types-targets-etc) for details.
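For reference, the `guid.conf` settings described in this hunk can be combined as in the following sketch. Values here are illustrative only, not recommendations:

```
global: {
  # force low-latency audio mode (higher CPU usage in dom0)
  audio_low_latency = true;

  # tint tray icons to the VM color and add a border at the icon edges
  trayicon_mode = "tint+border1";

  # 0 = only errors, 1 = some basic messages, 2 = debug
  log_level = 1;

  # time (in seconds) to wait for VM startup
  startup_timeout = 91;
};
```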
-If you wish to use the `fedora-minimal` template as a DisposableVM Template, see the "DisposableVM Template" use case under [fedora-minimal customization](/doc/templates/fedora-minimal/#customization). +If you wish to use a [Minimal TemplateVM](/doc/templates/minimal/) as a DisposableVM Template, please see the [Minimal TemplateVM](/doc/templates/minimal/) page. ## Customization of DisposableVM @@ -106,6 +106,8 @@ qvm-prefs provides_network true ~~~ Next, set the old `sys-` VM's autostart to false, and update any references to the old one. +In particular, make sure to update `/etc/qubes-rpc/policy/qubes.UpdatesProxy` in dom0. + For example, `qvm-prefs sys-firewall netvm `. See below for a complete example of a `sys-net` replacement: @@ -198,6 +200,7 @@ Using DisposableVMs in this manner is ideal for untrusted qubes which require pe [user@dom0 ~]$ qubes-prefs clockvm disp-sys-net +9. _(recommended)_ Allow templates to be updated via `disp-sys-net`. In dom0, edit `/etc/qubes-rpc/policy/qubes.UpdatesProxy` to change the target from `sys-net` to `disp-sys-net`. ### Create the sys-firewall DisposableVM diff --git a/user/advanced-configuration/managing-vm-kernel.md b/user/advanced-configuration/managing-vm-kernel.md index 10770d8a..2df4598d 100644 --- a/user/advanced-configuration/managing-vm-kernel.md +++ b/user/advanced-configuration/managing-vm-kernel.md @@ -9,7 +9,9 @@ redirect_from: VM kernel managed by dom0 ========================= -By default, VMs kernels are provided by dom0. This means that: +By default, VMs kernels are provided by dom0. +(See [here][dom0-kernel-upgrade] for information about upgrading kernels in dom0.) +This means that: 1. You can select the kernel version (using GUI VM Settings tool or `qvm-prefs` commandline tool); 2. You can modify kernel options (using `qvm-prefs` commandline tool); @@ -327,7 +329,10 @@ Booting to a kernel inside the template is not supported under `PVH`. 
In case of problems, you can access the VM console using `sudo xl console VMNAME` in dom0, then access the GRUB menu. You need to call it just after starting the VM (until `GRUB_TIMEOUT` expires); for example, in a separate dom0 terminal window. -In any case you can later access the VM's logs (especially the VM console log `guest-VMNAME.log`). +In any case you can later access the VM's logs (especially the VM console log `/var/log/xen/console/guest-VMNAME.log`). You can always set the kernel back to some dom0-provided value to fix a VM kernel installation. + +[dom0-kernel-upgrade]: /doc/software-update-dom0/#kernel-upgrade + diff --git a/user/advanced-configuration/newer-hardware-troubleshooting.md b/user/advanced-configuration/newer-hardware-troubleshooting.md index 23bd35d4..4ed30ba8 100644 --- a/user/advanced-configuration/newer-hardware-troubleshooting.md +++ b/user/advanced-configuration/newer-hardware-troubleshooting.md @@ -10,8 +10,8 @@ Troubleshooting newer hardware By default, the kernel that is installed in dom0 comes from the `kernel` package, which is an older Linux LTS kernel. For most cases this works fine since the Linux kernel developers backport fixes to this kernel, but for some newer hardware, you may run into issues. For example, the audio might not work if the sound card is too new for the LTS kernel. - -To fix this, you can try the `kernel-latest` package - though be aware that it's less tested! +To fix this, you can try the `kernel-latest` package -- though be aware that it's less tested! +(See [here][dom0-kernel-upgrade] for more information about upgrading kernels in dom0.) In dom0: ~~~ @@ -23,3 +23,7 @@ You can double-check that the boot used the newer kernel with `uname -r`, which Compare this with the output of `rpm -q kernel`. If the start of `uname -r` matches one of the versions printed by `rpm`, then you're still using the Linux LTS kernel, and you'll probably need to manually fix your boot settings. 
If `uname -r` reports a higher version number, then you've successfully booted with the kernel shipped by `kernel-latest`. + + +[dom0-kernel-upgrade]: /doc/software-update-dom0/#kernel-upgrade + diff --git a/user/advanced-configuration/rpc-policy.md b/user/advanced-configuration/rpc-policy.md index 9d39652b..fe516f5a 100644 --- a/user/advanced-configuration/rpc-policy.md +++ b/user/advanced-configuration/rpc-policy.md @@ -15,10 +15,10 @@ Here's an example of an RPC policy file in dom0: ``` [user@dom0 user ~]$ cat /etc/qubes-rpc/policy/qubes.FileCopy (...) -$tag:work $tag:work allow -$tag:work $anyvm deny -$anyvm $tag:work deny -$anyvm $anyvm ask +@tag:work @tag:work allow +@tag:work @anyvm deny +@anyvm @tag:work deny +@anyvm @anyvm ask ``` It has three columns (from left to right): source, destination, and permission. @@ -32,7 +32,7 @@ Now, the whole policy file is parsed from top to bottom. As soon as a rule is found that matches the action being evaluated, parsing stops. We can see what this means by looking at the second row. It says that we're **denied** from attempting to copy a file **from** any VM tagged with "work" **to** any VM whatsoever. -(That's what the `$anyvm` keyword means -- literally any VM in the system). +(That's what the `@anyvm` keyword means -- literally any VM in the system). But, wait a minute, didn't we just say (in the first row) that all the VMs tagged with work are **allowed** to copy files to each other? That's exactly right. The first and second rows contradict each other, but that's intentional. @@ -46,7 +46,7 @@ Rather, it means that only VMs that match an earlier rule can do so (in this cas The fourth and final row says that we're **asked** (i.e., prompted) to copy files **from** any VM in the system **to** any VM in the system. (This rule was already in the policy file by default. We added the first three.) 
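The top-to-bottom, first-match evaluation described in the rpc-policy hunk above can be sketched in a few lines of Python. This is an illustrative model only (the function names and the simplified pattern matching are mine), not the real qrexec policy parser:

```python
# Model of first-match RPC policy evaluation: rules are checked top to
# bottom, and the first rule whose source and destination patterns both
# match decides the action; later rules are never consulted.

def matches(pattern, vm_name, vm_tags):
    """Check one policy pattern against a VM name and its set of tags."""
    if pattern == "@anyvm":
        return True
    if pattern.startswith("@tag:"):
        return pattern[len("@tag:"):] in vm_tags
    return pattern == vm_name  # literal VM name

def evaluate(rules, src, src_tags, dst, dst_tags):
    """Return the action of the first matching rule, or deny by default."""
    for src_pat, dst_pat, action in rules:
        if matches(src_pat, src, src_tags) and matches(dst_pat, dst, dst_tags):
            return action  # first match wins
    return "deny"

# The example policy from the text above:
rules = [
    ("@tag:work", "@tag:work", "allow"),
    ("@tag:work", "@anyvm",    "deny"),
    ("@anyvm",    "@tag:work", "deny"),
    ("@anyvm",    "@anyvm",    "ask"),
]
```

With these rules, two VMs tagged "work" hit the first rule and get "allow"; a work VM copying to an untagged VM falls through to the second rule and is denied, exactly as the prose explains.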
-Note that it wouldn't make sense to add any rules after this one, since every possible pair of VMs will match the `$anyvm $anyvm` pattern. +Note that it wouldn't make sense to add any rules after this one, since every possible pair of VMs will match the `@anyvm @anyvm` pattern. Therefore, parsing will always stop at this rule, and no rules below it will ever be evaluated. All together, the three rules we added say that all VMs tagged with "work" are allowed to copy files to each other; however, they're denied from copying files to other VMs (without the "work" tag), and other VMs (without the "work" tag) are denied from copying files to them. @@ -54,5 +54,8 @@ The fourth rule means that the user gets prompted for any situation not already Further details about how this system works can be found in [Qrexec: command execution in VMs][qrexec3]. +(***Note**: the `$` character is deprecated in qrexec keywords -- please use `@` instead (e.g. `@anyvm`). +For more information, see the bulletin [here](https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-038-2018.txt).*) + [qrexec3]: /doc/qrexec3/ diff --git a/user/common-tasks/block-devices.md b/user/common-tasks/block-devices.md index 1c58db63..460aa625 100644 --- a/user/common-tasks/block-devices.md +++ b/user/common-tasks/block-devices.md @@ -26,7 +26,9 @@ Qubes OS supports the ability to attach a USB drive (or just its partitions) to Attaching USB drives is integrated into the Devices Widget: ![device manager icon] Simply insert your USB drive and click on the widget. You will see multiple entries for your USB drive; typically, `sys-usb:sda`, `sys-usb:sda1`, and `sys-usb:2-1` for example. -Entries starting with a number (e.g. here `2-1`) are the [whole usb-device][USB]. Entries without a number (e.g. here `sda`) are the whole block-device. Other entries are partitions of that block-device (e.r. here `sda1`). +Entries starting with a number (e.g. here `2-1`) are the [whole usb-device][USB]. 
+Entries without a number (e.g. here `sda`) are the whole block-device. +Other entries are partitions of that block-device (e.g. here `sda1`). The simplest option is to attach the entire block drive. In our example, this is `sys-usb:sda`, so hover over it. @@ -40,7 +42,8 @@ See below for more detailed steps. ## Block Devices in VMs ## -If not specified otherwise, block devices will show up as `/dev/xvdi*` in a linux VM, where `*` may be the partition-number. If a block device isn't automatically mounted after attaching, open a terminal in the VM and execute: +If not specified otherwise, block devices will show up as `/dev/xvdi*` in a Linux VM, where `*` may be the partition-number. +If a block device isn't automatically mounted after attaching, open a terminal in the VM and execute: cd ~ mkdir mnt @@ -60,9 +63,11 @@ To specify this device node name, you need to use the command line tool and its The command-line tool you may use to mount whole USB drives or their partitions is `qvm-block`, a shortcut for `qvm-device block`. -`qvm-block` won't recognise your device by any given name, but rather the device-node the sourceVM assigns. So make sure you have the drive available in the sourceVM, then list the available block devices (step 1.) to find the corresponding device-node. +`qvm-block` won't recognise your device by any given name, but rather the device-node the sourceVM assigns. +So make sure you have the drive available in the sourceVM, then list the available block devices (step 1.) to find the corresponding device-node. -In case of a USB-drive, make sure it's attached to your computer. If you don't see anything that looks like your drive, run `sudo udevadm trigger --action=change` in your USB-qube (typically `sys-usb`) +In case of a USB-drive, make sure it's attached to your computer. +If you don't see anything that looks like your drive, run `sudo udevadm trigger --action=change` in your USB-qube (typically `sys-usb`). 1.
In a dom0 console (running as a normal user), list all available block devices: @@ -154,13 +159,16 @@ To attach a file as block device to another qube, first turn it into a loopback sudo losetup -f --show /path/to/file - [This command][losetup] will create the device node `/dev/loop0` or, if that is already in use, increase the trailing integer until that name is still available. Afterwards it prints the device-node-name it found. + [This command][losetup] will create the device node `/dev/loop0` or, if that is already in use, increase the trailing integer until a free name is found. + Afterwards it prints the device-node-name it found. - 2. If you want to use the GUI, you're done. Click the Device Manager ![device manager icon] and select the `loop0`-device to attach it to another qube. If you rather use the command line, continue: + 2. If you want to use the GUI, you're done. + Click the Device Manager ![device manager icon] and select the `loop0`-device to attach it to another qube. If you'd rather use the command line, continue: - In dom0, run `qvm-block` to display known block devices. The newly created loop device should show up: + In dom0, run `qvm-block` to display known block devices. + The newly created loop device should show up: ~]$ qvm-block BACKEND:DEVID DESCRIPTION USED BY @@ -177,12 +185,15 @@ To attach a file as block device to another qube, first turn it into a loopback ## Additional Attach Options ## -Attaching a block device through the command line offers additional customisation options, specifiable via the `--option`/`-o` option. (Yes, confusing wording, there's an [issue for that](https://github.com/QubesOS/qubes-issues/issues/4530).) +Attaching a block device through the command line offers additional customisation options, specifiable via the `--option`/`-o` option. +(Yes, confusing wording, there's an [issue for that](https://github.com/QubesOS/qubes-issues/issues/4530).)
### frontend-dev ### -This option allows you to specify the name of the device node made available in the targetVM. This defaults to `xvdi` or, if already occupied, the first available device node name in alphabetical order. (The next one tried will be `xvdj`, then `xvdk`, and so on ...) +This option allows you to specify the name of the device node made available in the targetVM. +This defaults to `xvdi` or, if already occupied, the first available device node name in alphabetical order. +(The next one tried will be `xvdj`, then `xvdk`, and so on ...) usage example: @@ -193,7 +204,8 @@ This command will attach the partition `sda1` to `work` as `/dev/xvdz`. ### read-only ### -Attach device in read-only mode. Protects the block device in case you don't trust the targetVM. +Attach device in read-only mode. +Protects the block device in case you don't trust the targetVM. If the device is a read-only device, this option is forced true. @@ -210,7 +222,8 @@ The two commands are equivalent. ### devtype ### -Usually, a block device is attached as disk. In case you need to attach a block device as cdrom, this option allows that. +Usually, a block device is attached as disk. +In case you need to attach a block device as cdrom, this option allows that. usage example: diff --git a/user/common-tasks/copy-paste.md b/user/common-tasks/copy-paste.md index b58646f7..9a2543bb 100644 --- a/user/common-tasks/copy-paste.md +++ b/user/common-tasks/copy-paste.md @@ -11,23 +11,34 @@ redirect_from: Copy and Paste between domains ============================== -Qubes fully supports secure copy and paste operation between domains. In order to copy a clipboard from domain A to domain B, follow those steps: +Qubes fully supports secure copy and paste operation between domains. +In order to copy a clipboard from domain A to domain B, follow those steps: -1. Click on the application window in domain A where you have selected text for copying. 
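The default `frontend-dev` naming rule described above (use `xvdi`, or if occupied the first free name in alphabetical order) can be modeled as follows. This is a hypothetical illustration; the real assignment is performed by Qubes, not by this function:

```python
# Sketch of the default frontend device-node selection: start at xvdi
# and take the first name not already occupied (xvdj, xvdk, ...).
import string

def next_frontend_dev(occupied):
    """Return the first free xvd* name, starting from xvdi."""
    letters = string.ascii_lowercase
    for letter in letters[letters.index("i"):]:
        name = "xvd" + letter
        if name not in occupied:
            return name
    raise RuntimeError("no free xvd* device node")
```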
Then use the *app-specific* hot-key (or menu option) to copy this into domain's local clipboard (in other words: do the copy operation as usual, in most cases by pressing Ctrl-C). -2. Then (when the app in domain A is still in focus) press Ctrl-Shift-C magic hot-key. This will tell Qubes that we want to select this domain's clipboard for *global copy* between domains. -3. Now select the destination app, running in domain B, and press Ctrl-Shift-V, another magic hot-key that will tell Qubes to make the clipboard marked in the previous step available to apps running in domain B. This step is necessary because it ensures that only domain B will get access to the clipboard copied from domain A, and not any other domain that might be running in the system. +1. Click on the application window in domain A where you have selected text for copying. + Then use the *app-specific* hot-key (or menu option) to copy this into the domain's local clipboard (in other words: do the copy operation as usual, in most cases by pressing Ctrl-C). +2. Then (when the app in domain A is still in focus) press the Ctrl-Shift-C magic hot-key. + This will tell Qubes that we want to select this domain's clipboard for *global copy* between domains. +3. Now select the destination app, running in domain B, and press Ctrl-Shift-V, another magic hot-key that will tell Qubes to make the clipboard marked in the previous step available to apps running in domain B. + This step is necessary because it ensures that only domain B will get access to the clipboard copied from domain A, and not any other domain that might be running in the system. 4. Now, in the destination app use the app-specific key combination (usually Ctrl-V) for pasting the clipboard. Note that the global clipboard will be cleared after step \#3, to prevent accidental leakage to another domain, if the user accidentally pressed Ctrl-Shift-V later.
-This 4-step process might look complex, but after some little practice it really is very easy and fast. At the same time it provides the user with full control over who has access to the clipboard. +This 4-step process might look complex, but after a little practice it really is very easy and fast. +At the same time it provides the user with full control over who has access to the clipboard. -Note that only simple plain text copy/paste is supported between AppVMs. This is discussed in a bit more detail in [this message](https://groups.google.com/group/qubes-devel/msg/57fe6695eb8ec8cd). +Note that only simple plain text copy/paste is supported between AppVMs. +This is discussed in a bit more detail in [this message](https://groups.google.com/group/qubes-devel/msg/57fe6695eb8ec8cd). On Copy/Paste Security ---------------------- -The scheme is *secure* because it doesn't allow other VMs to steal the content of the clipboard. However, one should keep in mind that performing a copy and paste operation from *less trusted* to *more trusted* domain can always be potentially insecure, because the data that we insert might potentially try to exploit some hypothetical bug in the destination VM (e.g. the seemingly innocent link that we copy from untrusted domain, might turn out to be, in fact, a large buffer of junk that, when pasted into the destination VM's word processor could exploit a hypothetical bug in the undo buffer). This is a general problem and applies to any data transfer between *less trusted to more trusted* domains. It even applies to copying files between physically separate machines (air-gapped) systems. So, you should always copy clipboard and data only from *more trusted* to *less trusted* domains. +The scheme is *secure* because it doesn't allow other VMs to steal the content of the clipboard.
+However, one should keep in mind that performing a copy and paste operation from *less trusted* to *more trusted* domain can always be potentially insecure, because the data that we insert might potentially try to exploit some hypothetical bug in the destination VM (e.g. +the seemingly innocent link that we copy from an untrusted domain might turn out to be, in fact, a large buffer of junk that, when pasted into the destination VM's word processor, could exploit a hypothetical bug in the undo buffer). +This is a general problem and applies to any data transfer between *less trusted to more trusted* domains. +It even applies to copying files between physically separated (air-gapped) systems. +So, you should always copy clipboard and data only from *more trusted* to *less trusted* domains. See also [this article](https://blog.invisiblethings.org/2011/03/13/partitioning-my-digital-life-into.html) for more information on this topic, and some ideas of how we might solve this problem in some future version of Qubes. @@ -47,11 +58,12 @@ The Qubes clipboard [RPC policy] is configurable in: /etc/qubes-rpc/policy/qubes.ClipboardPaste ~~~ -You may wish to configure this policy in order to prevent user error. For example, if you are certain that you never wish to paste *into* your "vault" AppVM (and it is highly recommended that you do not), then you should edit the policy as follows: +You may wish to configure this policy in order to prevent user error. +For example, if you are certain that you never wish to paste *into* your "vault" AppVM (and it is highly recommended that you do not), then you should edit the policy as follows: ~~~ -$anyvm vault deny -$anyvm $anyvm ask +@anyvm vault deny +@anyvm @anyvm ask ~~~ Shortcut Configuration
A dialog box will appear asking for the name of the destination qube (qube B). -3. A confirmation dialog box will appear(this will be displayed by Dom0, so none of the qubes can fake your consent). After you click ok, qube B will be started if it is not already running, the file copy operation will start, and the files will be copied into the following folder in qube B: +3. A confirmation dialog box will appear (this will be displayed by Dom0, so none of the qubes can fake your consent). + After you click OK, qube B will be started if it is not already running, the file copy operation will start, and the files will be copied into the following folder in qube B: `/home/user/QubesIncoming/` @@ -45,11 +46,14 @@ qvm-move [--without-progress] file [file]+ On inter-qube file copy security ---------------------------------- -The scheme is *secure* because it doesn't allow other qubes to steal the files that are being copied, and also doesn't allow the source qube to overwrite arbitrary files on the destination qube. Also, Qubes's file copy scheme doesn't use any sort of virtual block devices for file copy -- instead we use Xen shared memory, which eliminates lots of processing of untrusted data. For example, the receiving qube is *not* forced to parse untrusted partitions or file systems. In this respect our file copy mechanism provides even more security than file copy between two physically separated (air-gapped) machines! +The scheme is *secure* because it doesn't allow other qubes to steal the files that are being copied, and also doesn't allow the source qube to overwrite arbitrary files on the destination qube. +Also, Qubes's file copy scheme doesn't use any sort of virtual block devices for file copy -- instead we use Xen shared memory, which eliminates lots of processing of untrusted data. +For example, the receiving qube is *not* forced to parse untrusted partitions or file systems.
+In this respect our file copy mechanism provides even more security than file copy between two physically separated (air-gapped) machines! -However, one should keep in mind that performing a data transfer from *less trusted* to *more trusted* qubes can always be potentially insecure, because the data that we insert might potentially try to exploit some hypothetical bug in the destination qube (e.g. a seemingly innocent JPEG that we copy from an untrusted qube might contain a specially crafted exploit for a bug in JPEG parsing application in the destination qube). This is a general problem and applies to any data transfer between *less trusted to more trusted* qubes. It even applies to the scenario of copying files between air-gapped machines. So, you should always copy data only from *more trusted* to *less trusted* qubes. +However, one should keep in mind that performing a data transfer from *less trusted* to *more trusted* qubes can always be potentially insecure, because the data that we insert might potentially try to exploit some hypothetical bug in the destination qube (e.g. a seemingly innocent JPEG that we copy from an untrusted qube might contain a specially crafted exploit for a bug in JPEG parsing application in the destination qube). +This is a general problem and applies to any data transfer between *less trusted to more trusted* qubes. +It even applies to the scenario of copying files between air-gapped machines. +So, you should always copy data only from *more trusted* to *less trusted* qubes. See also [this article](https://blog.invisiblethings.org/2011/03/13/partitioning-my-digital-life-into.html) for more information on this topic, and some ideas of how we might solve this problem in some future version of Qubes. 
- -You may also want to read how to [revoke "Yes to All" authorization](/doc/qrexec3/#revoking-yes-to-all-authorization) - diff --git a/user/common-tasks/device-handling.md b/user/common-tasks/device-handling.md index a85922cf..b882c1e1 100644 --- a/user/common-tasks/device-handling.md +++ b/user/common-tasks/device-handling.md @@ -11,14 +11,18 @@ redirect_from: # Device Handling # -This is an overview of device handling in Qubes OS. For specific devices ([block], [USB] and [PCI] devices), please visit their respective pages. +This is an overview of device handling in Qubes OS. +For specific devices ([block], [USB] and [PCI] devices), please visit their respective pages. -**Important security warning:** Device handling comes with many security implications. Please make sure you carefully read and understand the **[security considerations]**. +**Important security warning:** Device handling comes with many security implications. +Please make sure you carefully read and understand the **[security considerations]**. ## Introduction ## -The interface to deal with devices of all sorts was unified in Qubes 4.0 with the `qvm-device` command and the Qubes Devices Widget. In Qubes 3.X, the Qubes VM Manager dealt with attachment as well. This functionality was moved to the Qubes Device Widget, the tool tray icon with a yellow square located in the top right of your screen by default. +The interface to deal with devices of all sorts was unified in Qubes 4.0 with the `qvm-device` command and the Qubes Devices Widget. +In Qubes 3.X, the Qubes VM Manager dealt with attachment as well. +This functionality was moved to the Qubes Device Widget, the tool tray icon with a yellow square located in the top right of your screen by default. 
There are currently four categories of devices Qubes understands: - Microphones @@ -26,31 +30,41 @@ There are currently four categories of devices Qubes understands: - USB devices - PCI devices -Microphones, block devices and USB devices can be attached with the GUI-tool. PCI devices can be attached using the Qube Settings, but require a VM reboot. +Microphones, block devices and USB devices can be attached with the GUI-tool. +PCI devices can be attached using the Qube Settings, but require a VM reboot. ## General Qubes Device Widget Behavior And Handling ## -When clicking on the tray icon (which looks similar to this): ![SD card and thumbdrive][device manager icon] several device-classes separated by lines are displayed as tooltip. Block devices are displayed on top, microphones one below and USB-devices at the bottom. +When clicking on the tray icon (which looks similar to this): ![SD card and thumbdrive][device manager icon] several device-classes separated by lines are displayed as tooltip. +Block devices are displayed on top, microphones one below and USB-devices at the bottom. On most laptops, integrated hardware such as cameras and fingerprint-readers are implemented as USB-devices and can be found here. ### Attaching Using The Widget ### -Click the tray icon. Hover on a device you want to attach to a VM. A list of running VMs (except dom0) appears. Click on one and your device will be attached! +Click the tray icon. +Hover on a device you want to attach to a VM. +A list of running VMs (except dom0) appears. +Click on one and your device will be attached! ### Detaching Using The Widget ### -To detach a device, click the Qubes Devices Widget icon again. Attached devices are displayed in bold. Hover the one you want to detach. A list of VMs appears, one showing the eject symbol: ![eject icon] +To detach a device, click the Qubes Devices Widget icon again. +Attached devices are displayed in bold. +Hover the one you want to detach. 
+A list of VMs appears, one showing the eject symbol: ![eject icon] ### Attaching a Device to Several VMs ### -Only `mic` should be attached to more than one running VM. You may *assign* a device to more than one VM (using the [`--persistent`][#attaching-devices] option), however, only one of them can be started at the same time. +Only `mic` should be attached to more than one running VM. +You may *assign* a device to more than one VM (using the [`--persistent`][#attaching-devices] option), however, only one of them can be started at the same time. -But be careful: There is a [bug in `qvm-device block` or `qvm-block`][i4692] which will allow you to *attach* a block device to two running VMs. Don't do that! +But be careful: There is a [bug in `qvm-device block` or `qvm-block`][i4692] which will allow you to *attach* a block device to two running VMs. +Don't do that! ## General `qvm-device` Command Line Tool Behavior ## @@ -60,7 +74,8 @@ All devices, including PCI-devices, may be attached from the commandline using t ### Device Classes ### -`qvm-device` expects DEVICE_CLASS as first argument. DEVICE_CLASS can be one of +`qvm-device` expects DEVICE_CLASS as first argument. +DEVICE_CLASS can be one of - `pci` - `usb` @@ -85,7 +100,9 @@ These three options are always available: - `--verbose`, `-v` - increase verbosity - `--quiet`, `-q` - decrease verbosity -A full command consists of one DEVICE_CLASS and one action. If no action is given, list is implied. DEVICE_CLASS however is required. +A full command consists of one DEVICE_CLASS and one action. +If no action is given, list is implied. +DEVICE_CLASS however is required. **SYNOPSIS**: `qvm-device DEVICE_CLASS {action} [action-specific arguments] [options]` @@ -98,12 +115,16 @@ Actions are applicable to every DEVICE_CLASS and expose some additional options. ### Listing Devices ### -The `list` action lists known devices in the system. `list` accepts VM-names to narrow down listed devices. 
Devices available in, as well as attached to the named VMs will be listed. +The `list` action lists known devices in the system. +`list` accepts VM-names to narrow down listed devices. +Devices available in, as well as attached to the named VMs will be listed. `list` accepts two options: - - `--all` - equivalent to specifying every VM name after `list`. No VM-name implies `--all`. - - `--exclude` - exclude VMs from `--all`. Requires `--all`. + - `--all` - equivalent to specifying every VM name after `list`. +No VM-name implies `--all`. + - `--exclude` - exclude VMs from `--all`. +Requires `--all`. **SYNOPSIS** `qvm-device DEVICE_CLASS {list|ls|l} [--all [--exclude VM [VM [...]]] | VM [VM [...]]]` @@ -111,11 +132,15 @@ The `list` action lists known devices in the system. `list` accepts VM-names to ### Attaching Devices ### -The `attach` action assigns an exposed device to a VM. This makes the device available in the VM it's attached to. Required argument are targetVM and sourceVM:deviceID. (sourceVM:deviceID can be determined from `list` output) +The `attach` action assigns an exposed device to a VM. +This makes the device available in the VM it's attached to. +Required arguments are targetVM and sourceVM:deviceID. +(sourceVM:deviceID can be determined from `list` output) `attach` accepts two options: - - `--persistent` - attach device on targetVM-boot. If the device is unavailable (physically missing or sourceVM not started), booting the targetVM fails. + - `--persistent` - attach device on targetVM-boot. +If the device is unavailable (physically missing or sourceVM not started), booting the targetVM fails. - `--option`, `-o` - set additional options specific to DEVICE_CLASS. **SYNOPSIS** @@ -124,7 +149,9 @@ The `attach` action assigns an exposed device to a VM. This makes the device ava ### Detaching Devices ### -The `detach` action removes an assigned device from a targetVM. It won't be available afterwards anymore.
Though it tries to do so gracefully, beware that data-connections might be broken unexpectedly, so close any transaction before detaching a device! +The `detach` action removes an assigned device from a targetVM. +It won't be available afterwards anymore. +Though it tries to do so gracefully, beware that data-connections might be broken unexpectedly, so close any transaction before detaching a device! If no specific `sourceVM:deviceID` combination is given, *all devices of that DEVICE_CLASS will be detached.* diff --git a/user/common-tasks/disposablevm.md b/user/common-tasks/disposablevm.md index ec8b855b..f59be63f 100644 --- a/user/common-tasks/disposablevm.md +++ b/user/common-tasks/disposablevm.md @@ -68,7 +68,8 @@ This is a change in behaviour from R3.2, where DisposableVMs would inherit the s Therefore, launching a DisposableVM from an AppVM will result in it using the network/firewall settings of the DisposableVM Template on which it is based. For example, if an AppVM uses sys-net as its NetVM, but the default system DisposableVM uses sys-whonix, any DisposableVM launched from this AppVM will have sys-whonix as its NetVM. -**Warning:** The opposite is also true. This means if you have changed anon-whonix's `default_dispvm` to use the system default, and the system default DisposableVM uses sys-net, launching a DisposableVM from inside anon-whonix will result in the DisposableVM using sys-net. +**Warning:** The opposite is also true. +This means if you have changed anon-whonix's `default_dispvm` to use the system default, and the system default DisposableVM uses sys-net, launching a DisposableVM from inside anon-whonix will result in the DisposableVM using sys-net. A DisposableVM launched from the Start Menu inherits the NetVM and firewall settings of the DisposableVM Template on which it is based. Note that changing the "NetVM" setting for the system default DisposableVM Template *does* affect the NetVM of DisposableVMs launched from the Start Menu. 
@@ -118,10 +119,11 @@ Note that the `qvm-open-in-dvm` process will not exit until you close the applic ## Starting an arbitrary program in a DisposableVM from an AppVM ## -Sometimes it can be useful to start an arbitrary program in a DisposableVM. This can be done from an AppVM by running +Sometimes it can be useful to start an arbitrary program in a DisposableVM. +This can be done from an AppVM by running ~~~ -[user@vault ~]$ qvm-run '$dispvm' xterm +[user@vault ~]$ qvm-run '@dispvm' xterm ~~~ The created DisposableVM can be accessed via other tools (such as `qvm-copy-to-vm`) using its `disp####` name as shown in the Qubes Manager or `qvm-ls`. @@ -153,6 +155,21 @@ $ qvm-open-in-vm @dispvm:online-dvm-template https://www.qubes-os.org This will create a new DisposableVM based on `online-dvm-template`, open the default web browser in that DisposableVM, and navigate to `https://www.qubes-os.org`. +#### Example of RPC policies to allow this behavior + +In dom0, add the following line at the beginning of the file `/etc/qubes-rpc/policy/qubes.OpenURL` +~~~ +@anyvm @dispvm:online-dvm-template allow +~~~ +This line means: +- FROM: Any VM +- TO: A DisposableVM based on the `online-dvm-template` TemplateVM +- WHAT: Allow sending an "Open URL" request + +In other words, any VM will be allowed to create a new DisposableVM based on `online-dvm-template` and open a URL inside of that DisposableVM. + +More information about RPC policies for DisposableVMs can be found [here][qrexec]. + ## Customizing DisposableVMs ## @@ -162,4 +179,4 @@ Full instructions can be found [here](/doc/disposablevm-customization/). 
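Rule order matters here: qrexec evaluates a policy file top-down and the first matching line wins, which is why the example rule goes at the *beginning* of `qubes.OpenURL`. A minimal, self-contained sketch of that first-match behavior (this uses a throwaway copy; in dom0 the real file is `/etc/qubes-rpc/policy/qubes.OpenURL`, and real qrexec matches both source and target columns, not just a `grep`):

```shell
# Build a throwaway policy file with an allow rule above a deny rule.
policy=$(mktemp)
cat > "$policy" <<'EOF'
@anyvm @dispvm:online-dvm-template allow
@anyvm @anyvm deny
EOF
# A lookup stops at the first line whose source column matches, so the
# allow rule placed at the top takes precedence over the deny below it:
grep -m 1 '^@anyvm ' "$policy"   # prints: @anyvm @dispvm:online-dvm-template allow
rm -f "$policy"
```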
[DisposableVM Template]: /doc/glossary/#disposablevm-template - +[qrexec]: /doc/qrexec/#qubes-rpc-administration diff --git a/user/common-tasks/full-screen-mode.md b/user/common-tasks/full-screen-mode.md index 3ebd6a94..1ec7f8fa 100644 --- a/user/common-tasks/full-screen-mode.md +++ b/user/common-tasks/full-screen-mode.md @@ -14,7 +14,9 @@ Enabling Full Screen Mode for select VMs What is full screen mode? ------------------------- -Normally Qubes GUI virtualization daemon restricts the VM from "owning" the full screen, ensuring that there are always clearly marked decorations drawn by the trusted Window Manager around each of the VMs window. This allows the user to easily realize to which domain a specific window belongs. See the [screenshots](/doc/QubesScreenshots/) for better understanding. +Normally Qubes GUI virtualization daemon restricts the VM from "owning" the full screen, ensuring that there are always clearly marked decorations drawn by the trusted Window Manager around each of the VMs window. +This allows the user to easily realize to which domain a specific window belongs. +See the [screenshots](/doc/QubesScreenshots/) for better understanding. Why is full screen mode potentially dangerous? ---------------------------------------------- @@ -24,8 +26,12 @@ If one allowed one of the VMs to "own" the full screen, e.g. to show a movie on Secure use of full screen mode ------------------------------ -However, it is possible to deal with full screen mode in a secure way assuming there are mechanisms that can be used at any time to show the full desktop, and which cannot be intercepted by the VM. 
An example of such a mechanism is the KDE's "Present Windows" and "Desktop Grid" effects, which are similar to Mac's "Expose" effect, and which can be used to immediately detect potential "GUI forgery", as they cannot be intercepted by any of the VM (as the GUID never passes down the key combinations that got consumed by KDE Window Manager), and so the VM cannot emulate those. Those effects are enabled by default in KDE once Compositing gets enabled in KDE (System Settings -\> Desktop -\> Enable Desktop Effects), which is recommended anyway. By default they are triggered by Ctrl-F8 and Ctrl-F9 key combinations, but can also be reassigned to other shortcuts. -Another option is to use Alt+Tab for switching windows. This shortcut is also handled by dom0. +However, it is possible to deal with full screen mode in a secure way assuming there are mechanisms that can be used at any time to show the full desktop, and which cannot be intercepted by the VM. +An example of such a mechanism is the KDE's "Present Windows" and "Desktop Grid" effects, which are similar to Mac's "Expose" effect, and which can be used to immediately detect potential "GUI forgery", as they cannot be intercepted by any of the VM (as the GUID never passes down the key combinations that got consumed by KDE Window Manager), and so the VM cannot emulate those. +Those effects are enabled by default in KDE once Compositing gets enabled in KDE (System Settings -\> Desktop -\> Enable Desktop Effects), which is recommended anyway. +By default they are triggered by Ctrl-F8 and Ctrl-F9 key combinations, but can also be reassigned to other shortcuts. +Another option is to use Alt+Tab for switching windows. +This shortcut is also handled by dom0. Enabling full screen mode for select VMs ---------------------------------------- @@ -60,11 +66,8 @@ global: { Be sure to restart the VM(s) after modifying this file, for the changes to take effect. 
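For orientation, a per-VM override in `/etc/qubes/guid.conf` typically looks like the fragment below. The qube name `personal` is an example, and the option name should be checked against the comments in your own `guid.conf`, as it can vary between releases:

```
VM: {
  personal: {
    # allow this particular qube to use full screen mode
    allow_fullscreen = true;
  };
};
```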
-**Note:** Regardless of the settings above, you can always put a window into -fullscreen mode in Xfce4 using the trusted window manager by right-clicking on -a window's title bar and selecting "Fullscreen". This functionality should still -be considered safe, since a VM window still can't voluntarily enter fullscreen -mode. The user must select this option from the trusted window manager in dom0. -To exit fullscreen mode from here, press `alt` + `space` to bring up the title -bar menu again, then select "Leave Fullscreen". +**Note:** Regardless of the settings above, you can always put a window into fullscreen mode in Xfce4 using the trusted window manager by right-clicking on a window's title bar and selecting "Fullscreen". +This functionality should still be considered safe, since a VM window still can't voluntarily enter fullscreen mode. +The user must select this option from the trusted window manager in dom0. +To exit fullscreen mode from here, press `alt` + `space` to bring up the title bar menu again, then select "Leave Fullscreen". For StandaloneHVMs, you should set the screen resolution in the qube to that of the host, (or larger), *before* setting fullscreen mode in Xfce4. diff --git a/user/common-tasks/optical-discs.md b/user/common-tasks/optical-discs.md index 5b11024c..d8fb6f91 100644 --- a/user/common-tasks/optical-discs.md +++ b/user/common-tasks/optical-discs.md @@ -18,5 +18,8 @@ Currently, the only options for reading and recording optical discs (e.g., CDs, 3. Use a SATA optical drive attached to dom0. (**Caution:** This option is [potentially dangerous](/doc/security-guidelines/#dom0-precautions).) -To access an optical disc via USB follow the [typical procedure for attaching a USB device](/doc/usb-devices/#with-the-command-line-tool), then check with the **Qubes Devices** widget to see what device in the target qube the USB optical drive was attached to. Typically this would be `sr0`. 
For example, if `sys-usb` has device `3-2` attached to the `work` qube's `sr0`, you would mount it with `mount /dev/sr0 /mnt/removable`. You could also write to a disc with `wodim -v dev=/dev/sr0 -eject /home/user/Qubes.iso`. +To access an optical disc via USB follow the [typical procedure for attaching a USB device](/doc/usb-devices/#with-the-command-line-tool), then check with the **Qubes Devices** widget to see what device in the target qube the USB optical drive was attached to. +Typically this would be `sr0`. +For example, if `sys-usb` has device `3-2` attached to the `work` qube's `sr0`, you would mount it with `mount /dev/sr0 /mnt/removable`. +You could also write to a disc with `wodim -v dev=/dev/sr0 -eject /home/user/Qubes.iso`. diff --git a/user/common-tasks/pci-devices.md b/user/common-tasks/pci-devices.md index 91e42924..cc1f74d4 100644 --- a/user/common-tasks/pci-devices.md +++ b/user/common-tasks/pci-devices.md @@ -13,19 +13,24 @@ redirect_from: *This page is part of [device handling in qubes].* -**Warning:** Only dom0 exposes PCI devices. Some of them are strictly required in dom0 (e.g., the host bridge). +**Warning:** Only dom0 exposes PCI devices. +Some of them are strictly required in dom0 (e.g., the host bridge). You may end up with an unusable system by attaching the wrong PCI device to a VM. -PCI passthrough should be safe by default, but non-default options may be required. Please make sure you carefully read and understand the **[security considerations]** before deviating from default behavior. +PCI passthrough should be safe by default, but non-default options may be required. +Please make sure you carefully read and understand the **[security considerations]** before deviating from default behavior. ## Introduction ## -Unlike other devices ([USB], [block], mic), PCI devices need to be attached on VM-bootup. 
Similar to how you can't attach a new sound-card after your computer booted (and expect it to work properly), attaching PCI devices to already booted VMs isn't supported. +Unlike other devices ([USB], [block], mic), PCI devices need to be attached on VM-bootup. +Similar to how you can't attach a new sound-card after your computer booted (and expect it to work properly), attaching PCI devices to already booted VMs isn't supported. The Qubes installer attaches all network class controllers to `sys-net` and all USB controllers to `sys-usb` by default, if you chose to create the network and USB qube during install. While this covers most use cases, there are some occasions when you may want to manually attach one NIC to `sys-net` and another to a custom NetVM, or have some other type of PCI controller you want to manually attach. -Some devices expose multiple functions with distinct BDF-numbers. Limits imposed by the PC and VT-d architectures may require all functions belonging to the same device to be attached to the same VM. This requirement can be dropped with the `no-strict-reset` option during attachment, bearing in mind the aforementioned [security considerations]. +Some devices expose multiple functions with distinct BDF-numbers. +Limits imposed by the PC and VT-d architectures may require all functions belonging to the same device to be attached to the same VM. +This requirement can be dropped with the `no-strict-reset` option during attachment, bearing in mind the aforementioned [security considerations]. In the steps below, you can tell if this is needed if you see the BDF for the same device listed multiple times with only the number after the "." changing. While PCI device can only be used by one powered on VM at a time, it *is* possible to *assign* the same device to more than one VM at a time. 
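The earlier point about multi-function devices — the same BDF repeated with only the digit after the "." changing — can be checked mechanically. A small sketch using made-up `lspci`-style sample lines (in dom0 the input would come from `lspci` instead of the here-document):

```shell
# Group on the bus:device part of each BDF and print groups that occur
# more than once -- these are multi-function devices that may need to be
# attached to the same VM.
cat <<'EOF' | cut -d. -f1 | sort | uniq -d
00:14.0 USB controller
00:1f.0 ISA bridge
00:1f.3 Audio device
00:1f.4 SMBus
EOF
# prints: 00:1f
```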
@@ -35,7 +40,8 @@ This can be useful if, for example, you have only one USB controller, but you ha ## Attaching Devices Using the GUI ## -The qube settings for a VM offers the "Devices"-tab. There you can attach PCI-devices to a qube. +The qube settings for a VM offers the "Devices"-tab. +There you can attach PCI-devices to a qube. 1. To reach the settings of any qube either @@ -45,13 +51,16 @@ The qube settings for a VM offers the "Devices"-tab. There you can attach PCI-de 2. Select the "Devices" tab on the top bar. 3. Select a device you want to attach to the qube and click the single arrow right! (`>`) - 4. You're done. If everything worked out, once the qube boots (or reboots if it's running) it will start with the pci device attached. - 5. In case it doesn't work out, first try disabling memory-balancing in the settings ("Advanced" tab). If that doesn't help, read on to learn how to disable the strict reset requirement! + 4. You're done. + If everything worked out, once the qube boots (or reboots if it's running) it will start with the pci device attached. + 5. In case it doesn't work out, first try disabling memory-balancing in the settings ("Advanced" tab). + If that doesn't help, read on to learn how to disable the strict reset requirement! ## `qvm-pci` Usage ## -The `qvm-pci` tool allows PCI attachment and detachment. It's a shortcut for [`qvm-device pci`][qvm-device]. +The `qvm-pci` tool allows PCI attachment and detachment. +It's a shortcut for [`qvm-device pci`][qvm-device]. To figure out what device to attach, first list the available PCI devices by running (as user) in dom0: @@ -99,14 +108,17 @@ Both can be achieved during attachment with `qvm-pci` as described below. ## Additional Attach Options ## -Attaching a PCI device through the commandline offers additional options, specifiable via the `--option`/`-o` option. (Yes, confusing wording, there's an [issue for that](https://github.com/QubesOS/qubes-issues/issues/4530).) 
+Attaching a PCI device through the commandline offers additional options, specifiable via the `--option`/`-o` option. +(Yes, confusing wording, there's an [issue for that](https://github.com/QubesOS/qubes-issues/issues/4530).) -`qvm-pci` exposes two additional options. Both are intended to fix device or driver specific issues, but both come with [heavy security implications][security considerations]! **Make sure you understand them before continuing!** +`qvm-pci` exposes two additional options. +Both are intended to fix device or driver specific issues, but both come with [heavy security implications][security considerations]! **Make sure you understand them before continuing!** ### no-strict-reset ### -Do not require PCI device to be reset before attaching it to another VM. This may leak usage data even without malicious intent! +Do not require PCI device to be reset before attaching it to another VM. +This may leak usage data even without malicious intent! usage example: @@ -115,7 +127,8 @@ usage example: ### permissive ### -Allow write access to full PCI config space instead of whitelisted registers. This increases attack surface and possibility of [side channel attacks]. +Allow write access to full PCI config space instead of whitelisted registers. +This increases attack surface and possibility of [side channel attacks]. usage example: diff --git a/user/common-tasks/software-update-dom0.md b/user/common-tasks/software-update-dom0.md index 02bb71c5..6ab412ec 100644 --- a/user/common-tasks/software-update-dom0.md +++ b/user/common-tasks/software-update-dom0.md @@ -8,50 +8,53 @@ redirect_from: - /wiki/SoftwareUpdateDom0/ --- -Installing and updating software in dom0 -======================================== +# Installing and updating software in dom0 -Why would one want to install or update software in dom0? ---------------------------------------------------------- +Updating dom0 is one of the main steps in [Updating Qubes OS]. 
+It is very important to keep dom0 up-to-date with the latest [security] updates. +We also publish dom0 updates for various non-security bug fixes and enhancements to Qubes components. +In addition, you may wish to update the kernel, drivers, or libraries in dom0 when [troubleshooting newer hardware]. -Normally, there should be few reasons for installing or updating software in dom0. This is because there is no networking in dom0, which means that even if some bugs are discovered e.g. in the dom0 Desktop Manager, this really is not a problem for Qubes, because none of the third-party software running in dom0 is accessible from VMs or the network in any way. Some exceptions to this include: Qubes GUI daemon, Xen store daemon, and disk back-ends. (We plan move the disk backends to an untrusted domain in a future Qubes release.) Of course, we believe this software is reasonably secure, and we hope it will not need patching. +## Security -However, we anticipate some other situations in which installing or updating dom0 software might be necessary or desirable: +Since there is no networking in dom0, any bugs discovered in dom0 desktop components (e.g., the window manager) are unlikely to pose a problem for Qubes, since none of the third-party software running in dom0 is accessible from VMs or the network in any way. +Nonetheless, since software running in dom0 can potentially exercise full control over the system, it is important to install only trusted software in dom0. -- Updating drivers/libs for new hardware support -- Correcting non-security related bugs (e.g. new buttons for qubes manager) -- Adding new features (e.g. GUI backup tool) +The install/update process is split into two phases: *resolve and download* and *verify and install*. +The *resolve and download* phase is handled by the UpdateVM. +(The role of UpdateVM can be assigned to any VM in the Qube Manager, and there are no significant security implications in this choice.
+By default, this role is assigned to the FirewallVM.) +After the UpdateVM has successfully downloaded new packages, they are sent to dom0, where they are verified and installed. +This separation of duties significantly reduces the attack surface, since all of the network and metadata processing code is removed from the TCB. -How is software installed and updated securely in dom0? -------------------------------------------------------- +Although this update scheme is far more secure than directly downloading updates in dom0, it is not invulnerable. +For example, there is nothing that the Qubes OS Project can feasibly do to prevent a malicious RPM from exploiting a hypothetical bug in the cryptographic signature verification operation. +At best, we could switch to a different distro or package manager, but any of them could be vulnerable to the same (or a similar) attack. +While we could, in theory, write a custom solution, it would only be effective if Qubes repos included all of the regular TemplateVM distro's updates, and this would be far too costly for us to maintain. -The install/update process is split into two phases: "resolve and download" and "verify and install." The "resolve and download" phase is handled by the "UpdateVM." (The role of UpdateVM can be assigned to any VM in the Qubes VM Manager, and there are no significant security implications in this choice. By default, this role is assigned to the firewallvm.) After the UpdateVM has successfully downloaded new packages, they are sent to dom0, where they are verified and installed. This separation of duties significantly reduces the attack surface, since all of the network and metadata processing code is removed from the TCB. +## How to update dom0 -Although this update scheme is far more secure than directly downloading updates in dom0, it is not invulnerable. 
For example, there is nothing that the Qubes project can feasibly do to prevent a malicious RPM from exploiting a hypothetical bug in GPG's `--verify` operation. At best, we could switch to a different distro or package manager, but any of them could be vulnerable to the same (or a similar) attack. While we could, in theory, write a custom solution, it would only be effective if Qubes repos included all of the regular TemplateVM distro's updates, and this would be far too costly for us to maintain. +In the Qube Manager, simply select dom0 in the VM list, then click the **Update VM system** button (the blue, downward-pointing arrow). +In addition, updating dom0 has been made more convenient: You will be prompted on the desktop whenever new dom0 updates are available and given the choice to run the update with a single click. -How to install and update software in dom0 ------------------------------------------- - -### How to update dom0 - -In the Qube Manager, simply select dom0 in the VM list, then click the **Update VM system** button (the blue, downward-pointing arrow). In addition, updating dom0 has been made more convenient: You will be prompted on the desktop whenever new dom0 updates are available and given the choice to run the update with a single click. - -Alternatively, command-line tools are available for accomplishing various update-related tasks (some of which are not available via Qubes VM Manager). In order to update dom0 from the command line, start a console in dom0 and then run one of the following commands: +Alternatively, command-line tools are available for accomplishing various update-related tasks (some of which are not available via Qubes VM Manager). 
+In order to update dom0 from the command line, start a console in dom0 and then run one of the following commands: To check and install updates for dom0 software: $ sudo qubes-dom0-update -### How to install a specific package +## How to install a specific package To install additional packages in dom0 (usually not recommended): $ sudo qubes-dom0-update anti-evil-maid -You may also pass the `--enablerepo=` option in order to enable optional repositories (see yum configuration in dom0). However, this is only for advanced users who really understand what they are doing. +You may also pass the `--enablerepo=` option in order to enable optional repositories (see yum configuration in dom0). +However, this is only for advanced users who really understand what they are doing. You can also pass commands to `dnf` using `--action=...`. -### How to downgrade a specific package +## How to downgrade a specific package **WARNING:** Downgrading a package can expose your system to security vulnerabilities. @@ -69,7 +72,7 @@ You can also pass commands to `dnf` using `--action=...`. sudo dnf downgrade package-version ~~~ -### How to re-install a package +## How to re-install a package You can re-install in a similar fashion to downgrading. @@ -87,17 +90,18 @@ You can re-install in a similar fashion to downgrading. sudo dnf reinstall package ~~~ - Note that `dnf` will only re-install if the installed and downloaded versions match. You can ensure they match by either updating the package to the latest version, or specifying the package version in the first step using the form `package-version`. + Note that `dnf` will only re-install if the installed and downloaded versions match. + You can ensure they match by either updating the package to the latest version, or specifying the package version in the first step using the form `package-version`. 
-### How to uninstall a package +## How to uninstall a package If you've installed a package such as anti-evil-maid, you can remove it with the following command: sudo dnf remove anti-evil-maid -### Testing repositories +## Testing repositories -There are three Qubes dom0 testing repositories: +There are three Qubes dom0 [testing] repositories: * `qubes-dom0-current-testing` -- testing packages that will eventually land in the stable (`current`) repository @@ -106,8 +110,8 @@ There are three Qubes dom0 testing repositories: * `qubes-dom0-unstable` -- packages that are not intended to land in the stable (`qubes-dom0-current`) repository; mostly experimental debugging packages -To temporarily enable any of these repos, use the `--enablerepo=` -option. Example commands: +To temporarily enable any of these repos, use the `--enablerepo=` option. +Example commands: ~~~ sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing @@ -118,10 +122,30 @@ sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable To enable or disable any of these repos permanently, change the corresponding `enabled` value to `1` in `/etc/yum.repos.d/qubes-dom0.repo`. -### Kernel Upgrade ### +## Kernel upgrade -Install newer kernel for dom0 and VMs. The package `kernel` is for dom0 and the package `kernel-qubes-vm` -is needed for the VMs. (Note that the following example enables the unstable repo.) +This section describes upgrading the kernel in dom0 and domUs. + +### dom0 + +The packages `kernel` and `kernel-latest` are for dom0. + +In the `current` repository: + - `kernel`: an older LTS kernel that has passed Qubes [testing] (the default dom0 kernel) + - `kernel-latest`: the latest release from kernel.org that has passed Qubes [testing] (useful for [troubleshooting newer hardware]) + +In the `current-testing` repository: + - `kernel`: the latest LTS kernel from kernel.org at the time it was built. + - `kernel-latest`: the latest release from kernel.org at the time it was built. 
+
+### domU
+
+The packages `kernel-qubes-vm` and `kernel-latest-qubes-vm` are for domUs.
+See [Managing VM kernel] for more information.
+
+### Example
+
+(Note that the following example enables the unstable repo.)
 
 ~~~
 sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable kernel kernel-qubes-vm
@@ -147,11 +171,21 @@
 If you wish to upgrade to a kernel that is not available from the repos,
 then there is no easy way to do so, but [it may still be possible if you're
 willing to do a lot of work yourself](https://groups.google.com/d/msg/qubes-users/m8sWoyV58_E/HYdReRIYBAAJ).
 
-### Upgrading over Tor ###
+## Updating over Tor
 
 Requires installed [Whonix](/doc/privacy/whonix/).
 
-Go to Qubes VM Manager -> System -> Global Settings. See the UpdateVM setting. Choose your desired Whonix-Gateway ProxyVM from the list. For example: sys-whonix.
+Go to Qubes VM Manager -> System -> Global Settings.
+See the UpdateVM setting.
+Choose your desired Whonix-Gateway ProxyVM from the list.
+For example: sys-whonix.
 
     Qubes VM Manager -> System -> Global Settings -> UpdateVM -> sys-whonix
+
+[Updating Qubes OS]: /doc/updating-qubes-os/
+[security]: /security/
+[testing]: /doc/testing/
+[troubleshooting newer hardware]: /doc/newer-hardware-troubleshooting/
+[Managing VM kernel]: /doc/managing-vm-kernel/
+
diff --git a/user/common-tasks/software-update-domu.md b/user/common-tasks/software-update-domu.md
new file mode 100644
index 00000000..ccc83b2e
--- /dev/null
+++ b/user/common-tasks/software-update-domu.md
@@ -0,0 +1,180 @@
+---
+layout: doc
+title: Installing and updating software in domUs
+permalink: /doc/software-update-domu/
+redirect_from:
+- /doc/software-update-vm/
+- /en/doc/software-update-vm/
+- /doc/SoftwareUpdateVM/
+- /wiki/SoftwareUpdateVM/
+---
+
+# Installing and updating software in domUs
+
+Updating [domUs], especially [TemplateVMs] and [StandaloneVMs][StandaloneVM], is an important part of [Updating Qubes OS].
+It is very important to keep domUs up-to-date with the latest [security] updates.
+Updating these VMs also allows you to receive various non-security bug fixes and enhancements, both from the Qubes OS Project and from your upstream distro maintainers.
+
+
+## Installing software in TemplateVMs
+
+To permanently install new software in a TemplateVM:
+
+ 1. Start the TemplateVM.
+ 2. Start either a terminal (e.g. `gnome-terminal`) or a dedicated software management application, such as `gpk-application`.
+ 3. Install software as normally instructed inside that operating system (e.g. using `dnf`, or the dedicated GUI application).
+ 4. Shut down the TemplateVM.
+ 5. Restart all [TemplateBasedVMs] based on the TemplateVM.
+
+
+## Updating software in TemplateVMs
+
+The recommended way to update your TemplateVMs is to use the **Qubes Update** tool.
+By default, the icon for this tool will appear in your Notification Area when updates are available.
+Simply click on it and follow the guided steps.
+If you wish to open this tool directly, you can find it in the System Tools area of the Applications menu.
+
+You can also update TemplateVMs individually.
+In the Qube Manager, select the desired TemplateVM, then click **Update qube**.
+Advanced users can execute the standard update commands for that operating system from the command line, e.g., `dnf update` in Fedora and `apt-get update && apt-get upgrade` in Debian.
+
+
+## Testing repositories
+
+If you wish to install updates that are still in [testing], you must enable the appropriate testing repositories.
+
+
+### Fedora
+
+There are three Qubes VM testing repositories (where `*` denotes the Release):
+
+* `qubes-vm-*-current-testing` -- testing packages that will eventually land in the stable (`current`) repository
+* `qubes-vm-*-security-testing` -- a subset of `qubes-vm-*-current-testing` that contains packages that qualify as security fixes
+* `qubes-vm-*-unstable` -- packages that are not intended to land in the stable (`qubes-vm-*-current`) repository; mostly experimental debugging packages
+
+To temporarily enable any of these repos, use the `--enablerepo=` option.
+Example commands:
+
+~~~
+sudo dnf upgrade --enablerepo=qubes-vm-*-current-testing
+sudo dnf upgrade --enablerepo=qubes-vm-*-security-testing
+sudo dnf upgrade --enablerepo=qubes-vm-*-unstable
+~~~
+
+To enable or disable any of these repos permanently, change the corresponding `enabled` value to `1` in `/etc/yum.repos.d/qubes-*.repo`.
+
+
+### Debian
+
+Debian also has three Qubes VM testing repositories (where `*` denotes the Release):
+
+* `*-testing` -- testing packages that will eventually land in the stable (`current`) repository
+* `*-securitytesting` -- a subset of `*-testing` that contains packages that qualify as security fixes
+* `*-unstable` -- packages that are not intended to land in the stable repository; mostly experimental debugging packages
+
+To enable or disable any of these repos permanently, uncomment the corresponding `deb` line in `/etc/apt/sources.list.d/qubes-r*.list`.
+
+
+## Reverting changes to a TemplateVM
+
+Perhaps you've just updated your TemplateVM, and the update broke your template.
+Or perhaps you've made a terrible mistake, like accidentally confirming the installation of an unsigned package that could be malicious.
+Fortunately, it's easy to revert changes to TemplateVMs using the command appropriate to your version of Qubes.
+
+**Important:** This command will roll back any changes made *during the last time the TemplateVM was run, but **not** before.*
+This means that if you have already restarted the TemplateVM, using this command is unlikely to help, and you'll likely want to reinstall it from the repository instead.
+On the other hand, if the template is already broken or compromised, it won't hurt to try reverting first.
+Just make sure to **back up** all of your data and changes first!
+
+For example, to revert changes to the `fedora-XX` TemplateVM (where `XX` is your Fedora version):
+
+1. Shut down `fedora-XX`.
+   If you've already just shut it down, do **not** start it again (see above).
+2. In a dom0 terminal, type:
+
+       qvm-volume revert fedora-XX:root
+
+
+## StandaloneVMs
+
+When you create a [StandaloneVM] from a TemplateVM, the StandaloneVM is a complete clone of the TemplateVM, including the entire filesystem.
+After the moment of creation, the StandaloneVM is completely independent from the TemplateVM.
+Therefore, it will not be updated when the TemplateVM is updated.
+Rather, it must be updated individually.
+The process for installing and updating software in StandaloneVMs is the same as described above for TemplateVMs.
+
+
+## Advanced
+
+The following sections cover advanced topics pertaining to installing and updating software in domUs.
+
+
+### RPMFusion for Fedora TemplateVMs
+
+If you would like to enable the [RPM Fusion] repository, open a terminal in the TemplateVM and type the following commands:
+
+~~~
+sudo dnf config-manager --set-enabled rpmfusion-free rpmfusion-nonfree
+sudo dnf upgrade --refresh
+~~~
+
+
+### Temporarily allowing networking for software installation
+
+Some third-party applications cannot be installed using the standard repositories and need to be manually downloaded and installed.
+When the installation requires an internet connection to access third-party repositories, it will naturally fail when run in a TemplateVM, because the default firewall rules for templates allow connections only from package managers.
+If you really do want to install such applications in a template, it is therefore necessary to modify the firewall rules to allow less restrictive internet access for the duration of the installation.
+As soon as the installation is complete, the firewall rules should be returned to their default state.
+You should decide for yourself whether such third-party applications deserve the same level of trust as those that come from the standard signed Fedora repositories, and whether installing them could compromise the default TemplateVM.
+Consider installing them in a separate template or a StandaloneVM (in which case the problem of limited network access does not apply by default), as described above.
+
+
+### Updates proxy
+
+The updates proxy is a service which allows access only from package managers.
+It is meant to mitigate user errors (like using a browser in the TemplateVM) rather than to provide real isolation.
+It is implemented with an HTTP proxy (tinyproxy) instead of simple firewall rules, because it is hard to list all the repository mirrors (and keep that list up to date).
+The proxy is used only to filter the traffic, not to cache anything.
+
+The proxy runs in selected VMs (by default, all NetVMs (1)) and intercepts traffic directed to 10.137.255.254:8082.
+Thanks to this configuration, all VMs can use the same proxy address, and whichever proxy is on the network path will handle the traffic (provided the firewall rules allow it).
+If a VM is configured to have access to the updates proxy (2), its startup scripts will automatically configure dnf to use the proxy (3).
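For illustration, the effect of step (3) above boils down to pointing dnf at the proxy address given in the text. The sketch below shows the idea only; Qubes' own startup scripts use their own mechanism and file locations, so the target path in the usage comment is an assumption:

```shell
# configure_dnf_proxy FILE: append a proxy setting so that dnf fetches
# packages through the updates proxy address described above.
# Qubes' updates-proxy-setup service achieves this differently; this
# merely illustrates what the end result amounts to.
configure_dnf_proxy() {
  printf 'proxy = http://10.137.255.254:8082/\n' >> "$1"
}
# Hypothetical usage in a template:
#   configure_dnf_proxy /etc/dnf/dnf.conf
```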
+Access to the updates proxy is also independent of any other firewall settings: a VM will have access to the updates proxy even if its policy is set to block all network traffic.
+
+There are two relevant services (`qvm-service`, see the [service framework]):
+
+1. `qubes-updates-proxy` (deprecated name: `qubes-yum-proxy`) -- a service providing a proxy for templates; enabled by default in NetVMs (especially sys-net)
+2. `updates-proxy-setup` (deprecated name: `yum-proxy-setup`) -- use a proxy provided by another VM (instead of downloading updates directly); enabled by default in all templates
+
+Both the old and new names work.
+The defaults listed above are applied if the service is not explicitly listed in the services tab.
+
+
+#### Technical details
+
+The updates proxy uses RPC/qrexec.
+The proxy is configured in the qrexec policy in dom0: `/etc/qubes-rpc/policy/qubes.UpdatesProxy`.
+By default this is set to sys-net and/or sys-whonix, depending on firstboot choices.
+This new design allows templates to be updated even when they are not connected to any NetVM.
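The policy file is evaluated top to bottom, and the first matching rule wins. The toy helper below (an invented name, not part of Qubes) mimics that first-match lookup on the literal source column; real qrexec resolves `@tag:` and `@type:` entries against the calling VM's name, tags, and type rather than comparing strings:

```shell
# first_rule FILE SOURCE: print the first non-comment rule whose first
# column is SOURCE or the @anyvm wildcard -- a simplified model of
# qrexec's top-to-bottom, first-match policy evaluation.
first_rule() {
  grep -v '^#' "$1" | awk -v src="$2" '$1 == src || $1 == "@anyvm" { print; exit }'
}
```

Run against a policy like the example shown next, `first_rule policy '@tag:whonix-updatevm'` prints the `allow,target=sys-whonix` rule, while any unlisted source falls through to the final `@anyvm @anyvm deny` line.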
+
+
+Example policy file in R4.0 (with Whonix installed, but not set as default UpdateVM for all templates):
+
+```
+# any VM with tag `whonix-updatevm` should use `sys-whonix`; this tag is added to `whonix-gw` and `whonix-ws` during installation and is preserved during template clone
+@tag:whonix-updatevm @default allow,target=sys-whonix
+@tag:whonix-updatevm @anyvm deny
+
+# other templates use sys-net
+@type:TemplateVM @default allow,target=sys-net
+@anyvm @anyvm deny
+```
+
+[domUs]: /doc/glossary/#domu
+[TemplateVMs]: /doc/templates/
+[StandaloneVM]: /doc/standalone-and-hvm/
+[Updating Qubes OS]: /doc/updating-qubes-os/
+[security]: /security/
+[TemplateBasedVMs]: /doc/glossary/#templatebasedvm
+[testing]: /doc/testing/
+[RPM Fusion]: http://rpmfusion.org/
+[service framework]: /doc/qubes-service/
+
diff --git a/user/common-tasks/software-update-vm.md b/user/common-tasks/software-update-vm.md
deleted file mode 100644
index e3d4c72e..00000000
--- a/user/common-tasks/software-update-vm.md
+++ /dev/null
@@ -1,257 +0,0 @@
----
-layout: doc
-title: Installing and updating software in VMs
-permalink: /doc/software-update-vm/
-redirect_from:
-- /en/doc/software-update-vm/
-- /doc/SoftwareUpdateVM/
-- /wiki/SoftwareUpdateVM/
----
-
-Installing and updating software in VMs
-=======================================
-
-How TemplateVMs work in Qubes
-------------------------------
-
-Most of the AppVMs (domains) are based on a *TemplateVM*, which means that their root filesystem (i.e. all the programs and system files) is based on the root filesystem of the corresponding template VM.
-This dramatically saves disk space, because each new AppVM needs disk space only for storing the user's files (i.e. the home directory).
-Of course the AppVM has only read-access to the template's filesystem -- it cannot modify it in any way.
-
-In addition to saving on the disk space, and reducing domain creation time, another advantage of such scheme is the possibility for centralized software update.
-It's just enough to do the update in the template VM, and then all the AppVMs based on this template get updates automatically after they are restarted.
-
-The side effect of this mechanism is, of course, that if you install any software in your AppVM, more specifically in any directory other than `/home`, `/usr/local`, or `/rw` then it will disappear after the AppVM reboots (as the root filesystem for this AppVM will again be "taken" from the TemplateVM).
-**This means one normally installs software in the TemplateVM, not in AppVMs.**
-
-The template root filesystem is created in a thin pool, so manual trims are not necessary.
-See [here](/doc/disk-trim) for further discussion on enabling discards/trim support.
-
-Installing (or updating) software in the TemplateVM
-----------------------------------------------------
-
-In order to permanently install new software, you should:
-
-- Start the template VM and then start either console (e.g. `gnome-terminal`) or dedicated software management application, such as `gpk-application` (*Start-\>Applications-\>Template: fedora-XX-\>Add/Remove software*),
-
-- Install/update software as usual (e.g. using dnf, or the dedicated GUI application).
-  Then, shutdown the template VM.
-
-- You will see now that all the AppVMs based on this template (by default all your VMs) will be marked as "outdated" in the manager.
-  This is because their filesystems have not been yet updated -- in order to do that, you must restart each VM.
-  You don't need to restart all of them at the same time -- e.g. if you just need the newly installed software to be available in your 'personal' domain, then restart only this VM.
-  You can restart others whenever this will be convenient to you.
-
-Testing repositories
---------------------
-
-### Fedora ###
-
-There are three Qubes VM testing repositories (where `*` denotes the Release):
-
-* `qubes-vm-*-current-testing` -- testing packages that will eventually land in the stable (`current`) repository
-* `qubes-vm-*-security-testing` -- a subset of `qubes-vm-*-current-testing` that contains packages that qualify as security fixes
-* `qubes-vm-*-unstable` -- packages that are not intended to land in the stable (`qubes-vm-*-current`) repository; mostly experimental debugging packages
-
-To temporarily enable any of these repos, use the `--enablerepo=` option.
-Example commands:
-
-~~~
-sudo dnf upgrade --enablerepo=qubes-vm-*-current-testing
-sudo dnf upgrade --enablerepo=qubes-vm-*-security-testing
-sudo dnf upgrade --enablerepo=qubes-vm-*-unstable
-~~~
-
-To enable or disable any of these repos permanently, change the corresponding `enabled` value to `1` in `/etc/yum.repos.d/qubes-*.repo`.
-
-### Debian ###
-
-Debian also has three Qubes VM testing repositories (where `*` denotes the Release):
-
-* `*-testing` -- testing packages that will eventually land in the stable (`current`) repository
-* `*-securitytesting` -- a subset of `*-testing` that contains packages that qualify as security fixes
-* `*-unstable` -- packages that are not intended to land in the stable repository; mostly experimental debugging packages
-
-To enable or disable any of these repos permanently, uncomment the corresponding `deb` line in `/etc/apt/sources.list.d/qubes-r*.list`
-
-Reverting changes to a TemplateVM
----------------------------------
-
-Perhaps you've just updated your TemplateVM, and the update broke your template.
-Or perhaps you've made a terrible mistake, like accidentally confirming the installation of an unsigned package that could be malicious.
-Fortunately, it's easy to revert changes to TemplateVMs using the command appropriate to your version of Qubes.
-
-**Important:** This command will roll back any changes made *during the last time the TemplateVM was run, but **not** before.*
-This means that if you have already restarted the TemplateVM, using this command is unlikely to help, and you'll likely want to reinstall it from the repository instead.
-On the other hand, if the template is already broken or compromised, it won't hurt to try reverting first.
-Just make sure to **back up** all of your data and changes first!
-
-For example, to revert changes to the `fedora-26` TemplateVM:
-
-1. Shut down `fedora-26`.
-   If you've already just shut it down, do **not** start it again (see above).
-2. In a dom0 terminal, type:
-
-       qvm-volume revert fedora-26:root
-
-Notes on trusting your TemplateVM(s)
--------------------------------------
-
-As the TemplateVM is used for creating filesystems for other AppVMs where you actually do the work, it means that the TemplateVM is as trusted as the most trusted AppVM based on this template.
-In other words, if your template VM gets compromised, e.g. because you installed an application, whose *installer's scripts* were malicious, then *all* your AppVMs (based on this template) will inherit this compromise.
-
-There are several ways to deal with this problem:
-
-- Only install packages from trusted sources -- e.g. from the pre-configured Fedora repositories.
-  All those packages are signed by Fedora, and we expect that at least the package's installation scripts are not malicious.
-  This is enforced by default (at the [firewall VM level](/doc/firewall/)), by not allowing any networking connectivity in the default template VM, except for access to the Fedora repos.
-
-- Use *standalone VMs* (see below) for installation of untrusted software packages.
-
-- Use multiple templates (see below) for different classes of domains, e.g.
-  a less trusted template, used for creation of less trusted AppVMs, would get various packages from less trusted vendors, while the template used for more trusted AppVMs will only get packages from the standard Fedora repos.
-
-Some popular questions:
-
-- So, why should we actually trust Fedora repos -- it also contains large amount of third-party software that might be buggy, right?
-
-As far as the template's compromise is concerned, it doesn't really matter whether `/usr/bin/firefox` is buggy and can be exploited, or not.
-What matters is whether its *installation* scripts (such as %post in the rpm.spec) are benign or not.
-Template VM should be used only for installation of packages, and nothing more, so it should never get a chance to actually run `/usr/bin/firefox` and get infected from it, in case it was compromised.
-Also, some of your more trusted AppVMs would have networking restrictions enforced by the [firewall VM](/doc/firewall/), and again they should not fear this proverbial `/usr/bin/firefox` being potentially buggy and easy to compromise.
-
-- But why trust Fedora?
-
-Because we chose to use Fedora as a vendor for the Qubes OS foundation (e.g. for Dom0 packages and for AppVM packages).
-We also chose to trust several other vendors, such as Xen.org, kernel.org, and a few others whose software we use in Dom0.
-We had to trust *somebody* as we are unable to write all the software from scratch ourselves.
-But there is a big difference in trusting all Fedora packages to be non-malicious (in terms of installation scripts) vs. trusting all those packages are non-buggy and non-exploitable.
-We certainly do not assume the latter.
-
-- So, are the template VMs as trusted as Dom0?
-
-Not quite.
-Dom0 compromise is absolutely fatal, and it leads to Game OverTM.
-However, a compromise of a template affects only a subset of all your AppVMs (in case you use more than one template, or also some standalone VMs).
-Also, if your AppVMs are network disconnected, even though their filesystems might get compromised due to the corresponding template compromise, it still would be difficult for the attacker to actually leak out the data stolen in an AppVM.
-Not impossible (due to existence of cover channels between VMs on x86 architecture), but difficult and slow.
-
-Standalone VMs
---------------
-
-Standalone VMs have their own copy of the whole filesystem, and thus can be updated and managed on their own.
-But this means that they take a few GBs on disk, and also that centralized updates do not apply to them.
-
-Sometimes it might be convenient to have a VM that has its own filesystem, where you can directly introduce changes, without the need to start/stop the template VM.
-Such situations include e.g.:
-
-- VMs used for development (devel environments require a lot of \*-devel packages and specific devel tools)
-
-- VMs used for installing untrusted packages.
-  Normally you install digitally signed software from Red Hat/Fedora repositories, and it's reasonable that such software has non malicious *installation* scripts (rpm pre/post scripts).
-  However, when you would like to install some packages from less trusted sources, or unsigned, then using a dedicated (untrusted) standalone VM might be a better way.
-
-In order to create a standalone VM you can use a command line like this (from console in Dom0):
-
-```
-qvm-create --class StandaloneVM --label