'virt' Generic Virtual Platform (``virt``)
==========================================

The ``virt`` board is a platform which does not correspond to any real hardware;
it is designed for use in virtual machines. It is the recommended board type
if you simply want to run a guest such as Linux and do not care about
reproducing the idiosyncrasies and limitations of a particular bit of
real-world hardware.

Supported devices
-----------------

The ``virt`` machine supports the following devices:

* Up to 512 generic RV32GC/RV64GC cores, with optional extensions
* Core Local Interruptor (CLINT)
* Platform-Level Interrupt Controller (PLIC)
* CFI parallel NOR flash memory
* 1 NS16550 compatible UART
* 1 Google Goldfish RTC
* 1 SiFive Test device
* 8 virtio-mmio transport devices
* 1 generic PCIe host bridge
* The fw_cfg device that allows a guest to obtain data from QEMU

The hypervisor extension is enabled for the default CPU, so virtual machines
that use the hypervisor extension can be run without enabling it explicitly.
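
If the hypervisor extension is not wanted, it can be disabled through a CPU
property. A minimal sketch, assuming the ``h`` CPU property is available in
your QEMU version:

.. code-block:: bash

   $ qemu-system-riscv64 -M virt -cpu rv64,h=false \
       ... other args ....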

Hardware configuration information
----------------------------------

The ``virt`` machine automatically generates a device tree blob ("dtb")
which it passes to the guest, if there is no ``-dtb`` option. This provides
information about the addresses, interrupt lines and other configuration of
the various devices in the system. Guest software should discover the devices
that are present in the generated DTB.
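
One way to inspect the generated DTB is to dump it to a file and decompile it
with ``dtc`` (the file names below are only examples):

.. code-block:: bash

   $ qemu-system-riscv64 -M virt,dumpdtb=qemu-virt.dtb
   $ dtc -I dtb -O dts -o qemu-virt.dts qemu-virt.dtb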

If users want to provide their own DTB, they can use the ``-dtb`` option.
These DTBs should meet the following requirements (see the example after this
list):

* The number of subnodes of the /cpus node should match QEMU's ``-smp`` option
* The /memory reg size should match QEMU's selected ram_size via ``-m``
* It should contain a node for the CLINT device with the compatible string
  "riscv,clint0" when used with OpenSBI BIOS images

Boot options
------------

The ``virt`` machine can start using the standard -kernel functionality
for loading a Linux kernel, a VxWorks kernel or an S-mode U-Boot bootloader,
with the default OpenSBI firmware image as the -bios. It also supports
the recommended RISC-V bootflow: U-Boot SPL (M-mode) loads OpenSBI fw_dynamic
firmware and U-Boot proper (S-mode), using the standard -bios functionality.

Using flash devices
-------------------

By default, the first flash device (pflash0) is expected to contain
S-mode firmware code. It can be configured as read-only, with the
second flash device (pflash1) available to store configuration data.

For example, booting edk2 looks like

.. code-block:: bash

   $ qemu-system-riscv64 \
       -blockdev node-name=pflash0,driver=file,read-only=on,filename=<edk2_code> \
       -blockdev node-name=pflash1,driver=file,filename=<edk2_vars> \
       -M virt,pflash0=pflash0,pflash1=pflash1 \
       ... other args ....

For TCG guests only, it is also possible to boot M-mode firmware from
the first flash device (pflash0) by additionally passing ``-bios
none``, as in

.. code-block:: bash

   $ qemu-system-riscv64 \
       -bios none \
       -blockdev node-name=pflash0,driver=file,read-only=on,filename=<m_mode_code> \
       -M virt,pflash0=pflash0 \
       ... other args ....

Firmware images used for pflash must be exactly 32 MiB in size.
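
An image that is smaller than this can be padded to the required size before
it is used, for instance with ``truncate`` (the file name is a placeholder):

.. code-block:: bash

   $ truncate -s 32M firmware.bin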

Machine-specific options
------------------------

The following machine-specific options are supported (a combined example
follows this list):

- aclint=[on|off]

  When this option is "on", ACLINT devices will be emulated instead of
  the SiFive CLINT. When not specified, this option is assumed to be "off".
  This option is restricted to the TCG accelerator.

- aia=[none|aplic|aplic-imsic]

  This option allows selecting the interrupt controller defined by the AIA
  (advanced interrupt architecture) specification. "aia=aplic" selects the
  APLIC (advanced platform level interrupt controller) to handle wired
  interrupts, whereas "aia=aplic-imsic" selects the APLIC together with the
  IMSIC (incoming message signaled interrupt controller) to handle both wired
  interrupts and MSIs. When not specified, this option is assumed to be
  "none", which selects the SiFive PLIC to handle wired interrupts.

- aia-guests=nnn

  The number of per-HART VS-level AIA IMSIC pages to be emulated for a guest
  having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
  the default number of per-HART VS-level AIA IMSIC pages is 0.
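
For example, to enable the AIA APLIC and IMSIC with one VS-level IMSIC page
per HART:

.. code-block:: bash

   $ qemu-system-riscv64 -M virt,aia=aplic-imsic,aia-guests=1 \
       ... other args ....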

Running Linux kernel
--------------------

The Linux mainline v5.12 release was tested at the time of writing. To build a
Linux mainline kernel that can be booted by the ``virt`` machine in
64-bit mode, simply configure the kernel using the defconfig configuration:

.. code-block:: bash

   $ export ARCH=riscv
   $ export CROSS_COMPILE=riscv64-linux-
   $ make defconfig
   $ make

To boot the newly built Linux kernel in QEMU with the ``virt`` machine:

.. code-block:: bash

   $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
       -display none -serial stdio \
       -kernel arch/riscv/boot/Image \
       -initrd /path/to/rootfs.cpio \
       -append "root=/dev/ram"
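
The ``-initrd`` image can be any initramfs in newc cpio format. A minimal
sketch that packs an existing root file system directory (the directory and
file paths are only placeholders):

.. code-block:: bash

   $ cd /path/to/rootfs-dir
   $ find . | cpio -o -H newc > /path/to/rootfs.cpio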

To build a Linux mainline kernel that can be booted by the ``virt`` machine
in 32-bit mode, use the rv32_defconfig configuration. A patch is required to
fix a 32-bit boot issue in Linux kernel v5.12:

.. code-block:: bash

   $ export ARCH=riscv
   $ export CROSS_COMPILE=riscv64-linux-
   $ curl https://patchwork.kernel.org/project/linux-riscv/patch/20210627135117.28641-1-bmeng.cn@gmail.com/mbox/ > riscv.patch
   $ git am riscv.patch
   $ make rv32_defconfig
   $ make

Replace ``qemu-system-riscv64`` with ``qemu-system-riscv32`` in the command
line above to boot the 32-bit Linux kernel. A rootfs image containing 32-bit
applications must be used in order for the kernel to boot to user space.
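
Put together, booting the 32-bit kernel looks like (the rootfs path is a
placeholder for an image with 32-bit applications):

.. code-block:: bash

   $ qemu-system-riscv32 -M virt -smp 4 -m 2G \
       -display none -serial stdio \
       -kernel arch/riscv/boot/Image \
       -initrd /path/to/rootfs32.cpio \
       -append "root=/dev/ram"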

Running U-Boot
--------------

The U-Boot mainline v2021.04 release was tested at the time of writing. To build
an S-mode U-Boot bootloader that can be booted by the ``virt`` machine, use
the qemu-riscv64_smode_defconfig with similar commands as described above for Linux:

.. code-block:: bash

   $ export CROSS_COMPILE=riscv64-linux-
   $ make qemu-riscv64_smode_defconfig

Boot the 64-bit U-Boot S-mode image directly:

.. code-block:: bash

   $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
       -display none -serial stdio \
       -kernel /path/to/u-boot.bin

To test booting U-Boot SPL, which runs in M-mode and in turn loads a FIT image
that bundles OpenSBI fw_dynamic firmware and U-Boot proper (S-mode) together,
build the U-Boot images using qemu-riscv64_spl_defconfig:

.. code-block:: bash

   $ export CROSS_COMPILE=riscv64-linux-
   $ export OPENSBI=/path/to/opensbi-riscv64-generic-fw_dynamic.bin
   $ make qemu-riscv64_spl_defconfig

The minimal QEMU commands to run U-Boot SPL are:

.. code-block:: bash

   $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
       -display none -serial stdio \
       -bios /path/to/u-boot-spl \
       -device loader,file=/path/to/u-boot.itb,addr=0x80200000

To test 32-bit U-Boot images, switch to the qemu-riscv32_smode_defconfig and
qemu-riscv32_spl_defconfig builds, and replace ``qemu-system-riscv64`` with
``qemu-system-riscv32`` in the command lines above to boot the 32-bit U-Boot.
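
For example, the 32-bit S-mode U-Boot image built from
qemu-riscv32_smode_defconfig is booted with:

.. code-block:: bash

   $ qemu-system-riscv32 -M virt -smp 4 -m 2G \
       -display none -serial stdio \
       -kernel /path/to/u-boot.bin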

Enabling TPM
------------

A TPM device can be connected to the virt board by following the steps below.

First launch the TPM emulator:

.. code-block:: bash

   $ swtpm socket --tpm2 -t -d --tpmstate dir=/tmp/tpm \
       --ctrl type=unixio,path=swtpm-sock

Then launch QEMU with some additional arguments to link a TPM device to the backend:

.. code-block:: bash

   $ qemu-system-riscv64 \
       ... other args .... \
       -chardev socket,id=chrtpm,path=swtpm-sock \
       -tpmdev emulator,id=tpm0,chardev=chrtpm \
       -device tpm-tis-device,tpmdev=tpm0

The TPM device can be seen in the memory tree and the generated device
tree and should be accessible from the guest software.
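
For example, on a Linux guest the TPM can be checked from the command line
(a sketch; the exact device names and output vary):

.. code-block:: bash

   $ dmesg | grep -i tpm
   $ ls /dev/tpm*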