VFIO - "Virtual Function I/O"[1]
-------------------------------------------------------------------------------
Many modern systems now provide DMA and interrupt remapping facilities
to help ensure I/O devices behave within the boundaries they've been
allotted.  This includes x86 hardware with AMD-Vi and Intel VT-d,
POWER systems with Partitionable Endpoints (PEs) and embedded PowerPC
systems such as Freescale PAMU.  The VFIO driver is an IOMMU/device
agnostic framework for exposing direct device access to userspace, in
a secure, IOMMU protected environment.  In other words, this allows
safe[2], non-privileged, userspace drivers.

Why do we want that?  Virtual machines often make use of direct device
access ("device assignment") when configured for the highest possible
I/O performance.  From a device and host perspective, this simply
turns the VM into a userspace driver, with the benefits of
significantly reduced latency, higher bandwidth, and direct use of
bare-metal device drivers[3].

Some applications, particularly in the high performance computing
field, also benefit from low-overhead, direct device access from
userspace.  Examples include network adapters (often non-TCP/IP based)
and compute accelerators.  Prior to VFIO, these drivers had to either
go through the full development cycle to become a proper upstream
driver, be maintained out of tree, or make use of the UIO framework,
which has no notion of IOMMU protection, limited interrupt support,
and requires root privileges to access things like PCI configuration
space.

The VFIO driver framework intends to unify these, replacing the
KVM PCI-specific device assignment code and providing a more secure,
more featureful userspace driver environment than UIO.

Groups, Devices, and IOMMUs
-------------------------------------------------------------------------------

Devices are the main target of any I/O driver.  Devices typically
create a programming interface made up of I/O access, interrupts,
and DMA.  Without going into the details of each of these, DMA is
by far the most critical aspect for maintaining a secure environment
as allowing a device read-write access to system memory imposes the
greatest risk to the overall system integrity.

To help mitigate this risk, many modern IOMMUs now incorporate
isolation properties into what was, in many cases, an interface only
meant for translation (ie. solving the addressing problems of devices
with limited address spaces).  With this, devices can now be isolated
from each other and from arbitrary memory access, thus allowing
things like secure direct assignment of devices into virtual machines.

This isolation is not always at the granularity of a single device
though.  Even when an IOMMU is capable of this, properties of devices,
interconnects, and IOMMU topologies can each reduce this isolation.
For instance, an individual device may be part of a larger multi-
function enclosure.  While the IOMMU may be able to distinguish
between devices within the enclosure, the enclosure may not require
transactions between devices to reach the IOMMU.  Examples of this
could be anything from a multi-function PCI device with backdoors
between functions to a non-PCI-ACS (Access Control Services) capable
bridge allowing redirection without reaching the IOMMU.  Topology
can also play a factor in terms of hiding devices.  A PCIe-to-PCI
bridge masks the devices behind it, making transactions appear as if
from the bridge itself.  Obviously IOMMU design plays a major factor
as well.

Therefore, while for the most part an IOMMU may have device level
granularity, any system is susceptible to reduced granularity.  The
IOMMU API therefore supports a notion of IOMMU groups.  A group is
a set of devices which is isolatable from all other devices in the
system.  Groups are therefore the unit of ownership used by VFIO.

While the group is the minimum granularity that must be used to
ensure secure user access, it's not necessarily the preferred
granularity.  In IOMMUs which make use of page tables, it may be
possible to share a set of page tables between different groups,
reducing the overhead both to the platform (reduced TLB thrashing,
reduced duplicate page tables), and to the user (programming only
a single set of translations).  For this reason, VFIO makes use of
a container class, which may hold one or more groups.  A container
is created by simply opening the /dev/vfio/vfio character device.

On its own, the container provides little functionality, with all
but a couple version and extension query interfaces locked away.
The user needs to add a group into the container for the next level
of functionality.  To do this, the user first needs to identify the
group associated with the desired device.  This can be done using
the sysfs links described in the example below.  By unbinding the
device from the host driver and binding it to a VFIO driver, a new
VFIO group will appear for the group as /dev/vfio/$GROUP, where
$GROUP is the IOMMU group number of which the device is a member.
If the IOMMU group contains multiple devices, each will need to
be bound to a VFIO driver before operations on the VFIO group
are allowed (it's also sufficient to only unbind the device from
host drivers if a VFIO driver is unavailable; this will make the
group available, but not that particular device).  TBD - interface
for disabling driver probing/locking a device.

Once the group is ready, it may be added to the container by opening
the VFIO group character device (/dev/vfio/$GROUP) and using the
VFIO_GROUP_SET_CONTAINER ioctl, passing the file descriptor of the
previously opened container file.  If desired and if the IOMMU driver
supports sharing the IOMMU context between groups, multiple groups may
be set to the same container.  If a group fails to be set to a container
with existing groups, a new, empty container will need to be used
instead.
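
For example, placing two groups into one container might look like
the following sketch (the second group number here is hypothetical,
and the second VFIO_GROUP_SET_CONTAINER succeeds only if the IOMMU
driver supports sharing the IOMMU context):

        int container = open("/dev/vfio/vfio", O_RDWR);
        int group_a = open("/dev/vfio/26", O_RDWR);
        int group_b = open("/dev/vfio/27", O_RDWR);

        ioctl(group_a, VFIO_GROUP_SET_CONTAINER, &container);
        ioctl(group_b, VFIO_GROUP_SET_CONTAINER, &container);
        /* DMA mappings made through this container now apply to
         * both groups */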

With a group (or groups) attached to a container, the remaining
ioctls become available, enabling access to the VFIO IOMMU interfaces.
Additionally, it now becomes possible to get file descriptors for each
device within a group using an ioctl on the VFIO group file descriptor.

The VFIO device API includes ioctls for describing the device, the I/O
regions and their read/write/mmap offsets on the device descriptor, as
well as mechanisms for describing and registering interrupt
notifications.

VFIO Usage Example
-------------------------------------------------------------------------------

Assume the user wants to access PCI device 0000:06:0d.0:

$ readlink /sys/bus/pci/devices/0000:06:0d.0/iommu_group
../../../../kernel/iommu_groups/26

This device is therefore in IOMMU group 26.  This device is on the
pci bus, therefore the user will make use of vfio-pci to manage the
group:

# modprobe vfio-pci

Binding this device to the vfio-pci driver creates the VFIO group
character devices for this group:

$ lspci -n -s 0000:06:0d.0
06:0d.0 0401: 1102:0002 (rev 08)
# echo 0000:06:0d.0 > /sys/bus/pci/devices/0000:06:0d.0/driver/unbind
# echo 1102 0002 > /sys/bus/pci/drivers/vfio-pci/new_id

Now we need to look at what other devices are in the group to free
it for use by VFIO:

$ ls -l /sys/bus/pci/devices/0000:06:0d.0/iommu_group/devices
total 0
lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:00:1e.0 ->
        ../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:06:0d.0 ->
        ../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:06:0d.1 ->
        ../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1

This device is behind a PCIe-to-PCI bridge[4], therefore we also
need to add device 0000:06:0d.1 to the group following the same
procedure as above.  Device 0000:00:1e.0 is a bridge that does
not currently have a host driver, therefore it's not required to
bind this device to the vfio-pci driver (vfio-pci does not currently
support PCI bridges).

The final step is to provide the user with access to the group if
unprivileged operation is desired (note that /dev/vfio/vfio provides
no capabilities on its own and is therefore expected to be set to
mode 0666 by the system):

# chown user:user /dev/vfio/26

The user now has full access to all the devices and the IOMMU for this
group and can access them as follows:

        int container, group, device, i;
        struct vfio_group_status group_status =
                                        { .argsz = sizeof(group_status) };
        struct vfio_iommu_type1_info iommu_info = { .argsz = sizeof(iommu_info) };
        struct vfio_iommu_type1_dma_map dma_map = { .argsz = sizeof(dma_map) };
        struct vfio_device_info device_info = { .argsz = sizeof(device_info) };

        /* Create a new container */
        container = open("/dev/vfio/vfio", O_RDWR);

        if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
                /* Unknown API version */

        if (!ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU))
                /* Doesn't support the IOMMU driver we want. */

        /* Open the group */
        group = open("/dev/vfio/26", O_RDWR);

        /* Test the group is viable and available */
        ioctl(group, VFIO_GROUP_GET_STATUS, &group_status);

        if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE))
                /* Group is not viable (ie, not all devices bound for vfio) */

        /* Add the group to the container */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

        /* Enable the IOMMU model we want */
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

        /* Get additional IOMMU info */
        ioctl(container, VFIO_IOMMU_GET_INFO, &iommu_info);

        /* Allocate some space and setup a DMA mapping */
        dma_map.vaddr = mmap(0, 1024 * 1024, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
        dma_map.size = 1024 * 1024;
        dma_map.iova = 0; /* 1MB starting at 0x0 from device view */
        dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

        ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

        /* Get a file descriptor for the device */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");

        /* Test and setup the device */
        ioctl(device, VFIO_DEVICE_GET_INFO, &device_info);

        for (i = 0; i < device_info.num_regions; i++) {
                struct vfio_region_info reg = { .argsz = sizeof(reg) };

                reg.index = i;

                ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);

                /* Setup mappings... read/write offsets, mmaps
                 * For PCI devices, config space is a region */
        }
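
        /* Illustration only, not part of the original example: a
         * region that reports VFIO_REGION_INFO_FLAG_MMAP can be
         * mapped directly through the device file descriptor, e.g.
         * BAR0 of a vfio-pci device */
        struct vfio_region_info bar = { .argsz = sizeof(bar) };

        bar.index = VFIO_PCI_BAR0_REGION_INDEX;
        ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &bar);

        if (bar.flags & VFIO_REGION_INFO_FLAG_MMAP)
                mmap(NULL, bar.size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, device, bar.offset);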

        for (i = 0; i < device_info.num_irqs; i++) {
                struct vfio_irq_info irq = { .argsz = sizeof(irq) };

                irq.index = i;

                ioctl(device, VFIO_DEVICE_GET_IRQ_INFO, &irq);

                /* Setup IRQs... eventfds, VFIO_DEVICE_SET_IRQS */
        }
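
        /* Illustration only, not part of the original example: wiring
         * interrupt index 0 to an eventfd with VFIO_DEVICE_SET_IRQS,
         * so the user can read/poll the eventfd to receive interrupts
         * (a single vector is assumed) */
        struct vfio_irq_set *irq_set;
        int32_t *pfd;

        irq_set = malloc(sizeof(*irq_set) + sizeof(*pfd));
        irq_set->argsz = sizeof(*irq_set) + sizeof(*pfd);
        irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
                         VFIO_IRQ_SET_ACTION_TRIGGER;
        irq_set->index = 0;
        irq_set->start = 0;
        irq_set->count = 1;
        pfd = (int32_t *)&irq_set->data;
        *pfd = eventfd(0, 0);

        ioctl(device, VFIO_DEVICE_SET_IRQS, irq_set);
        free(irq_set);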

        /* Gratuitous device reset and go... */
        ioctl(device, VFIO_DEVICE_RESET);

VFIO User API
-------------------------------------------------------------------------------

Please see include/linux/vfio.h for complete API documentation.

VFIO bus driver API
-------------------------------------------------------------------------------

VFIO bus drivers, such as vfio-pci, make use of only a few interfaces
into VFIO core.  When devices are bound and unbound to the driver,
the driver should call vfio_add_group_dev() and vfio_del_group_dev()
respectively:

extern int vfio_add_group_dev(struct iommu_group *iommu_group,
                              struct device *dev,
                              const struct vfio_device_ops *ops,
                              void *device_data);

extern void *vfio_del_group_dev(struct device *dev);

vfio_add_group_dev() indicates to the core to begin tracking the
specified iommu_group and register the specified dev as owned by
a VFIO bus driver.  The driver provides an ops structure for callbacks
similar to a file operations structure:

struct vfio_device_ops {
        int     (*open)(void *device_data);
        void    (*release)(void *device_data);
        ssize_t (*read)(void *device_data, char __user *buf,
                        size_t count, loff_t *ppos);
        ssize_t (*write)(void *device_data, const char __user *buf,
                         size_t size, loff_t *ppos);
        long    (*ioctl)(void *device_data, unsigned int cmd,
                         unsigned long arg);
        int     (*mmap)(void *device_data, struct vm_area_struct *vma);
};

Each function is passed the device_data that was originally registered
in the vfio_add_group_dev() call above.  This allows the bus driver
an easy place to store its opaque, private data.  The open/release
callbacks are issued when a new file descriptor is created for a
device (via VFIO_GROUP_GET_DEVICE_FD).  The ioctl interface provides
a direct pass through for VFIO_DEVICE_* ioctls.  The read/write/mmap
interfaces implement the device region access defined by the device's
own VFIO_DEVICE_GET_REGION_INFO ioctl.
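
Putting these together, a bus driver's probe path might look roughly
like the following (a minimal sketch based on the declarations above;
my_vfio_probe, my_vfio_ops and struct my_device_data are hypothetical
names and error handling is abbreviated):

        static int my_vfio_probe(struct device *dev)
        {
                struct iommu_group *group;
                struct my_device_data *vdev;
                int ret;

                /* The device must already belong to an IOMMU group */
                group = iommu_group_get(dev);
                if (!group)
                        return -EINVAL;

                vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
                if (!vdev) {
                        iommu_group_put(group);
                        return -ENOMEM;
                }

                /* device_data (vdev) is handed back to each
                 * vfio_device_ops callback */
                ret = vfio_add_group_dev(group, dev, &my_vfio_ops, vdev);
                if (ret)
                        kfree(vdev);

                iommu_group_put(group);
                return ret;
        }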

PPC64 sPAPR implementation note
-------------------------------------------------------------------------------

This implementation has the following specifics:

1) Only one IOMMU group per container is supported, as an IOMMU group
represents the minimal entity for which isolation can be guaranteed,
and groups are allocated statically, one per Partitionable Endpoint (PE)
(a PE is often a PCI domain, but not always).

2) The hardware supports so-called DMA windows - the PCI address range
within which DMA transfer is allowed; any attempt to access address space
outside the window leads to the whole PE being isolated.

3) PPC64 guests are paravirtualized but not fully emulated.  There is an
API to map/unmap pages for DMA; it normally maps 1..32 pages per call and
currently there is no way to reduce the number of calls.  To make things
faster, the map/unmap handling has been implemented in real mode, which
provides excellent performance but has limitations, such as the inability
to do locked pages accounting in real time.

So 3 additional ioctls have been added:

        VFIO_IOMMU_SPAPR_TCE_GET_INFO - returns the size and the start
                of the DMA window on the PCI bus.

        VFIO_IOMMU_ENABLE - enables the container.  The locked pages
                accounting is done at this point.  This lets the user
                first know what the DMA window is and adjust the rlimit
                before doing any real work.

        VFIO_IOMMU_DISABLE - disables the container.

The code flow from the example above should be slightly changed:

        /* Add the group to the container */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

        /* Enable the IOMMU model we want */
        ioctl(container, VFIO_SET_IOMMU, VFIO_SPAPR_TCE_IOMMU);

        /* Get additional sPAPR IOMMU info */
        struct vfio_iommu_spapr_tce_info spapr_iommu_info = {
                                .argsz = sizeof(spapr_iommu_info) };
        ioctl(container, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &spapr_iommu_info);

        if (ioctl(container, VFIO_IOMMU_ENABLE))
                /* Cannot enable container, may be low rlimit */

        /* Allocate some space and setup a DMA mapping */
        dma_map.vaddr = mmap(0, 1024 * 1024, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);

        dma_map.size = 1024 * 1024;
        dma_map.iova = 0; /* 1MB starting at 0x0 from device view */
        dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

        /* Check here that .iova/.size are within the DMA window from
         * spapr_iommu_info */
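
        /* One possible form of that check (a sketch; the field names
         * come from struct vfio_iommu_spapr_tce_info): */
        if (dma_map.iova < spapr_iommu_info.dma32_window_start ||
            dma_map.iova + dma_map.size >
            spapr_iommu_info.dma32_window_start +
            spapr_iommu_info.dma32_window_size)
                /* .iova/.size fall outside the DMA window, the
                 * mapping would fail */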

        ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

-------------------------------------------------------------------------------

[1] VFIO was originally an acronym for "Virtual Function I/O" in its
initial implementation by Tom Lyon while at Cisco.  We've since
outgrown the acronym, but it's catchy.

[2] "safe" also depends upon a device being "well behaved".  It's
possible for multi-function devices to have backdoors between
functions and even for single function devices to have alternative
access to things like PCI config space through MMIO registers.  To
guard against the former we can include additional precautions in the
IOMMU driver to group multi-function PCI devices together
(iommu=group_mf).  The latter we can't prevent, but the IOMMU should
still provide isolation.  For PCI, SR-IOV Virtual Functions are the
best indicator of "well behaved", as these are designed for
virtualization usage models.

[3] As always there are trade-offs to virtual machine device
assignment that are beyond the scope of VFIO.  It's expected that
future IOMMU technologies will reduce some, but maybe not all, of
those trade-offs.

[4] In this case the device is below a PCI bridge, so transactions
from either function of the device are indistinguishable to the IOMMU:

-[0000:00]-+-1e.0-[06]--+-0d.0
                        \-0d.1

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)