Commit ca3e40e2 authored by Peter Maydell's avatar Peter Maydell

Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging



vhost, pc, virtio features, fixes, cleanups

New features:
    VT-d support for devices behind a bridge
    vhost-user migration support

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# gpg: Signature made Thu 22 Oct 2015 12:39:19 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg:                 aka "Michael S. Tsirkin <mst@redhat.com>"

* remotes/mst/tags/for_upstream: (37 commits)
  hw/isa/lpc_ich9: inject the SMI on the VCPU that is writing to APM_CNT
  i386: keep cpu_model field in MachineState uptodate
  vhost: set the correct queue index in case of migration with multiqueue
  piix: fix resource leak reported by Coverity
  seccomp: add memfd_create to whitelist
  vhost-user-test: check ownership during migration
  vhost-user-test: add live-migration test
  vhost-user-test: learn to tweak various qemu arguments
  vhost-user-test: wrap server in TestServer struct
  vhost-user-test: remove useless static check
  vhost-user-test: move wait_for_fds() out
  vhost: add migration block if memfd failed
  vhost-user: use an enum helper for features mask
  vhost user: add rarp sending after live migration for legacy guest
  vhost user: add support of live migration
  net: add trace_vhost_user_event
  vhost-user: document migration log
  vhost: use a function for each call
  vhost-user: add a migration blocker
  vhost-user: send log shm fd along with log_base
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
parents c1bd8997 3c23402d
+19 −0
@@ -3491,6 +3491,22 @@ if compile_prog "" "" ; then
  eventfd=yes
fi

# check if memfd is supported
memfd=no
cat > $TMPC << EOF
#include <sys/memfd.h>

int main(void)
{
    return memfd_create("foo", MFD_ALLOW_SEALING);
}
EOF
if compile_prog "" "" ; then
  memfd=yes
fi



# check for fallocate
fallocate=no
cat > $TMPC << EOF
@@ -4885,6 +4901,9 @@ fi
if test "$eventfd" = "yes" ; then
  echo "CONFIG_EVENTFD=y" >> $config_host_mak
fi
if test "$memfd" = "yes" ; then
  echo "CONFIG_MEMFD=y" >> $config_host_mak
fi
if test "$fallocate" = "yes" ; then
  echo "CONFIG_FALLOCATE=y" >> $config_host_mak
fi
+61 −2
@@ -115,11 +115,13 @@ the ones that do:
 * VHOST_GET_FEATURES
 * VHOST_GET_PROTOCOL_FEATURES
 * VHOST_GET_VRING_BASE
 * VHOST_SET_LOG_BASE (if VHOST_USER_PROTOCOL_F_LOG_SHMFD)

There are several messages that the master sends with file descriptors passed
in the ancillary data:

 * VHOST_SET_MEM_TABLE
 * VHOST_SET_LOG_BASE (if VHOST_USER_PROTOCOL_F_LOG_SHMFD)
 * VHOST_SET_LOG_FD
 * VHOST_SET_VRING_KICK
 * VHOST_SET_VRING_CALL
@@ -140,8 +142,7 @@ Multiple queue support

Multiple queue is treated as a protocol extension, hence the slave has to
implement protocol features first. The multiple queues feature is supported
only when the protocol feature VHOST_USER_PROTOCOL_F_MQ (bit 0) is set:
#define VHOST_USER_PROTOCOL_F_MQ    0
only when the protocol feature VHOST_USER_PROTOCOL_F_MQ (bit 0) is set.

The max number of queues the slave supports can be queried with message
VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
@@ -152,6 +153,49 @@ queue in the sent message to identify a specified queue. One queue pair
is enabled initially. More queues are enabled dynamically, by sending
message VHOST_USER_SET_VRING_ENABLE.

Migration
---------

During live migration, the master may need to track the modifications
the slave makes to the memory mapped regions. The slave should mark
the dirty pages in a log. Once it complies with this logging, it may
declare the VHOST_F_LOG_ALL vhost feature.

All modifications to memory pointed to by the vring "descriptor" should
be marked. Modifications to the "used" vring should be marked if
VHOST_VRING_F_LOG is part of the ring's features.

Dirty pages are of size:
#define VHOST_LOG_PAGE 0x1000

The log memory fd is provided in the ancillary data of
VHOST_USER_SET_LOG_BASE message when the slave has
VHOST_USER_PROTOCOL_F_LOG_SHMFD protocol feature.

The size of the log is computed from the known guest addresses: the log
covers the range from address 0 to the end of the highest guest
region. In pseudo-code, to mark the page at "addr" as dirty:

page = addr / VHOST_LOG_PAGE
log[page / 8] |= 1 << page % 8

Use atomic operations, as the log may be concurrently manipulated.
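
The pseudo-code above can be sketched in C; this is a minimal illustration (the helper name is hypothetical, not QEMU's actual code), using GCC/Clang atomic builtins since the log may be written concurrently:

```c
#include <stdint.h>

#define VHOST_LOG_PAGE 0x1000

/* Hypothetical helper: mark the page containing "addr" as dirty in the
 * shared log bitmap. The atomic OR keeps concurrent markers from
 * losing each other's bits. */
static void log_mark_dirty(volatile uint8_t *log, uint64_t addr)
{
    uint64_t page = addr / VHOST_LOG_PAGE;
    __atomic_fetch_or(&log[page / 8], (uint8_t)(1u << (page % 8)),
                      __ATOMIC_RELAXED);
}
```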

VHOST_USER_SET_LOG_FD is an optional message with an eventfd in its
ancillary data; it may be used to inform the master that the log has
been modified.

Once the source has finished migration, it sends the
VHOST_USER_RESET_OWNER message. No further updates must be made before
the destination takes over with new regions and rings.

Protocol features
-----------------

#define VHOST_USER_PROTOCOL_F_MQ             0
#define VHOST_USER_PROTOCOL_F_LOG_SHMFD      1
#define VHOST_USER_PROTOCOL_F_RARP           2
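
As a sketch of how these bits are used (the negotiate() helper is illustrative, not QEMU's actual code): the slave advertises its protocol feature mask, and the master acknowledges only the bits it also understands.

```c
#include <stdint.h>

#define VHOST_USER_PROTOCOL_F_MQ        0
#define VHOST_USER_PROTOCOL_F_LOG_SHMFD 1
#define VHOST_USER_PROTOCOL_F_RARP      2

/* Illustrative helper: keep only the protocol feature bits that both
 * sides support. */
static uint64_t negotiate(uint64_t slave_mask, uint64_t master_mask)
{
    return slave_mask & master_mask;
}
```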

Message types
-------------

@@ -236,6 +280,7 @@ Message types
      Id: 6
      Equivalent ioctl: VHOST_SET_LOG_BASE
      Master payload: u64
      Slave payload: N/A

      Sets the logging base address.

@@ -337,3 +382,17 @@ Message types
      Master payload: vring state description

      Signal slave to enable or disable corresponding vring.

 * VHOST_USER_SEND_RARP

      Id: 19
      Equivalent ioctl: N/A
      Master payload: u64

      Ask the vhost-user backend to broadcast a fake RARP to announce
      that migration has finished, for guests that do not support
      GUEST_ANNOUNCE.
      Only legal if feature bit VHOST_USER_F_PROTOCOL_FEATURES is present in
      VHOST_USER_GET_FEATURES and protocol feature bit VHOST_USER_PROTOCOL_F_RARP
      is present in VHOST_USER_GET_PROTOCOL_FEATURES.
      The first 6 bytes of the payload contain the MAC address of the guest,
      allowing the vhost-user backend to construct and broadcast the fake RARP.
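
A backend receiving this message could construct the 42-byte fake RARP frame roughly as follows. This is a sketch assuming an Ethernet/IPv4 RARP request (build_fake_rarp() is a hypothetical name, not part of any backend's API):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: build a fake RARP request carrying the guest
 * MAC, as a backend might broadcast after migration. Layout: 14-byte
 * Ethernet header + 28-byte ARP body = 42 bytes. */
static size_t build_fake_rarp(uint8_t *buf, const uint8_t mac[6])
{
    memset(buf, 0xff, 6);            /* destination: broadcast */
    memcpy(buf + 6, mac, 6);         /* source: guest MAC */
    buf[12] = 0x80; buf[13] = 0x35;  /* ethertype: RARP (0x8035) */
    buf[14] = 0x00; buf[15] = 0x01;  /* hardware type: Ethernet */
    buf[16] = 0x08; buf[17] = 0x00;  /* protocol type: IPv4 */
    buf[18] = 6;                     /* hardware address length */
    buf[19] = 4;                     /* protocol address length */
    buf[20] = 0x00; buf[21] = 0x03;  /* opcode: request reverse */
    memcpy(buf + 22, mac, 6);        /* sender hardware address */
    memset(buf + 28, 0, 4);          /* sender IP: unknown */
    memcpy(buf + 32, mac, 6);        /* target hardware address */
    memset(buf + 38, 0, 4);          /* target IP: unknown */
    return 42;
}
```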
+106 −0
Virtio devices and migration
============================

Copyright 2015 IBM Corp.

This work is licensed under the terms of the GNU GPL, version 2 or later.  See
the COPYING file in the top-level directory.

Saving and restoring the state of virtio devices is a bit of a twisty maze,
for several reasons:
- state is distributed between several parts:
  - virtio core, for common fields like features, number of queues, ...
  - virtio transport (pci, ccw, ...), for the different proxy devices and
    transport specific state (msix vectors, indicators, ...)
  - virtio device (net, blk, ...), for the different device types and their
    state (mac address, request queue, ...)
- most fields are saved via the stream interface; subsequently, subsections
  have been added to make cross-version migration possible

This file attempts to document the current procedure and point out some
caveats.


Save state procedure
====================

virtio core               virtio transport          virtio device
-----------               ----------------          -------------

                                                    save() function registered
                                                    via register_savevm()
virtio_save()                                       <----------
             ------>      save_config()
                          - save proxy device
                          - save transport-specific
                            device fields
- save common device
  fields
- save common virtqueue
  fields
             ------>      save_queue()
                          - save transport-specific
                            virtqueue fields
             ------>                               save_device()
                                                   - save device-specific
                                                     fields
- save subsections
  - device endianness,
    if changed from
    default endianness
  - 64 bit features, if
    any high feature bit
    is set
  - virtio-1 virtqueue
    fields, if VERSION_1
    is set


Load state procedure
====================

virtio core               virtio transport          virtio device
-----------               ----------------          -------------

                                                    load() function registered
                                                    via register_savevm()
virtio_load()                                       <----------
             ------>      load_config()
                          - load proxy device
                          - load transport-specific
                            device fields
- load common device
  fields
- load common virtqueue
  fields
             ------>      load_queue()
                          - load transport-specific
                            virtqueue fields
- notify guest
             ------>                               load_device()
                                                   - load device-specific
                                                     fields
- load subsections
  - device endianness
  - 64 bit features
  - virtio-1 virtqueue
    fields
- sanitize endianness
- sanitize features
- virtqueue index sanity
  check
                                                   - feature-dependent setup


Implications of this setup
==========================

Devices need to be careful in their state processing during load: the
load_device() procedure is invoked by the core before subsections have
been loaded. Any code that depends on information transmitted in
subsections (e.g. code depending on features) therefore has to run in
the device's load() function after virtio_load() has returned.

Any extension of the migrated state should be done in subsections added
to the core, for compatibility reasons. If transport- or device-specific
state is added, the core needs to invoke a callback from the new
subsection.
+10 −37
@@ -55,6 +55,9 @@
#include "exec/ram_addr.h"

#include "qemu/range.h"
#ifndef _WIN32
#include "qemu/mmap-alloc.h"
#endif

//#define DEBUG_SUBPAGE

@@ -84,9 +87,9 @@ static MemoryRegion io_mem_unassigned;
 */
#define RAM_RESIZEABLE (1 << 2)

/* An extra page is mapped on top of this RAM.
/* RAM is backed by an mmapped file.
 */
#define RAM_EXTRA (1 << 3)
#define RAM_FILE (1 << 3)
#endif

struct CPUTailQ cpus = QTAILQ_HEAD_INITIALIZER(cpus);
@@ -1205,13 +1208,10 @@ static void *file_ram_alloc(RAMBlock *block,
    char *filename;
    char *sanitized_name;
    char *c;
    void *ptr;
    void *area = NULL;
    void *area;
    int fd;
    uint64_t hpagesize;
    uint64_t total;
    Error *local_err = NULL;
    size_t offset;

    hpagesize = gethugepagesize(path, &local_err);
    if (local_err) {
@@ -1255,7 +1255,6 @@ static void *file_ram_alloc(RAMBlock *block,
    g_free(filename);

    memory = ROUND_UP(memory, hpagesize);
    total = memory + hpagesize;

    /*
     * ftruncate is not supported by hugetlbfs in older
@@ -1267,40 +1266,14 @@ static void *file_ram_alloc(RAMBlock *block,
        perror("ftruncate");
    }

    ptr = mmap(0, total, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS,
                -1, 0);
    if (ptr == MAP_FAILED) {
        error_setg_errno(errp, errno,
                         "unable to allocate memory range for hugepages");
        close(fd);
        goto error;
    }

    offset = QEMU_ALIGN_UP((uintptr_t)ptr, hpagesize) - (uintptr_t)ptr;

    area = mmap(ptr + offset, memory, PROT_READ | PROT_WRITE,
                (block->flags & RAM_SHARED ? MAP_SHARED : MAP_PRIVATE) |
                MAP_FIXED,
                fd, 0);
    area = qemu_ram_mmap(fd, memory, hpagesize, block->flags & RAM_SHARED);
    if (area == MAP_FAILED) {
        error_setg_errno(errp, errno,
                         "unable to map backing store for hugepages");
        munmap(ptr, total);
        close(fd);
        goto error;
    }

    if (offset > 0) {
        munmap(ptr, offset);
    }
    ptr += offset;
    total -= offset;

    if (total > memory + getpagesize()) {
        munmap(ptr + memory + getpagesize(),
               total - memory - getpagesize());
    }

    if (mem_prealloc) {
        os_mem_prealloc(fd, area, memory);
    }
@@ -1618,7 +1591,7 @@ ram_addr_t qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
    new_block->used_length = size;
    new_block->max_length = size;
    new_block->flags = share ? RAM_SHARED : 0;
    new_block->flags |= RAM_EXTRA;
    new_block->flags |= RAM_FILE;
    new_block->host = file_ram_alloc(new_block, size,
                                     mem_path, errp);
    if (!new_block->host) {
@@ -1720,8 +1693,8 @@ static void reclaim_ramblock(RAMBlock *block)
        xen_invalidate_map_cache_entry(block->host);
#ifndef _WIN32
    } else if (block->fd >= 0) {
        if (block->flags & RAM_EXTRA) {
            munmap(block->host, block->max_length + getpagesize());
        if (block->flags & RAM_FILE) {
            qemu_ram_munmap(block->host, block->max_length);
        } else {
            munmap(block->host, block->max_length);
        }
+73 −16
@@ -22,6 +22,7 @@
#include "hw/sysbus.h"
#include "exec/address-spaces.h"
#include "intel_iommu_internal.h"
#include "hw/pci/pci.h"

/*#define DEBUG_INTEL_IOMMU*/
#ifdef DEBUG_INTEL_IOMMU
@@ -166,19 +167,17 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
 */
static void vtd_reset_context_cache(IntelIOMMUState *s)
{
    VTDAddressSpace **pvtd_as;
    VTDAddressSpace *vtd_as;
    uint32_t bus_it;
    VTDBus *vtd_bus;
    GHashTableIter bus_it;
    uint32_t devfn_it;

    g_hash_table_iter_init(&bus_it, s->vtd_as_by_busptr);

    VTD_DPRINTF(CACHE, "global context_cache_gen=1");
    for (bus_it = 0; bus_it < VTD_PCI_BUS_MAX; ++bus_it) {
        pvtd_as = s->address_spaces[bus_it];
        if (!pvtd_as) {
            continue;
        }
    while (g_hash_table_iter_next (&bus_it, NULL, (void**)&vtd_bus)) {
        for (devfn_it = 0; devfn_it < VTD_PCI_DEVFN_MAX; ++devfn_it) {
            vtd_as = pvtd_as[devfn_it];
            vtd_as = vtd_bus->dev_as[devfn_it];
            if (!vtd_as) {
                continue;
            }
@@ -754,12 +753,13 @@ static inline bool vtd_is_interrupt_addr(hwaddr addr)
 * @is_write: The access is a write operation
 * @entry: IOMMUTLBEntry that contain the addr to be translated and result
 */
static void vtd_do_iommu_translate(VTDAddressSpace *vtd_as, uint8_t bus_num,
static void vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
                                   uint8_t devfn, hwaddr addr, bool is_write,
                                   IOMMUTLBEntry *entry)
{
    IntelIOMMUState *s = vtd_as->iommu_state;
    VTDContextEntry ce;
    uint8_t bus_num = pci_bus_num(bus);
    VTDContextCacheEntry *cc_entry = &vtd_as->context_cache_entry;
    uint64_t slpte;
    uint32_t level;
@@ -874,6 +874,29 @@ static void vtd_context_global_invalidate(IntelIOMMUState *s)
    }
}


/* Find the VTDBus currently associated with a given bus number.
 */
static VTDBus *vtd_find_as_from_bus_num(IntelIOMMUState *s, uint8_t bus_num)
{
    VTDBus *vtd_bus = s->vtd_as_by_bus_num[bus_num];
    if (!vtd_bus) {
        /* Iterate over the registered buses to find the one which
         * currently holds this bus number, and update the bus_num
         * lookup table:
         */
        GHashTableIter iter;

        g_hash_table_iter_init(&iter, s->vtd_as_by_busptr);
        while (g_hash_table_iter_next (&iter, NULL, (void**)&vtd_bus)) {
            if (pci_bus_num(vtd_bus->bus) == bus_num) {
                s->vtd_as_by_bus_num[bus_num] = vtd_bus;
                return vtd_bus;
            }
        }
    }
    return vtd_bus;
}
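
The lookup above combines a direct-mapped cache (vtd_as_by_bus_num) with a full scan on miss, because a bridge's bus number can be reprogrammed by the guest. A minimal sketch of the same pattern, with a plain array standing in for the GHashTable (all names illustrative):

```c
#include <stddef.h>

#define N_BUSES 4

typedef struct { int bus_num; const char *name; } Bus;

static Bus buses[N_BUSES] = {
    {7, "pcie.0"}, {3, "bridge.1"}, {12, "bridge.2"}, {5, "bridge.3"},
};
static Bus *by_num[256];   /* direct-mapped cache, keyed by bus number */

static Bus *find_bus(int bus_num)
{
    Bus *b = by_num[bus_num];
    if (!b) {
        /* Cache miss: scan all registered buses (a bus number may have
         * changed at runtime), then memoize the result. */
        for (int i = 0; i < N_BUSES; i++) {
            if (buses[i].bus_num == bus_num) {
                by_num[bus_num] = b = &buses[i];
                break;
            }
        }
    }
    return b;
}
```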

/* Do a context-cache device-selective invalidation.
 * @func_mask: FM field after shifting
 */
@@ -882,7 +905,7 @@ static void vtd_context_device_invalidate(IntelIOMMUState *s,
                                          uint16_t func_mask)
{
    uint16_t mask;
    VTDAddressSpace **pvtd_as;
    VTDBus *vtd_bus;
    VTDAddressSpace *vtd_as;
    uint16_t devfn;
    uint16_t devfn_it;
@@ -903,11 +926,11 @@ static void vtd_context_device_invalidate(IntelIOMMUState *s,
    }
    VTD_DPRINTF(INV, "device-selective invalidation source 0x%"PRIx16
                    " mask %"PRIu16, source_id, mask);
    pvtd_as = s->address_spaces[VTD_SID_TO_BUS(source_id)];
    if (pvtd_as) {
    vtd_bus = vtd_find_as_from_bus_num(s, VTD_SID_TO_BUS(source_id));
    if (vtd_bus) {
        devfn = VTD_SID_TO_DEVFN(source_id);
        for (devfn_it = 0; devfn_it < VTD_PCI_DEVFN_MAX; ++devfn_it) {
            vtd_as = pvtd_as[devfn_it];
            vtd_as = vtd_bus->dev_as[devfn_it];
            if (vtd_as && ((devfn_it & mask) == (devfn & mask))) {
                VTD_DPRINTF(INV, "invalidate context-cache of devfn 0x%"PRIx16,
                            devfn_it);
@@ -1805,11 +1828,11 @@ static IOMMUTLBEntry vtd_iommu_translate(MemoryRegion *iommu, hwaddr addr,
        return ret;
    }

    vtd_do_iommu_translate(vtd_as, vtd_as->bus_num, vtd_as->devfn, addr,
    vtd_do_iommu_translate(vtd_as, vtd_as->bus, vtd_as->devfn, addr,
                           is_write, &ret);
    VTD_DPRINTF(MMU,
                "bus %"PRIu8 " slot %"PRIu8 " func %"PRIu8 " devfn %"PRIu8
                " gpa 0x%"PRIx64 " hpa 0x%"PRIx64, vtd_as->bus_num,
                " gpa 0x%"PRIx64 " hpa 0x%"PRIx64, pci_bus_num(vtd_as->bus),
                VTD_PCI_SLOT(vtd_as->devfn), VTD_PCI_FUNC(vtd_as->devfn),
                vtd_as->devfn, addr, ret.translated_addr);
    return ret;
@@ -1839,6 +1862,38 @@ static Property vtd_properties[] = {
    DEFINE_PROP_END_OF_LIST(),
};


VTDAddressSpace *vtd_find_add_as(IntelIOMMUState *s, PCIBus *bus, int devfn)
{
    uintptr_t key = (uintptr_t)bus;
    VTDBus *vtd_bus = g_hash_table_lookup(s->vtd_as_by_busptr, &key);
    VTDAddressSpace *vtd_dev_as;

    if (!vtd_bus) {
        /* No corresponding free() */
        vtd_bus = g_malloc0(sizeof(VTDBus) + sizeof(VTDAddressSpace *) * VTD_PCI_DEVFN_MAX);
        vtd_bus->bus = bus;
        key = (uintptr_t)bus;
        g_hash_table_insert(s->vtd_as_by_busptr, &key, vtd_bus);
    }

    vtd_dev_as = vtd_bus->dev_as[devfn];

    if (!vtd_dev_as) {
        vtd_bus->dev_as[devfn] = vtd_dev_as = g_malloc0(sizeof(VTDAddressSpace));

        vtd_dev_as->bus = bus;
        vtd_dev_as->devfn = (uint8_t)devfn;
        vtd_dev_as->iommu_state = s;
        vtd_dev_as->context_cache_entry.context_cache_gen = 0;
        memory_region_init_iommu(&vtd_dev_as->iommu, OBJECT(s),
                                 &s->iommu_ops, "intel_iommu", UINT64_MAX);
        address_space_init(&vtd_dev_as->as,
                           &vtd_dev_as->iommu, "intel_iommu");
    }
    return vtd_dev_as;
}

/* Do the initialization. It will also be called when reset, so pay
 * attention when adding new initialization stuff.
 */
@@ -1931,13 +1986,15 @@ static void vtd_realize(DeviceState *dev, Error **errp)
    IntelIOMMUState *s = INTEL_IOMMU_DEVICE(dev);

    VTD_DPRINTF(GENERAL, "");
    memset(s->address_spaces, 0, sizeof(s->address_spaces));
    memset(s->vtd_as_by_bus_num, 0, sizeof(s->vtd_as_by_bus_num));
    memory_region_init_io(&s->csrmem, OBJECT(s), &vtd_mem_ops, s,
                          "intel_iommu", DMAR_REG_SIZE);
    sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->csrmem);
    /* No corresponding destroy */
    s->iotlb = g_hash_table_new_full(vtd_uint64_hash, vtd_uint64_equal,
                                     g_free, g_free);
    s->vtd_as_by_busptr = g_hash_table_new_full(vtd_uint64_hash, vtd_uint64_equal,
                                              g_free, g_free);
    vtd_init(s);
}
