
Add libvirt as a provider

Lénaïc Huard 10 years ago
parent
commit
9fbec064d2

+ 1 - 0
README.md

@@ -20,6 +20,7 @@ Setup
 - Setup for a specific cloud:
   - [AWS](README_AWS.md)
   - [GCE](README_GCE.md)
+  - [local VMs](README_libvirt.md)
 
 - Build
   - [How to build the openshift-ansible rpms](BUILD.md)

+ 92 - 0
README_libvirt.md

@@ -0,0 +1,92 @@
+
+LIBVIRT Setup instructions
+==========================
+
+`libvirt` is an `openshift-ansible` provider that uses `libvirt` to create local VMs (CentOS 7 by default) that are provisioned exactly the same way cloud VMs would be.
+
+This makes `libvirt` useful for developing, testing and debugging OpenShift and openshift-ansible locally on the developer’s workstation before going to the cloud.
+
+Install dependencies
+--------------------
+
+1. Install [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html)
+2. Install [ebtables](http://ebtables.netfilter.org/)
+3. Install [qemu](http://wiki.qemu.org/Main_Page)
+4. Install [libvirt](http://libvirt.org/)
+5. Enable and start the libvirt daemon, e.g.:
+   * ``systemctl enable libvirtd``
+   * ``systemctl start libvirtd``
+6. [Grant libvirt access to your user¹](https://libvirt.org/aclpolkit.html)
+7. Check that your `$HOME` is accessible to the qemu user²
+
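+For example, on Fedora the dependencies can be installed with (package names are an assumption and may differ on other distributions):
+
+```
+sudo yum install dnsmasq ebtables qemu-kvm libvirt
+```
+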
+#### ¹ Depending on your distribution, libvirt access may be denied by default or may require a password for each access.
+
+You can test it with the following command:
+```
+virsh -c qemu:///system pool-list
+```
+
+If you get access-denied error messages, read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html.
+
+In short, if your libvirt has been compiled with Polkit support (e.g. Arch Linux, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant `$USER` full access to libvirt:
+
+```
+sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
+polkit.addRule(function(action, subject) {
+        if (action.id == "org.libvirt.unix.manage" &&
+            subject.user == "$USER") {
+                polkit.log("action=" + action);
+                polkit.log("subject=" + subject);
+                return polkit.Result.YES;
+        }
+});
+EOF
+```
+
+If your libvirt has not been compiled with Polkit (ex: Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:
+
+```
+# Check the permissions on the socket:
+ls -l /var/run/libvirt/libvirt-sock
+srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock
+
+# Add yourself to the group owning the socket (run as root):
+usermod -a -G libvirtd $USER
+# $USER needs to log out and back in for the new group to take effect
+```
+
+(Replace `$USER` with your login name)
+
+#### ² QEMU runs as a specific user. It must have access to the VMs' drives
+
+All the disk drive resources needed by the VMs (Fedora disk image, cloud-init files) are put inside `~/libvirt-storage-pool-openshift/`.
+
+As we’re using the `qemu:///system` instance of libvirt, qemu runs as a specific `user:group`, distinct from your user, which is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to the libvirt storage pool.
+
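+One quick way to check the configured user and group (assuming the stock `qemu.conf`, where the defaults appear as commented-out lines):
+
+```
+grep -E '^#?(user|group) =' /etc/libvirt/qemu.conf
+```
+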
+If your `$HOME` is world-readable, everything is fine. If your `$HOME` is private, `ansible` will fail with an error message like:
+
+```
+error: Cannot access storage file '$HOME/libvirt-storage-pool-openshift/lenaic-master-216d8.qcow2' (as uid:99, gid:78): Permission denied
+```
+
+To fix that issue, you have several options:
+* Set `libvirt_storage_pool_path` inside `playbooks/libvirt/openshift-cluster/launch.yml` and `playbooks/libvirt/openshift-cluster/terminate.yml` to a directory:
+  * backed by a filesystem with plenty of free disk space;
+  * writable by your user;
+  * accessible by the qemu user.
+* Grant the qemu user access to the storage pool.
+
+On Arch Linux:
+
+```
+setfacl -m g:kvm:--x ~
+```
+
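+On Fedora, where qemu typically runs as the `qemu` user, the equivalent would be:
+
+```
+setfacl -m u:qemu:--x ~
+```
+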
+Test the setup
+--------------
+
+```
+cd openshift-ansible
+
+# Create a cluster named 'lenaic' with 1 master and 3 nodes
+bin/cluster create -m 1 -n 3 libvirt lenaic
+
+# Tear the cluster down again
+bin/cluster terminate libvirt lenaic
+```
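+
+While the cluster is running, the created VMs can be inspected with, e.g.:
+
+```
+virsh -c qemu:///system list
+```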

+ 4 - 2
bin/cluster

@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2
 # vim: expandtab:tabstop=4:shiftwidth=4
 
 import argparse
@@ -94,6 +94,8 @@ class Cluster(object):
                 os.environ[key] = config.get('ec2', key)
 
             inventory = '-i inventory/aws/ec2.py'
+        elif 'libvirt' == provider:
+            inventory = '-i inventory/libvirt/hosts'
         else:
             # this code should never be reached
             raise ValueError("invalid PROVIDER {}".format(provider))
@@ -139,7 +141,7 @@ if __name__ == '__main__':
 
     cluster = Cluster()
 
-    providers = ['gce', 'aws']
+    providers = ['gce', 'aws', 'libvirt']
     parser = argparse.ArgumentParser(
         description='Python wrapper to ensure proper environment for OpenShift ansible playbooks',
     )

+ 2 - 0
inventory/libvirt/group_vars/all

@@ -0,0 +1,2 @@
+---
+ansible_ssh_user: root

+ 2 - 0
inventory/libvirt/hosts

@@ -0,0 +1,2 @@
+# Eventually we'll add the GCE, AWS, etc. dynamic inventories, but for now...
+localhost ansible_python_interpreter=/usr/bin/python2

+ 1 - 0
playbooks/libvirt/openshift-cluster/filter_plugins

@@ -0,0 +1 @@
+../../../filter_plugins

+ 65 - 0
playbooks/libvirt/openshift-cluster/launch.yml

@@ -0,0 +1,65 @@
+- name: Launch instance(s)
+  hosts: localhost
+  connection: local
+  gather_facts: no
+
+  vars:
+    libvirt_storage_pool_path: "{{ lookup('env','HOME') }}/libvirt-storage-pool-openshift"
+    libvirt_storage_pool: 'openshift'
+    libvirt_uri: 'qemu:///system'
+
+  vars_files:
+    - vars.yml
+
+  tasks:
+    - set_fact:
+        k8s_type: master
+
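+    # The "'%05x' | format( 1048576 | random )" expression appends a random
+    # hexadecimal suffix so that instance names are unique across runs.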
+    - name: Generate master instance name(s)
+      set_fact:
+        scratch_name: "{{ cluster_id }}-{{ k8s_type }}-{{ '%05x' | format( 1048576 | random ) }}"
+      register: master_names_output
+      with_sequence: start=1 end='{{ num_masters }}'
+
+    - set_fact:
+        master_names: "{{ master_names_output.results | oo_collect('ansible_facts') | oo_collect('scratch_name') }}"
+
+    - include: launch_instances.yml
+      vars:
+        instances: '{{ master_names }}'
+        cluster: '{{ cluster_id }}'
+        type: '{{ k8s_type }}'
+        group_name: 'tag_env-host-type-{{ cluster_id }}-openshift-master'
+
+    - set_fact:
+        k8s_type: node
+
+    - name: Generate node instance name(s)
+      set_fact:
+        scratch_name: "{{ cluster_id }}-{{ k8s_type }}-{{ '%05x' | format( 1048576 | random ) }}"
+      register: node_names_output
+      with_sequence: start=1 end='{{ num_nodes }}'
+
+    - set_fact:
+        node_names: "{{ node_names_output.results | oo_collect('ansible_facts') | oo_collect('scratch_name') }}"
+
+    - include: launch_instances.yml
+      vars:
+        instances: '{{ node_names }}'
+        cluster: '{{ cluster_id }}'
+        type: '{{ k8s_type }}'
+
+- hosts: 'tag_env-{{ cluster_id }}'
+  roles:
+    - openshift_repos
+    - os_update_latest
+
+- include: ../openshift-master/config.yml
+  vars:
+    oo_host_group_exp: 'groups["tag_env-host-type-{{ cluster_id }}-openshift-master"]'
+    oo_env: '{{ cluster_id }}'
+
+- include: ../openshift-node/config.yml
+  vars:
+    oo_host_group_exp: 'groups["tag_env-host-type-{{ cluster_id }}-openshift-node"]'
+    oo_env: '{{ cluster_id }}'

+ 102 - 0
playbooks/libvirt/openshift-cluster/launch_instances.yml

@@ -0,0 +1,102 @@
+- name: Create the libvirt storage directory for openshift
+  file:
+    dest: '{{ libvirt_storage_pool_path }}'
+    state: directory
+
+- name: Download Base Cloud image
+  get_url:
+    url: '{{ base_image_url }}'
+    sha256sum: '{{ base_image_sha256 }}'
+    dest: '{{ libvirt_storage_pool_path }}/{{ base_image_name }}'
+
+- name: Create the cloud-init config drive path
+  file:
+    dest: '{{ libvirt_storage_pool_path }}/{{ item }}_configdrive/openstack/latest'
+    state: directory
+  with_items: '{{ instances }}'
+
+- name: Create the cloud-init config drive files
+  template:
+    src: '{{ item[1] }}'
+    dest: '{{ libvirt_storage_pool_path }}/{{ item[0] }}_configdrive/openstack/latest/{{ item[1] }}'
+  with_nested:
+    - '{{ instances }}'
+    - [ user-data, meta-data ]
+
+- name: Create the cloud-init config drive
+  command: 'genisoimage -output {{ libvirt_storage_pool_path }}/{{ item }}_cloud-init.iso -volid cidata -joliet -rock user-data meta-data'
+  args:
+    chdir: '{{ libvirt_storage_pool_path }}/{{ item }}_configdrive/openstack/latest'
+    creates: '{{ libvirt_storage_pool_path }}/{{ item }}_cloud-init.iso'
+  with_items: '{{ instances }}'
+
+- name: Create the libvirt storage pool for openshift
+  command: 'virsh -c {{ libvirt_uri }} pool-create-as {{ libvirt_storage_pool }} dir --target {{ libvirt_storage_pool_path }}'
+  ignore_errors: yes
+
+- name: Refresh the libvirt storage pool for openshift
+  command: 'virsh -c {{ libvirt_uri }} pool-refresh {{ libvirt_storage_pool }}'
+
+- name: Create VMs drives
+  command: 'virsh -c {{ libvirt_uri }} vol-create-as {{ libvirt_storage_pool }} {{ item }}.qcow2 10G --format qcow2 --backing-vol {{ base_image_name }} --backing-vol-format qcow2'
+  with_items: '{{ instances }}'
+
+- name: Create VMs
+  virt:
+    name: '{{ item }}'
+    command: define
+    xml: "{{ lookup('template', '../templates/domain.xml') }}"
+    uri: '{{ libvirt_uri }}'
+  with_items: '{{ instances }}'
+
+- name: Start VMs
+  virt:
+    name: '{{ item }}'
+    state: running
+    uri: '{{ libvirt_uri }}'
+  with_items: '{{ instances }}'
+
+- name: Collect MAC addresses of the VMs
+  shell: 'virsh -c {{ libvirt_uri }} dumpxml {{ item }} | xmllint --xpath "string(//domain/devices/interface/mac/@address)" -'
+  register: scratch_mac
+  with_items: '{{ instances }}'
+
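+# Poll the host ARP table until every VM's MAC address has an entry there,
+# i.e. until each VM has been assigned an IP address.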
+- name: Wait for the VMs to get an IP
+  command: "egrep -c '{{ scratch_mac.results | oo_collect('stdout') | join('|') }}' /proc/net/arp"
+  ignore_errors: yes
+  register: nb_allocated_ips
+  until: nb_allocated_ips.stdout == '{{ instances | length }}'
+  retries: 30
+  delay: 1
+
+- name: Collect IP addresses of the VMs
+  shell: "awk '/{{ item.stdout }}/ {print $1}' /proc/net/arp"
+  register: scratch_ip
+  with_items: '{{ scratch_mac.results }}'
+
+- set_fact:
+    ips: "{{ scratch_ip.results | oo_collect('stdout') }}"
+
+- name: Add new instances
+  add_host:
+    hostname: '{{ item.0 }}'
+    ansible_ssh_host: '{{ item.1 }}'
+    ansible_ssh_user: root
+    groups: 'tag_env-{{ cluster }}, tag_host-type-{{ type }}, tag_env-host-type-{{ cluster }}-openshift-{{ type }}'
+  with_together:
+    - instances
+    - ips
+
+- name: Wait for ssh
+  wait_for:
+    host: '{{ item }}'
+    port: 22
+  with_items: ips
+
+- name: Wait for root user setup
+  command: 'ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null root@{{ item }} echo root user is setup'
+  register: result
+  until: result.rc == 0
+  retries: 30
+  delay: 1
+  with_items: ips

+ 43 - 0
playbooks/libvirt/openshift-cluster/list.yml

@@ -0,0 +1,43 @@
+- name: Generate oo_list_hosts group
+  hosts: localhost
+  connection: local
+  gather_facts: no
+
+  vars:
+    libvirt_uri: 'qemu:///system'
+
+  tasks:
+    - name: List VMs
+      virt:
+        command: list_vms
+      register: list_vms
+
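+    # item|truncate(cluster_id|length+1, True) yields the first characters of
+    # the VM name followed by '...', so the 'when' below selects only the VMs
+    # whose name starts with '<cluster_id>-'.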
+    - name: Collect MAC addresses of the VMs
+      shell: 'virsh -c {{ libvirt_uri }} dumpxml {{ item }} | xmllint --xpath "string(//domain/devices/interface/mac/@address)" -'
+      register: scratch_mac
+      with_items: '{{ list_vms.list_vms }}'
+      when: item|truncate(cluster_id|length+1, True) == '{{ cluster_id }}-...'
+
+    - name: Collect IP addresses of the VMs
+      shell: "awk '/{{ item.stdout }}/ {print $1}' /proc/net/arp"
+      register: scratch_ip
+      with_items: '{{ scratch_mac.results }}'
+      when: item.skipped is not defined
+
+    - name: Add hosts
+      add_host:
+        hostname: '{{ item[0] }}'
+        ansible_ssh_host: '{{ item[1].stdout }}'
+        ansible_ssh_user: root
+        groups: oo_list_hosts
+      with_together:
+        - '{{ list_vms.list_vms }}'
+        - '{{ scratch_ip.results }}'
+      when: item[1].skipped is not defined
+
+- name: List Hosts
+  hosts: oo_list_hosts
+
+  tasks:
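+    # Local libvirt VMs have no separate public address, so the same default
+    # IPv4 address is reported as both public and private.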
+    - debug:
+        msg: 'public:{{ansible_default_ipv4.address}} private:{{ansible_default_ipv4.address}}'

+ 1 - 0
playbooks/libvirt/openshift-cluster/roles

@@ -0,0 +1 @@
+../../../roles

+ 41 - 0
playbooks/libvirt/openshift-cluster/terminate.yml

@@ -0,0 +1,41 @@
+- name: Terminate instance(s)
+  hosts: localhost
+  connection: local
+  gather_facts: no
+
+  vars:
+    libvirt_storage_pool_path: "{{ lookup('env','HOME') }}/libvirt-storage-pool-openshift"
+    libvirt_storage_pool: 'openshift'
+    libvirt_uri: 'qemu:///system'
+
+  tasks:
+    - name: List VMs
+      virt:
+        command: list_vms
+      register: list_vms
+
+    - name: Destroy VMs
+      virt:
+        name: '{{ item[0] }}'
+        command: '{{ item[1] }}'
+        uri: '{{ libvirt_uri }}'
+      with_nested:
+        - '{{ list_vms.list_vms }}'
+        - [ destroy, undefine ]
+      when: item[0]|truncate(cluster_id|length+1, True) == '{{ cluster_id }}-...'
+
+    - name: Delete VMs config drive
+      file:
+        path: '{{ libvirt_storage_pool_path }}/{{ item }}_configdrive/openstack'
+        state: absent
+      with_items: '{{ list_vms.list_vms }}'
+      when: item|truncate(cluster_id|length+1, True) == '{{ cluster_id }}-...'
+
+    - name: Delete VMs drives
+      command: 'virsh -c {{ libvirt_uri }} vol-delete --pool {{ libvirt_storage_pool }} {{ item[0] }}{{ item[1] }}'
+      args:
+        removes: '{{ libvirt_storage_pool_path }}/{{ item[0] }}{{ item[1] }}'
+      with_nested:
+        - '{{ list_vms.list_vms }}'
+        - [ '_configdrive', '_cloud-init.iso', '.qcow2' ]
+      when: item[0]|truncate(cluster_id|length+1, True) == '{{ cluster_id }}-...'

+ 7 - 0
playbooks/libvirt/openshift-cluster/vars.yml

@@ -0,0 +1,7 @@
+# base_image_url: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
+# base_image_name: Fedora-Cloud-Base-20141203-21.x86_64.qcow2
+# base_image_sha256: 3a99bb89f33e3d4ee826c8160053cdb8a72c80cd23350b776ce73cd244467d86
+
+base_image_url: http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
+base_image_name: CentOS-7-x86_64-GenericCloud.qcow2
+base_image_sha256: e324e3ab1d24a1bbf035ddb365e7f9058c0b454acf48d7aa15c5519fae5998ab

+ 21 - 0
playbooks/libvirt/openshift-master/config.yml

@@ -0,0 +1,21 @@
+- name: master/config.yml, populate oo_masters_to_config host group if needed
+  hosts: localhost
+  gather_facts: no
+  tasks:
+    - name: "Evaluate oo_host_group_exp if it's set"
+      add_host:
+        name: '{{ item }}'
+        groups: oo_masters_to_config
+      with_items: "{{ oo_host_group_exp | default('') }}"
+      when: oo_host_group_exp is defined
+
+- name: Configure instances
+  hosts: oo_masters_to_config
+  vars:
+    openshift_hostname: '{{ ansible_default_ipv4.address }}'
+  vars_files:
+    - vars.yml
+  roles:
+    - openshift_master
+    - pods
+    - os_env_extras

+ 1 - 0
playbooks/libvirt/openshift-master/filter_plugins

@@ -0,0 +1 @@
+../../../filter_plugins

+ 1 - 0
playbooks/libvirt/openshift-master/roles

@@ -0,0 +1 @@
+../../../roles

+ 1 - 0
playbooks/libvirt/openshift-master/vars.yml

@@ -0,0 +1 @@
+openshift_debug_level: 4

+ 102 - 0
playbooks/libvirt/openshift-node/config.yml

@@ -0,0 +1,102 @@
+- name: node/config.yml, populate oo_nodes_to_config host group if needed
+  hosts: localhost
+  gather_facts: no
+  tasks:
+    - name: "Evaluate oo_host_group_exp if it's set"
+      add_host:
+        name: '{{ item }}'
+        groups: oo_nodes_to_config
+      with_items: "{{ oo_host_group_exp | default('') }}"
+      when: oo_host_group_exp is defined
+
+    - add_host:
+        name: "{{ groups['tag_env-host-type-' ~ cluster_id ~ '-openshift-master'][0] }}"
+        groups: oo_first_master
+      when: oo_host_group_exp is defined
+
+
+- name: Gather and set facts for hosts to configure
+  hosts: oo_nodes_to_config
+  roles:
+  - openshift_facts
+  tasks:
+  # Since the master is registering the nodes before they are configured, we
+  # need to make sure to set the node properties beforehand if we do not want
+  # the defaults
+  - openshift_facts:
+      role: "{{ item.role }}"
+      local_facts: "{{ item.local_facts }}"
+    with_items:
+    - role: common
+      local_facts:
+        hostname: "{{ ansible_default_ipv4.address }}"
+    - role: node
+      local_facts:
+        external_id: "{{ openshift_node_external_id | default(None) }}"
+        resources_cpu: "{{ openshift_node_resources_cpu | default(None) }}"
+        resources_memory: "{{ openshift_node_resources_memory | default(None) }}"
+        pod_cidr: "{{ openshift_node_pod_cidr | default(None) }}"
+        labels: "{{ openshift_node_labels | default(None) }}"
+        annotations: "{{ openshift_node_annotations | default(None) }}"
+
+
+- name: Register nodes
+  hosts: oo_first_master
+  vars:
+    openshift_nodes: "{{ hostvars
+          | oo_select_keys(groups['oo_nodes_to_config']) }}"
+  roles:
+  - openshift_register_nodes
+  tasks:
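+  # The certificates generated on the master are first pulled down to a local
+  # temporary directory, then pushed out to each node in the next play.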
+  - name: Create local temp directory for syncing certs
+    local_action: command /usr/bin/mktemp -d /tmp/openshift-ansible-XXXXXXX
+    register: mktemp
+
+  - name: Sync master certs to localhost
+    synchronize:
+      mode: pull
+      checksum: yes
+      src: /var/lib/openshift/openshift.local.certificates
+      dest: "{{ mktemp.stdout }}"
+
+- name: Configure instances
+  hosts: oo_nodes_to_config
+  vars_files:
+  - vars.yml
+  vars:
+    sync_tmpdir: "{{ hostvars[groups['oo_first_master'][0]].mktemp.stdout }}"
+    cert_parent_rel_path: openshift.local.certificates
+    cert_rel_path: "{{ cert_parent_rel_path }}/node-{{ openshift.common.hostname }}"
+    cert_base_path: /var/lib/openshift
+    cert_parent_path: "{{ cert_base_path }}/{{ cert_parent_rel_path }}"
+    cert_path: "{{ cert_base_path }}/{{ cert_rel_path }}"
+  pre_tasks:
+  - name: Ensure certificate directories exist
+    file:
+      path: "{{ item }}"
+      state: directory
+    with_items:
+    - "{{ cert_path }}"
+    - "{{ cert_parent_path }}/ca"
+
+  # TODO: notify restart openshift-node and/or restart openshift-sdn-node,
+  # possibly test service started time against certificate/config file
+  # timestamps in openshift-node or openshift-sdn-node to trigger notify
+  - name: Sync certs to nodes
+    synchronize:
+      checksum: yes
+      src: "{{ item.src }}"
+      dest: "{{ item.dest }}"
+      owner: no
+      group: no
+    with_items:
+    - src: "{{ sync_tmpdir }}/{{ cert_rel_path }}"
+      dest: "{{ cert_parent_path }}"
+    - src: "{{ sync_tmpdir }}/{{ cert_parent_rel_path }}/ca/cert.crt"
+      dest: "{{ cert_parent_path }}/ca/cert.crt"
+  - local_action: file name={{ sync_tmpdir }} state=absent
+    run_once: true
+  roles:
+    - openshift_node
+    - os_env_extras
+    - os_env_extras_node

+ 1 - 0
playbooks/libvirt/openshift-node/filter_plugins

@@ -0,0 +1 @@
+../../../filter_plugins

+ 1 - 0
playbooks/libvirt/openshift-node/roles

@@ -0,0 +1 @@
+../../../roles

+ 1 - 0
playbooks/libvirt/openshift-node/vars.yml

@@ -0,0 +1 @@
+openshift_debug_level: 4

+ 62 - 0
playbooks/libvirt/templates/domain.xml

@@ -0,0 +1,62 @@
+<domain type='kvm' id='8'>
+  <name>{{ item }}</name>
+  <memory unit='GiB'>1</memory>
+  <currentMemory unit='GiB'>1</currentMemory>
+  <vcpu placement='static'>2</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <features>
+    <acpi/>
+    <apic/>
+    <pae/>
+  </features>
+  <clock offset='utc'>
+    <timer name='rtc' tickpolicy='catchup'/>
+    <timer name='pit' tickpolicy='delay'/>
+    <timer name='hpet' present='no'/>
+  </clock>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>restart</on_crash>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+    <disk type='file' device='disk'>
+      <driver name='qemu' type='qcow2'/>
+      <source file='{{ libvirt_storage_pool_path }}/{{ item }}.qcow2'/>
+      <target dev='vda' bus='virtio'/>
+    </disk>
+    <disk type='file' device='cdrom'>
+      <driver name='qemu' type='raw'/>
+      <source file='{{ libvirt_storage_pool_path }}/{{ item }}_cloud-init.iso'/>
+      <target dev='vdb' bus='virtio'/>
+      <readonly/>
+    </disk>
+    <controller type='usb' index='0' />
+    <interface type='network'>
+      <source network='default'/>
+      <model type='virtio'/>
+    </interface>
+    <serial type='pty'>
+      <target port='0'/>
+    </serial>
+    <console type='pty'>
+      <target type='serial' port='0'/>
+    </console>
+    <channel type='spicevmc'>
+      <target type='virtio' name='com.redhat.spice.0'/>
+    </channel>
+    <input type='tablet' bus='usb' />
+    <input type='mouse' bus='ps2'/>
+    <input type='keyboard' bus='ps2'/>
+    <graphics type='spice' autoport='yes' />
+    <video>
+      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
+    </video>
+    <redirdev bus='usb' type='spicevmc'>
+    </redirdev>
+    <memballoon model='virtio'>
+    </memballoon>
+  </devices>
+</domain>

+ 2 - 0
playbooks/libvirt/templates/meta-data

@@ -0,0 +1,2 @@
+instance-id: {{ item[0] }}
+local-hostname: {{ item[0] }}

+ 10 - 0
playbooks/libvirt/templates/user-data

@@ -0,0 +1,10 @@
+#cloud-config
+
+disable_root: 0
+
+system_info:
+  default_user:
+    name: root
+
+ssh_authorized_keys:
+  - {{ lookup('file', '~/.ssh/id_rsa.pub') }}