Bringing the VMs back online


Okay. New hypervisor hardware. Switching from Xen to KVM. I took a snapshot of the Xen partitions and moved them to a holding location during the migration.
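
(Each of those snapshots is an xz-compressed raw image of the guest's backing device; something along these lines produces them, with the source path below being a placeholder for whatever the old Xen volume group called the device:)

    # dd if=/dev/OLDVG/edge-disk bs=1M | xz > /usr/src/moonunit-guests/edge-disk-$(date -u +%Y%m%dT%H%M%S).xz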

I am now imaging these partitions to the LVM block devices:

calcifer:/home/cjac# for guest in `ls /usr/src/moonunit-guests/ | grep -v -e etc | awk -F- '{print $1}'` ; do lvcreate /dev/vg00 -n ${guest} -L 4200M -C y ; done
  Logical volume "edge" created
  Logical volume "ns0" created
  Logical volume "sh1" created
  Logical volume "sip0" created
  Logical volume "smtp" created
  Logical volume "vpn1" created
  Logical volume "wsg" created
calcifer:/home/cjac# partprobe /dev/vg00/edge
calcifer:/home/cjac# fdisk -l /dev/vg00/edge
GNU Fdisk 1.2.4
Copyright (C) 1998 - 2006 Free Software Foundation, Inc.
This program is free software, covered by the GNU General Public License.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.


Disk /dev/dm-21: 4 GB, 4400524800 bytes
255 heads, 63 sectors/track, 535 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

     Device Boot      Start         End      Blocks   Id  System 
/dev/dm-21p1               1         535     4297356   83  Linux
calcifer:/home/cjac# kpartx -l /dev/vg00/edge
vg00-edge1 : 0 8594712 /dev/vg00/edge 63
calcifer:/home/cjac# kpartx -a /dev/vg00/edge

calcifer:/home/cjac# dd if=/usr/src/moonunit-guests/edge-disk-20140420T070034.xz | xz -d | dd of=/dev/mapper/vg00-edge1 
503474+1 records in
503474+1 records out
257778704 bytes (258 MB) copied, 311.538 s, 827 kB/s
8388608+0 records in
8388608+0 records out
4294967296 bytes (4.3 GB) copied, 925.612 s, 4.6 MB/s
calcifer:/home/cjac# mount /dev/mapper/vg00-edge1 /mnt/tmp
calcifer:/home/cjac# cp -r /lib/modules/`uname -r` /mnt/tmp/lib/modules/
calcifer:/home/cjac# emacs /mnt/tmp/etc/inittab
calcifer:/home/cjac# emacs /mnt/tmp/etc/fstab
calcifer:/home/cjac# umount /mnt/tmp
calcifer:/home/cjac# kpartx -d /dev/vg00/edge
calcifer:/home/cjac# virsh start edge && virsh console edge
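
Those two emacs sessions aren't shown, but for a Xen-to-KVM move the edits are typically just device-name fixups; the exact names depend on how the guest is defined, so the lines below are placeholders. In /mnt/tmp/etc/fstab the root entry moves off the Xen device name, e.g. from

    /dev/xvda1  /  ext3  errors=remount-ro  0  1

to

    /dev/vda1   /  ext3  errors=remount-ro  0  1

and in /mnt/tmp/etc/inittab the getty that used to sit on the Xen console gets pointed at the serial console (ttyS0 for an emulated serial port, hvc0 if the domain uses a virtio console) so that virsh console has something to talk to:

    T0:23:respawn:/sbin/getty -L ttyS0 115200 vt100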

Somebody needs faster drives. Wow, was that ever slow.
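
(For what it's worth, a quick way to tell whether the drives or the single-threaded xz decompression was the limit would be to time the two halves separately; the read count below is arbitrary:)

    # time xzcat /usr/src/moonunit-guests/edge-disk-20140420T070034.xz > /dev/null
    # dd if=/dev/vg00/edge of=/dev/null bs=1M count=1024 iflag=direct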


6 responses to “Bringing the VMs back online”

    • I tried doing HVM on Xen in 2009 and it was pretty painful. The overhead of the virtual network interfaces was not what one would call negligible unless you were running the 2.6.18 kernel that RHEL shipped with. The turn-around time for getting new features into the hypervisor feels faster for the qemu-based system as well.

  1. Take a look at the buffer(1) program. Put that in your pipeline somewhere (such as after that godawful 512-bytes-at-a-time dd(1) call) and enjoy decoupling. You’d see something like a 100-fold throughput increase, assuming your drives can do 60 megs per second and the CPU can un-xz at that speed.

  2. FWIW I’d have tried it a little differently:
    # xzcat /usr/src/moonunit-guests/edge-disk-20140420T070034.xz | dd bs=1M of=/dev/mapper/vg00-edge1

    dd has a tiny 512-byte default block size. That may incur some overhead on reads, and then data can be buffered and chunked up to (I think) at most 5 KiB over the pipe to xz, whereas giving the file as an argument to xzcat allows it to read exactly the amount of data it wants in one operation.

    And on writes, 512-byte blocks might limit you to the disk’s max write ops/second instead of its max possible throughput. (I suspect the writes actually get coalesced into fewer operations, but there’d still be at least some overhead from doing that.)
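
Putting those two suggestions together, the restore pipeline would look something like this (the buffer(1) options are quoted from memory, so check the man page before trusting them):

    # xzcat /usr/src/moonunit-guests/edge-disk-20140420T070034.xz | buffer -m 16m -s 1m | dd bs=1M of=/dev/mapper/vg00-edge1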
