Ceph QEMU-KVM performance issues in Windows (it’s slow)

Here are my benchmarks. I’m comparing a Samsung EVO SSD (in a workstation) with a Windows 7 QEMU-KVM guest whose disk lives on my Ceph cluster.

Notes:

  • Writeback caching is enabled
  • Using VirtIO drivers
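For reference, those two settings correspond to QEMU disk options along these lines. This is only a sketch — the pool/image name and memory size are placeholders, not my actual configuration:

```shell
# Sketch: attach an RBD-backed disk to a QEMU/KVM guest on the VirtIO bus
# with writeback caching. "rbd/win7" is a placeholder pool/image name.
qemu-system-x86_64 \
  -enable-kvm -m 4096 -smp 2 \
  -drive file=rbd:rbd/win7,format=raw,if=virtio,cache=writeback
```

The Windows guest also needs the virtio-win storage drivers installed before it can see a disk on the VirtIO bus at all.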

ATTO

Ceph Disk vs SSD Disk

[Image: compare_atto-ceph-ssd]

CrystalMark

Ceph Disk vs SSD Disk

[Image: compare_crystal]

HD Tune

Ceph Disk vs SSD Disk

[Image: compare_hdtune]

Interestingly, the HD Tune benchmark eventually hits great speeds, but it takes about 10 seconds of benchmarking to ramp up, and the speed fluctuates wildly throughout.

Ceph LXC openSUSE Samba share benchmarked within Windows

This is currently my fastest way of accessing my Ceph cluster from Windows. Not ideal. It seems to be limited by gigabit. I could add more NICs and bridge them together, but that’s getting a little weird.
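For what it’s worth, mapping that share from inside the Windows guest is just the usual drive mapping. The hostname and share name below are placeholders:

```shell
:: Windows cmd — map the LXC Samba share to a drive letter.
:: "cephbox" and "ceph" are placeholder host/share names.
net use Z: \\cephbox\ceph /persistent:yes
```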

4 bridged VirtIO NICs

I’m guessing there was no improvement because the other end of the connection would need the same configuration in place. Or it’s just a stupid idea / crappy logic. Update: it turns out the single NIC was connected at 10 Gbps all along, so bridging extra NICs couldn’t have helped.

Our Windows 7 KVM benchmarking a network share to a slower Windows 2003 VM (on a separate server), for comparison purposes:

Networked drive with 2 more Ceph OSDs:

2 more Ceph OSDs, local VirtIO disk

Research

Unless there is a Windows tunable equivalent to Linux’s “read_ahead_kb”, or Ceph addresses this problem on their end (as planned, but with little to no progress so far), this is as good as it gets.
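On a Linux guest, for comparison, that readahead knob can be raised per block device. The device name below is an assumption:

```shell
# Raise readahead on a VirtIO disk in a Linux guest ("vda" is an assumed
# device name). The sysfs value is in KB; the default is usually 128.
cat /sys/block/vda/queue/read_ahead_kb
echo 4096 > /sys/block/vda/queue/read_ahead_kb   # requires root
# Equivalent via blockdev (units are 512-byte sectors: 8192 = 4096 KB)
blockdev --setra 8192 /dev/vda
```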

Update

I found that one of my 3 Ceph servers was linked at 100 Mbps instead of gigabit; after fixing it I’m now getting:
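The bad link would have shown up in ethtool, which is worth checking on every node. The interface name below is an assumption:

```shell
# Check the negotiated link speed on each Ceph node ("eth0" is an assumed
# interface name). A healthy gigabit link reports "Speed: 1000Mb/s".
ethtool eth0 | grep -E 'Speed|Duplex'
```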

HD Tune: Ceph VirtIO RBD vs QEMU image on a single spinning disk

Windows is still performing quite badly. Not sure if moving our network to InfiniBand will help. Maybe Windows in KVM is just crap.
