How to convert a VM from VirtualBox to KVM

Converting the virtual machine image itself is very easy, but many guides suggest converting from VDI to RAW and then from RAW to QCOW2. That doesn’t make sense: you waste twice the time, and with a big drive that means hours.

To convert from VDI to QCOW2 just use qemu-img:

qemu-img convert -f vdi -O qcow2 [VBOX-IMAGE.vdi] [KVM-IMAGE.qcow2]
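If you want to watch the progress and shrink the output file at the same time, qemu-img can compress while it converts (`-p` shows progress, `-c` enables qcow2 compression), and `qemu-img info` lets you check the result. The paths here are just example names:

```shell
# Convert directly from VDI to QCOW2, with progress bar and compression
qemu-img convert -p -c -f vdi -O qcow2 winxp.vdi winxp.qcow2

# Check the format, virtual size and actual space used on disk
qemu-img info winxp.qcow2
```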

If the virtual machine was Windows-based, it will probably crash at first boot, because of the virtual hardware changes and because no virtio drivers are installed (unless you create the VM with IDE emulation).
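For example, with virt-install you can import the converted image and force an IDE disk bus, so a stock Windows install can boot without virtio drivers. The name, path, memory size and OS variant below are assumptions, adjust them to your setup:

```shell
# Import the existing QCOW2 image as a new KVM guest with an IDE disk bus
virt-install --import --name winxp --memory 1024 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/winxp.qcow2,format=qcow2,bus=ide \
  --os-variant winxp --network network=default --graphics vnc
```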

To fix the BSOD at boot you can do one of the following:

Personally I chose the first option, but then I had to reinstall hundreds and hundreds of security updates; it would have been better to install the patch before migrating.

Update: I noticed that the converted Windows XP VM uses the CPU at 100% even when idle. This is a nasty bug that would take ages to fix. I tried to force-enable ACPI, but all I get is a BSOD on boot. I’ll just scrap the VM and rebuild it…

Is it worth keeping a PC case open?

I got a new server for Dandandin and kept it open on all sides while I was running tests. The average disk temperature was 23 °C. Once I saw it was stable, I closed the case, placing fans on the front and on the back to get a positive airflow. The disk temperature dropped a couple of degrees, even though the disk has much more activity than last week!

So, even if it might seem that an open case has “more airflow”, you actually get better airflow with a closed case.

Update: it looks like smartctl is not reporting the HDD temperature accurately. I need to do more tests with hddtemp.
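If you want to compare the two readings yourself, both tools can print the temperature for the same drive (run as root; `/dev/sda` is just an example device):

```shell
# Temperature as reported by the drive's SMART attributes
smartctl -A /dev/sda | grep -i temperature

# Temperature as read by hddtemp for the same drive
hddtemp /dev/sda
```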

How to do a quick CPU benchmark on Linux

With Enki (a brain-training app for coders – if you want to try it and you need an invite, you can use my code: MAGNE985) I found a quick benchmark for Linux to measure the speed of a single CPU core.
dd if=/dev/zero bs=1M count=1024 | md5sum
This line tells the CPU to calculate an MD5 hash of 1 GB of “zeroes” and measures how long it takes. For example, on the Pentium G3420 that I have in my office I get this:
dd if=/dev/zero bs=1M count=1024 | md5sum
1073741824 bytes (1.1 GB) copied, 2.10036 s, 511 MB/s

dd if=/dev/zero bs=10M count=2048 | md5sum
21474836480 bytes (21 GB) copied, 49.0278 s, 438 MB/s
while on an Intel Xeon W3520 (my web server) I get this:
dd if=/dev/zero bs=1M count=1024 | md5sum
1073741824 bytes (1.1 GB) copied, 2.79137 s, 385 MB/s

dd if=/dev/zero bs=10M count=2048 | md5sum
21474836480 bytes (21 GB) copied, 56.9042 s, 377 MB/s
Hey! It takes almost 10 seconds more! What? An expensive Xeon slower than a cheaper Pentium?
Yes, the server is outdated, but I did not expect such a difference! It’s time to change my web server!
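To double-check that it really is the CPU’s hashing speed and not dd getting in the way, OpenSSL ships its own single-core digest benchmark: it hashes in-memory buffers of several sizes and prints throughput, with no pipeline involved (assuming openssl is installed):

```shell
# Benchmark MD5 on a single core across several buffer sizes
openssl speed md5
```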