Q: tests/ioctl_kvm_run_common.c failing with KVM_EXIT_FAIL_ENTRY

Masatake YAMATO yamato at redhat.com
Thu May 23 03:23:45 UTC 2019


> Hi,
> 
> Our ioctl_kvm_run* tests started to fail on f30-test.fedorainfracloud.org
> with the following symptoms:
> 
> strace/tests$ ./ioctl_kvm_run; echo \$?=$?
> ioctl(3</dev/kvm>, KVM_GET_API_VERSION, 0) = 12
> ioctl(3</dev/kvm>, KVM_CHECK_EXTENSION, KVM_CAP_USER_MEMORY) = 1
> ioctl(3</dev/kvm>, KVM_CREATE_VM, 0) = 4<anon_inode:kvm-vm>
> ioctl(4<anon_inode:kvm-vm>, KVM_SET_USER_MEMORY_REGION, {slot=0, flags=0, guest_phys_addr=0x1000, memory_size=4096, userspace_addr=0x7fe116af2000}) = 0
> ioctl(4<anon_inode:kvm-vm>, KVM_CREATE_VCPU, 0) = 5<anon_inode:kvm-vcpu:0>
> ioctl(3</dev/kvm>, KVM_GET_VCPU_MMAP_SIZE, 0) = 12288
> ioctl(3</dev/kvm>, KVM_GET_SUPPORTED_CPUID, 0x7fe116ac2378) = -1 E2BIG (Argument list too long)
> ioctl(3</dev/kvm>, KVM_GET_SUPPORTED_CPUID, {nent=26, entries=[...]}) = 0
> ioctl(5<anon_inode:kvm-vcpu:0>, KVM_SET_CPUID2, {nent=0, entries=[]}) = 0
> ioctl(5<anon_inode:kvm-vcpu:0>, KVM_SET_CPUID2, {nent=26, entries=[...]}) = 0
> ioctl(5<anon_inode:kvm-vcpu:0>, KVM_SET_CPUID2, NULL) = -1 EFAULT (Bad address)
> ioctl(5<anon_inode:kvm-vcpu:0>, KVM_GET_SREGS, {cs={base=0xffff0000, limit=65535, selector=61440, type=11, present=1, dpl=0, db=0, s=1, l=0, g=0, avl=0}, ...}) = 0
> ioctl(5<anon_inode:kvm-vcpu:0>, KVM_SET_SREGS, {cs={base=0, limit=65535, selector=0, type=11, present=1, dpl=0, db=0, s=1, l=0, g=0, avl=0}, ...}) = 0
> ioctl(5<anon_inode:kvm-vcpu:0>, KVM_SET_REGS, {rax=0x2, ..., rsp=0, rbp=0, ..., rip=0x1000, rflags=0x2}) = 0
> ioctl(5<anon_inode:kvm-vcpu:0>, KVM_RUN, 0) = 0
> exit_reason = 0x9
> $?=1
> 
> exit_reason 0x9 is KVM_EXIT_FAIL_ENTRY.
> 
> This is happening in the following system:
> Linux f30-test.fedorainfracloud.org 5.0.5-300.fc30.x86_64 #1 SMP Wed Mar 27 20:45:26 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
> 
> There are no such problems in the following systems:
> Linux f29-test.fedorainfracloud.org 5.0.5-200.fc29.x86_64 #1 SMP Wed Mar 27 20:58:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
> Linux rawhide-test.fedorainfracloud.org 4.19.0-0.rc8.git3.1.fc30.x86_64 #1 SMP Thu Oct 18 13:50:54 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
> 
> Any ideas how to handle this?

I tried the test case on my machine, which also runs 5.0.5-300.fc30.x86_64,
but I could not reproduce the failure.

I found an explanation of KVM_EXIT_FAIL_ENTRY in the kernel documentation,
linux/Documentation/virtual/kvm/api.txt:

		    /* KVM_EXIT_FAIL_ENTRY */
		    struct {
			    __u64 hardware_entry_failure_reason;
		    } fail_entry;

    If exit_reason is KVM_EXIT_FAIL_ENTRY, the vcpu could not be run due
    to unknown reasons.  Further architecture-specific information is
    available in hardware_entry_failure_reason.

So the issue is closely related to the hardware.

I sent a patch that makes the test case dump hardware_entry_failure_reason;
we need to see that value.

I would also like to see the output of dmesg when the test fails, along with
the contents of /proc/cpuinfo: the meaning of hardware_entry_failure_reason
depends on the architecture (here, AMD or Intel).
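Something like the following could collect that information on the failing
host (dmesg may need root if kernel.dmesg_restrict is set; the vmx/svm flags
will be absent on a guest without nested virtualization):

```shell
# Kernel messages around the time of the failed KVM_RUN
dmesg 2>/dev/null | tail -n 30

# CPU identification (on x86 the vendor_id line shows AMD vs Intel)
head -n 8 /proc/cpuinfo

# Virtualization flags: 'vmx' => Intel VT-x, 'svm' => AMD-V
grep -o -w -e vmx -e svm /proc/cpuinfo | sort -u
```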

Masatake YAMATO
> 
> -- 
> ldv

