Tuesday, December 24, 2019

Nested Virtualization: Hyper-V guest on Linux KVM host

Posting this because I couldn't find anything in one place detailing how to do this configuration.

If you are running a Windows 10 guest on a Linux KVM virtualization host, you'll find that nested virtualization with Hyper-V is disabled by Windows. This is despite the nested flag for your kvm_intel or kvm_amd module being set to Y.
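You can check that host-side setting by reading the module parameter directly (use kvm_amd in place of kvm_intel on an AMD host; newer kernels print 1 rather than Y):

# On the KVM host: confirm nested virtualization is enabled
cat /sys/module/kvm_intel/parameters/nested
Y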

The reason for this? Windows disables its own virtualization support if it sees that it's running on a CPU that has the hypervisor CPUID flag set.

Assuming you're using libvirt, i.e. you manage your guests with virsh or virt-manager, the way you configure this on a per-guest basis is by disabling the hypervisor feature flag. Run:

virsh edit GUEST

which will allow you to edit the guest config as an XML file. Modify the CPU section as follows:

  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
    <feature policy='disable' name='hypervisor'/>
  </cpu>

With the disable policy set for the hypervisor flag, the virtual CPU will no longer advertise that it's a guest running in a hypervisor environment. Assuming you already have nested virtualization enabled globally (set to Y in /sys/module/kvm_intel/parameters/nested), this will allow Windows 10 to at least try to enable Hyper-V.
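If nested virtualization isn't yet enabled globally, one way to turn it on persistently is with a modprobe options file. This is a sketch for an Intel host (the filename is arbitrary, and the module can only be reloaded once all guests are shut down):

# Enable nested virtualization for kvm_intel across reboots
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
# Reload the module to apply immediately (all guests must be powered off)
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel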

There are still plenty of ways that the guest can determine it's being virtualized (e.g. looking for paravirtualized devices, or vendor strings on devices such as the emulated disks). However, the hypervisor flag is what Hyper-V checks to decide whether it should run at all, provided the other hardware requirements are met.
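For illustration, here's what a Linux guest on the same host would still reveal even with the hypervisor flag disabled (the exact strings depend on your QEMU machine type and devices):

# Paravirtualized devices are still visible on the PCI bus
lspci | grep -i virtio
# The DMI system vendor string typically still reports QEMU
cat /sys/class/dmi/id/sys_vendor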

Saturday, November 2, 2019

PCI DSS v4.0 draft released

The first draft of PCI DSS v4.0 is now available to stakeholders for comment. If you're an accredited stakeholder, be sure to get your comments in before 13 December.

But if you're not such a stakeholder (perhaps even if you are!), consider this: the PCI DSS has a good claim to be the most broadly adopted information security standard not maintained by a government or international standards body. Instead, it's maintained by the PCI SSC, a private company. It's targeted at payment card operations, with the explicit objective of protecting Cardholder Data (CHD), and its stakeholders are considered to be those in the card payment space. Because of its nature, however, the impact of this particular security standard is felt right across compliant organizations.

Whichever security standard is pushing the boundaries will typically be the one that companies align their policies and procedures with. For many organizations that have any contact with card numbers, and particularly for SMEs, that standard is likely to be the PCI DSS. But in recent years companies have also found themselves with growing compliance obligations, such as the GDPR in Europe or the CCPA in California. These create general requirements for protecting personal information, but allow each regulated entity to define the specifics, driven by its own business needs.

Companies now find themselves with two somewhat intersecting sets of information security requirements - some general, some very specific (PCI DSS). The result? We hear stories of companies moving marketing databases into their PCI zone, with the goal of leveraging their existing PCI security controls to streamline the protection of personal data. This would have seemed unthinkable a few years ago, when PCI scope reduction was the accepted norm. It starts to make sense in the context of maintaining a consistent set of corporate security policies and procedures, while addressing these increased data privacy obligations.

The PCI DSS has been considered a shining example of successful industry self-regulation. If something like it, originating from the payment industry, had not gained traction, governments would likely have stepped in to mandate the safeguarding of consumer payment information themselves. But considering the broad impact of any changes to the standard, perhaps it's time to open up participation in the process of revising and maintaining it. It may no longer be sufficient to solicit and accept feedback only from those directly involved with card payments, since changes to the standard clearly have an impact beyond protecting CHD.

Is it time to consider that PCI DSS stakeholders are now the IT security community at large?

Friday, August 16, 2019

Alexa, where's my RAM?

I regularly use very small AWS EC2 instances for things like jump servers. In particular, I use both t2.nano and t3a.nano instances running CentOS 7. Both instance types are sold as having 0.5 GB of RAM.

So I was a little surprised when I took a t2.nano instance, upgraded it to t3a.nano, and found it had less RAM available. This was the exact same OS boot volume, so no changes in kernel or anything like that:

[rah@t2.nano ~]$ grep MemTotal /proc/meminfo 
MemTotal:         497116 kB

[rah@t3a.nano ~]$ grep MemTotal /proc/meminfo 
MemTotal:         478304 kB
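The difference works out as follows:

# 497116 kB - 478304 kB = 18812 kB, i.e. roughly 18 MiB
echo $(( (497116 - 478304) / 1024 )) MiB
18 MiB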

That's a reduction of almost 20 MB of RAM when moving to the newer platform. Normally, I wouldn't care about 20 MB, but when you're dealing with an OS that supposedly has 512 MB to start with, this reduction is not good.

So the question is: why? Both are sold as having 0.5 GB, and since it's all virtual, Amazon can allocate however much they like per instance type. If there is a valid technical reason - e.g. the same amount of RAM presenting differently due to a different memory map - maybe that's OK.
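One way to check would be to compare the kernel's boot-time view of memory on both instances: MemTotal excludes whatever the firmware and kernel reserve, so the boot log shows whether the gap comes from a smaller physical memory map or from larger reservations. A sketch (exact log formats vary by kernel version):

# Compare the firmware-provided physical memory map
dmesg | grep -i e820
# And the kernel's summary of available vs reserved memory
dmesg | grep 'Memory:'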

But this just feels slightly icky to me, as if Amazon are trying to squeeze an extra instance or two onto their next-generation hypervisors with the same physical capacity.