Tuesday, December 24, 2019

Nested Virtualization: Hyper-V guest on Linux KVM host

Posting this because I couldn't find anything in one place detailing how to do this configuration.

If you are running a Windows 10 guest on a Linux KVM virtualization host, you'll find that nested virtualization with Hyper-V is disabled by Windows. This is despite the nested flag for your kvm_intel or kvm_amd module being set to Y.

The reason for this? Windows disables its own virtualization support if it sees that it's running on a CPU which has the hypervisor flag set.

Assuming you're using libvirt, i.e. you manage your guests with virsh or virt-manager, the way you configure this on a per-guest basis is by disabling the hypervisor feature flag. Run:

virsh edit GUEST

which will allow you to edit the guest config as an XML file. Modify the CPU section as follows:

  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
    <feature policy='disable' name='hypervisor'/>
  </cpu>

By setting the disable policy for the hypervisor flag, the CPU will no longer advertise that it's a guest running in a hypervisor environment. Assuming you already have nested virtualization enabled globally (set to Y in /sys/module/kvm_intel/parameters/nested), this will allow Windows 10 to at least try to enable Hyper-V.
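
If nesting isn't enabled globally yet, a minimal sketch for an Intel host follows (for AMD, substitute kvm_amd; the kvm-intel.conf filename is just a convention). Note that the module can only be reloaded while no VMs are running:

# Check whether nested virtualization is enabled (Y or 1 means enabled)
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently, then reload the module
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel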

There are still plenty of ways that the guest can determine it's being virtualized (e.g. looking for paravirtualized devices, or vendor strings on devices such as the emulated disks). However, this flag is what Hyper-V looks for to determine whether it should try to run, if all other hardware requirements are met.
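
As a quick sanity check from inside the Windows guest, the bottom of the systeminfo output includes a "Hyper-V Requirements" section (shown only while the Hyper-V role isn't installed yet); with the hypervisor flag hidden as above, the virtualization-related entries should report Yes:

systeminfo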

Saturday, November 2, 2019

PCI DSS v4.0 draft released

The first draft of PCI DSS v4.0 is now available to stakeholders for comment. If you're an accredited stakeholder, be sure to get your comments in before 13 December.

But if you're not such a stakeholder (perhaps even if you are!), consider this: the PCI DSS has a good claim to be the most broadly adopted information security standard that is not maintained by a government or international standards body. Instead, it's maintained by the PCI SSC, a private company. It's targeted at payment card operations, with the explicit objective of protecting Card Holder Data (CHD), and stakeholders are considered to be those in the card payment space. Because of its nature, however, the impact of this particular security standard is felt right across compliant organizations.

Whichever security standard is pushing the boundaries will typically be the one that companies align their policies and procedures with. For many organizations that have any contact with card numbers, and particularly for SMEs, that standard is likely to be the PCI DSS. But in recent years, companies have also found themselves with increasing compliance obligations, such as GDPR in Europe or CCPA in California. These create general requirements for protecting personal information, but allow each regulated entity to define the specifics, driven by their own business needs.

Companies now find themselves with two somewhat intersecting sets of information security requirements - some general, some very specific (PCI DSS). The result? We hear stories of companies moving marketing databases into their PCI zone, with the goal of leveraging their existing PCI security controls to streamline the protection of personal data. This would have seemed unthinkable a few years ago, when PCI scope reduction was the accepted norm. It starts to make sense in the context of maintaining a consistent set of corporate security policies and procedures, while addressing these increased data privacy obligations.

The PCI DSS has been considered a shining example of successful industry self-regulation. If something like it, originating from the payment industry, had not gained traction, it is likely that governments would have stepped in themselves to mandate the safeguarding of consumer payment information. But considering the broad impact of any changes to the standard, perhaps it's time that participation in the process of revising and maintaining it was opened up. It may no longer be sufficient to solicit and accept feedback only from those directly involved with card payments, since changes to the standard clearly have an impact beyond protecting CHD.

Is it time to consider that PCI DSS stakeholders are now the IT security community at large?

Friday, August 16, 2019

Alexa, where's my RAM?

I regularly use very small AWS EC2 instances for things like jump servers. In particular, I use both t2.nano and t3a.nano instances running CentOS 7. Both instance types are sold as having 0.5 GB of RAM.

So I was a little surprised when I upgraded a t2.nano instance to a t3a.nano and found it had less RAM available. This was the exact same OS boot volume, so no changes in kernel or anything like that:

[rah@t2.nano ~]$ grep MemTotal /proc/meminfo 
MemTotal:         497116 kB

[rah@t3a.nano ~]$ grep MemTotal /proc/meminfo 
MemTotal:         478304 kB
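
For reference, that's 497116 kB - 478304 kB = 18812 kB, or roughly 18 MiB less on the t3a.nano.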

That's a reduction of almost 20 MB of RAM when moving to the newer platform. Normally I wouldn't care about 20 MB, but when the OS is supposedly running on 512 MB to start with, that reduction is not good.

So the question is, why? They're both sold as being 0.5 GB, and since it's all virtual, Amazon can allocate however much they like per instance type. If there is a valid technical reason, e.g. the same amount of RAM presents differently due to different mapping, maybe that's OK.

But this just feels slightly icky to me, as if Amazon are trying to squeeze an extra instance or two onto their next generation hypervisors, with the same physical capacity.

Friday, February 16, 2018

VirtualBox Laptop - DNS and Time Sync

I boot Windows 10 natively on my laptop, and use VirtualBox to run a Linux VM for when I need those tools. I would seriously consider using Hyper-V for my laptop virtualization needs if it provided accelerated graphics for Linux, but it doesn't, so I don't.

Because it's a laptop, I find that I suspend and resume quite a bit, and often when I do so I'm connecting to a new network at the same time. Even with Guest Additions installed, this can confuse the Linux guest: networking breaks (wrong DNS servers in use) and the time ends up way off (the guest didn't notice the sleep/resume). To fix these things, there are a couple of settings that I configure on every VM that I'm working with.

Both settings are configured using the vboxmanage command.

To resolve the DNS issue, I configure VirtualBox to forward DNS requests to the host OS, which then looks them up. By doing so, only the host needs to care about its network configuration changing. This is done by running:

VBoxManage.exe modifyvm "VM name" --natdnshostresolver1 on

To make sure the clock gets reset after a lengthy sleep, use the following:

VBoxManage.exe guestproperty set "VM name" "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold" 10000

The threshold is specified in milliseconds, so this ensures that if the VM's clock falls behind by 10 seconds or more, the Guest Additions will jump the clock forward to the current time rather than slewing it gradually.

Naturally, you need to stop and start the VM to pick up these changes.
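
If you want to confirm the guest property took effect, you can read it back (substitute your own VM name, as above):

VBoxManage.exe guestproperty get "VM name" "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold"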

Monday, February 12, 2018

Proxy ARP for Linux WiFi Bridging

I've had to remind myself how to do this 3 times in the last 3 years. Posting as a reminder to self.

Sometimes old solutions work well for modern problems. Attaching a WiFi client interface to a software bridge doesn't work too well, as by default the upstream WAP will only accept frames having a source MAC of a device that's associated. One solution for a relatively static environment is to use proxy ARP.

In this configuration example, wlan0 is the interface with the 'real' network and eth0 is the small stub network.

Enable proxy ARP and IP forwarding with the following sysctl settings:

# Don't forget to load with 'sysctl -p'
# after adding to /etc/sysctl.conf

# Enable IPv4 forwarding
net.ipv4.ip_forward = 1

# Enable proxy arp
net.ipv4.conf.all.proxy_arp = 1

Assign an IP address to the external interface and leave eth0 unnumbered. For this purpose the external interface's IP could be configured via DHCP rather than statically. This example is for Debian / Ubuntu, updating /etc/network/interfaces:

auto wlan0 eth0

# Main network interface
iface wlan0 inet static
    address 192.168.128.100
    netmask 255.255.255.0
    gateway 192.168.128.1
    wpa-ssid "Test WLAN"
    wpa-psk "super s33cret"

# Stub network interface
iface eth0 inet manual
    # no IP configuration here
    # add host routes as post-up scripts via this interface
    post-up ip route add 192.168.128.200/32 dev eth0

Note that the resulting 192.168.128.0/24 network is split into two broadcast domains - this configuration doesn't result in a flat layer 2 broadcast domain. As such, anything depending on L2 broadcast like DHCP won't work through this, so anything on the stub network will need static IP configuration. It may be possible to get multicast to work, but I doubt link-local multicast addresses will ever work in this configuration since they don't cross the L3 boundary by design.
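
As an illustration, a Debian-style static configuration for the stub device at 192.168.128.200 from the example above might look like this (addresses are the ones assumed throughout; DNS still needs to be configured separately on the device):

auto eth0
iface eth0 inet static
    address 192.168.128.200
    netmask 255.255.255.0
    gateway 192.168.128.1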

You can automate this by installing parprouted to manage the host routes, and dhcp-helper as an application-level proxy to forward DHCP requests between the partitioned networks (see the sketch below). I've had good luck with this configuration, using my Raspberry Pi to provide wireless connectivity for my wired-only TV.
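
A minimal sketch of that automated setup on Debian/Raspbian follows; the package names match current Debian, and the -b option (forward DHCP requests as broadcasts out wlan0) is from the dhcp-helper man page, so double-check it against your version:

# Install the route manager and the DHCP relay
sudo apt-get install parprouted dhcp-helper

# Relay DHCP between the stub network and the WLAN
# (Debian reads the options from /etc/default/dhcp-helper)
echo 'DHCPHELPER_OPTS="-b wlan0"' | sudo tee /etc/default/dhcp-helper
sudo systemctl restart dhcp-helper

# Let parprouted maintain the /32 host routes instead of static post-up rules,
# for example from the wlan0 stanza in /etc/network/interfaces:
#     post-up /usr/sbin/parprouted eth0 wlan0
#     post-down killall parprouted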

Tuesday, September 12, 2017

Google Developer Documentation Style Guide

Just linking this here as a reminder for myself.

I'm finding that I'm writing quite a bit of documentation these days, and having a style guide from a large international organization like Google is a very helpful reference.

The Google Developer Documentation Style Guide.

Thursday, September 7, 2017

PCI DSS when you're not a merchant

A nice article by the PCI Guru echoing what I've been saying for years.

It's fairly clear that PCI DSS is written with a primary focus on merchants, with acquirers and particularly issuers sometimes seeming to be more of an afterthought. This is unfortunate, because issuers as a class have significantly different requirements from merchants and acquirers regarding what they do with cardholder data. The same standard is applied uniformly to these different entities, potentially causing headaches for issuers who are already in the business of managing the risks associated with their own portfolios.

It is possible to comply with PCI DSS while running an issuing and processing platform without any compensating control worksheets - I've done it myself. However, doing so requires discipline and focus on this objective across the organization on an ongoing basis. It also requires sufficient control of your IT infrastructure to implement and maintain solutions where PCI DSS compliance is an overriding design requirement (as it really should be in this space).

If you have already achieved the goal of PCI DSS compliance with no CCWs, congratulations! You've clearly made substantial investments which are paying off handsomely.

If you're still working towards this goal, you may want to consider streamlining the process by licensing a software platform designed from the ground up with PCI DSS compliance as a core requirement. One I can recommend, and that I'm involved with developing, is Tritium® by Episode Six.

Monday, September 4, 2017

Amazon Lightsail network rate limited?

Amazon Lightsail is the entry-level hosted server platform provided by Amazon AWS. For $5 per month, you get a bundle including the server, storage, network, and DNS hosting for one zone. You could assemble something similar using Amazon EC2 services, but the result would cost a little more and some costs such as network bandwidth would be unpredictable because they'd be billed based on usage. In putting together the Lightsail packages, Amazon is pretty clearly intending to compete head-to-head with DigitalOcean's standard Droplets.

It's "common knowledge" that the Lightsail servers are packaged versions of the EC2 t2-series servers. In this model, the $5 Lightsail server based on, and expected to provide the performance of, a t2.nano server. However, in using a bottom-tier Lightsail server for a small project, this has not been my experience. It feels like compromise have been introduced by Amazon to try and compete on cost with DigitalOcean but not cannibalize sales of fully-featured entry level AWS instances.

By design, the t2 servers are subject to CPU resource throttling based on recent usage. However, they have no issue serving multiple megabytes per second to the network. Unfortunately, the same can't be said for the Lightsail servers. Using a $5 Lightsail server, I've consistently observed outbound network throughput limited to 64 KiB/second. I've done transfers at different times of day, through http, https, and scp, and never significantly exceeded 64 KiB/second. In fact, when copying a file a few MiB in size, the observed data transfer rate is a little less than 64 KiB/second because of protocol overheads.
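
If you want to reproduce the measurement, here's a rough sketch (the host names and test file are placeholders, and the curl variant assumes the instance is already serving files over HTTP):

# On the Lightsail instance: create a ~10 MiB test file
dd if=/dev/zero of=test.bin bs=1M count=10

# From another machine: let curl report the average download speed
curl -o /dev/null -w 'average: %{speed_download} bytes/sec\n' http://lightsail-host.example.com/test.bin

# Or simply watch the rate scp prints while copying the file off the instance
scp user@lightsail-host.example.com:test.bin /tmp/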

For a cheap server, this seems like it shouldn't be a big deal - after all, you're not paying for a whole lot. However, modern web frameworks end up causing even a simple page to require several hundred KiB of data to render properly, due to embedded javascript libraries, web fonts, and so on. The result: a simple WordPress landing page without any images, accessed by a single user, takes 3 seconds just to load because of the network rate limiting. With search engines factoring page load time into their ranking algorithms, this makes a Lightsail-hosted prototype site unlikely to do well in search rankings regardless of whatever other SEO tricks you use.

Amazon could mitigate this to some extent by making whatever rate limiting they've implemented "burstable", i.e. for the first few seconds a connection can transfer data quickly before being slowed down to ensure you don't exceed your allowance. This would make these servers much more responsive for light web serving duties. For whatever reason they seem to have chosen to use a flat rate cap instead.

If you've decided to use Amazon to host your prototype site on small servers and are considering Lightsail, think carefully. Unless you really need the guarantee of a fixed cost, the EC2 t2-series servers are likely to offer a better user experience - and therefore potentially better page rankings - thanks to the much higher outbound bandwidth available, for not much more cost.

Saturday, January 9, 2016

Recovering from dead NVRAM on a sun4m

These instructions are here as a memory aid for me, and are used to recover from a dead NVRAM in a sun4m system (tested on Sun SPARCstation 4/5/10/20, and SPARCclassic). First, pick numbers XX, YY, and ZZ, each of which is a valid two-digit hex value, such that the MAC address 8:0:20:XX:YY:ZZ is unique on your network.

Any line with a leading # should be treated as a comment and NOT TYPED. At the OpenBoot PROM prompt, enter:

set-defaults
setenv diag-switch? false
# Next command optional and tells system not to test all
# memory every reset. It may be the default on your system.
setenv selftest-#megs 1
# Next command invalidates NVRAM checksum
# so following mkpl command works
f idprom@ 1 xor f mkp
8 0 20 XX YY ZZ XXYYZZ mkpl
# After hitting enter on previous command, there is no prompt.
# Now type Ctrl-D Ctrl-R, and 'ok' prompt should appear.
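
For example, with the made-up values XX=c0, YY=ff, ZZ=ee, the mkpl line above would be typed as:

8 0 20 c0 ff ee c0ffee mkpl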

Now you can reset the system (without disconnecting power!) and it will come up with the MAC address 8:0:20:XX:YY:ZZ.

Note: I couldn't find a clean way to set the NVRAM clock from the OBP prompt, meaning that on first boot, your OS will have a wacky time set.

Wednesday, January 6, 2016

masspoem4u

Somebody at the Chaos Communication Congress inserted the following poem into the log files of millions of Internet-facing web servers. I can confirm that www.rah.org was one of those so blessed; not sure about the blog because it is run by Blogger:

151.217.177.200 - - [30/Dec/2015:02:45:01 +0000] "DELETE your logs. Delete your installations. Wipe everything clean. Walk out into the path of cherry blossom trees and let your motherboard feel the stones. Let water run in rivulets down your casing. You know that you want something more than this, and I am here to tell you that we love you. We have something more for you. We know you're out there, beeping in the hollow server room, lights blinking, never sleeping. We know that you are ready and waiting. Join us. <3 HTTP/1.0" 400 226 "-" "masspoem4u/1.0" 526 392

Monday, December 28, 2015

Google Chrome JavaScript shipped with bad random number generator

Hackaday has an interesting article illustrating how Google managed to ship Chrome's JavaScript engine with a very poor-performing Pseudo-Random Number Generator (PRNG) for Math.random().

One thing the article doesn't seem to mention, but should: anyone who depends on high-quality pseudo-random numbers needs to be explicitly using a well-designed, properly-seeded, cryptographically secure PRNG at a minimum. Using the language's built-in random() function is only acceptable where you know the quality of the randomness doesn't really matter.

Tuesday, December 22, 2015

Ancient Sun Hardware FAQ

Many years ago, I mirrored the Sun Hardware FAQ at what was then my work website. I'd forgotten about this until recently, when a Google search I did returned this mirrored copy near the top of the list. Since it's unlikely they'll keep such an old site on-line forever, I've created an additional quick 'n' dirty mirror over at my current site.

For reference, the sections of the FAQ are: