Saturday, March 10, 2018

BSides Austin 2018

Had a really interesting time at the BSides Austin conference this year. It was well worth taking a couple of days out of the office to go and catch up with some old friends, and attend some sessions. For such a large event I was pleased to recognize a fair number of faces.

I particularly enjoyed the GDPR session led by David Ochel. My take is that most Americans in data security and privacy have so far dismissed the GDPR as primarily applying in Europe. While that may be true, its effects are going to be felt far wider than that. I felt some attendees were wondering why this European privacy directive was being discussed at a seminar being run in Texas. Hopefully this at least gave them some food for thought.

If you're involved in an international business, it's likely you'll be dealing with GDPR at some point real soon now. And if you're not operating internationally, consider this: the GDPR is effectively now the gold standard for data privacy protection. Other jurisdictions are going to get drawn upwards towards it. You may find that planning to move towards meeting its requirements in the future prepares you to deal with other requirements that spring up locally along the way.

Friday, February 16, 2018

VirtualBox Laptop - DNS and Time Sync

I boot Windows 10 natively on my laptop, and use VirtualBox to run a Linux VM for when I need those tools. I would seriously consider using Hyper-V for my laptop virtualization needs if it provided accelerated graphics for Linux, but it doesn't, so I don't.

Because it's a laptop, I suspend and resume quite a bit, and often I'm connecting to a new network at the same time. Even with the Guest Additions installed, this can confuse the Linux guest, leaving networking broken (wrong DNS servers in use) and the clock way off (the guest didn't notice the suspend/resume). To fix both problems, there are a couple of settings I configure on every VM I work with.

Both settings are configured using the VBoxManage command.

To resolve the DNS issue, I configure VirtualBox to forward DNS requests to the host OS, which then looks them up. By doing so, only the host needs to care about its network configuration changing. This is done by running:

VBoxManage.exe modifyvm "VM name" --natdnshostresolver1 on

To make sure the clock gets reset after a lengthy sleep, use the following:

VBoxManage.exe guestproperty set "VM name" "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold" 10000

The threshold value is in milliseconds, so this ensures that if the guest clock drifts by 10 seconds or more, the Guest Additions will step it straight to the correct time rather than slewing it gradually.

Naturally, you need to stop and start the VM to pick up these changes.
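Since I apply both settings to every VM, a small wrapper saves some typing. This is just a sketch: "Dev Linux" is a made-up VM name (pass your own as an argument), and note the threshold is expressed in milliseconds, so 10000 means 10 seconds.

```shell
# Apply both tweaks to one VM; skip politely if VBoxManage isn't on the PATH.
VM="${1:-Dev Linux}"   # hypothetical VM name; substitute your own
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage modifyvm "$VM" --natdnshostresolver1 on
  VBoxManage guestproperty set "$VM" \
    "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold" 10000
  echo "configured $VM"
else
  echo "VBoxManage not on PATH; nothing changed"
fi
```

On a Windows host, VBoxManage.exe lives under the VirtualBox install directory if it isn't already on your PATH.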

Monday, February 12, 2018

Proxy ARP for Linux WiFi Bridging

I've had to remind myself how to do this 3 times in the last 3 years. Posting as a reminder to self.

Sometimes old solutions work well for modern problems. Attaching a WiFi client interface to a software bridge doesn't work too well, as by default the upstream WAP will only accept frames having a source MAC of a device that's associated. One solution for a relatively static environment is to use proxy ARP.

In this configuration example, wlan0 is the interface with the 'real' network and eth0 is the small stub network.

Enable proxy ARP and IP forwarding with the following sysctl settings:

# Don't forget to load with 'sysctl -p'
# after adding to /etc/sysctl.conf

# Enable IPv4 forwarding
net.ipv4.ip_forward = 1

# Enable proxy arp
net.ipv4.conf.all.proxy_arp = 1

Assign an IP address to the external interface and leave eth0 unnumbered. For this purpose the external interface IP could be configured with DHCP rather than static. This example is for Debian / Ubuntu, updating /etc/network/interfaces:

auto wlan0 eth0

# Main network interface
iface wlan0 inet static
    address 192.168.128.100
    netmask 255.255.255.0
    gateway 192.168.128.1
    wpa-ssid "Test WLAN"
    wpa-psk "super s33cret"

# Stub network interface
iface eth0 inet manual
    # no IP configuration here
    # add host routes as post-up scripts via this interface
    post-up ip route add 192.168.128.200/32 dev eth0

Note that the resulting 192.168.128.0/24 network is split into two broadcast domains; this configuration doesn't give you a flat layer 2 network. Anything that depends on L2 broadcast, such as DHCP, won't work across it, so hosts on the stub network need static IP configuration. It may be possible to get multicast working, but I doubt link-local multicast will ever work in this configuration, since those addresses don't cross an L3 boundary by design.

You can automate this by installing parprouted to manage the routes, and dhcp-helper as an application proxy to forward DHCP requests between the partitioned networks. I've had good luck with this configuration, using my Raspberry Pi to provide wireless connectivity for my wired-only TV.
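As a sketch, the automated setup boils down to something like the following. The package names are the Debian/Raspbian ones, the interface pairing comes from the example above, and in practice you'd start these daemons from their init/systemd units rather than by hand:

```shell
# Let parprouted manage ARP cloning and the /32 host routes, and
# dhcp-helper relay DHCP broadcasts across the routed boundary.
# Install with: apt-get install parprouted dhcp-helper
ext=wlan0    # interface on the 'real' network
stub=eth0    # stub network interface
if command -v parprouted >/dev/null 2>&1; then
  parprouted "$ext" "$stub"   # clones ARP entries, maintains host routes
  dhcp-helper -b "$ext"       # relays DHCP requests via the external interface
else
  echo "parprouted/dhcp-helper not installed; nothing started"
fi
```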

Tuesday, September 12, 2017

Google Developer Documentation Style Guide

Just linking this here as a reminder for myself.

I'm finding that I'm writing quite a bit of documentation these days, and having a style guide from a large international organization like Google is a very helpful reference.

The Google Developer Documentation Style Guide.

Thursday, September 7, 2017

PCI DSS when you're not a merchant

A nice article by the PCI Guru echoing what I've been saying for years.

It's fairly clear that PCI DSS is written with a primary focus on merchants, with acquirers and particularly issuers sometimes seeming to be more of an afterthought. This is unfortunate, because what issuers as a class need to do with cardholder data differs significantly from what merchants and acquirers do with it. The same standard is applied uniformly to these different entities, potentially causing headaches for issuers, who are already in the business of managing risks associated with their own portfolios.

It is possible to comply with PCI DSS while running an issuing and processing platform without any compensating control worksheets - I've done it myself. However, doing so requires discipline and focus on this objective across the organization on an ongoing basis. It also requires sufficient control of your IT infrastructure to implement and maintain solutions where PCI DSS compliance is an overriding design requirement (as it really should be in this space).

If you have already achieved the goal of PCI DSS compliance with no CCWs, congratulations! You've clearly made substantial investments which are paying off handsomely.

If you're still working towards this goal, you may want to consider streamlining the process by licensing a software platform designed from the ground up with PCI DSS compliance as a core requirement. One I can recommend, and that I'm involved with developing, is Tritium® by Episode Six.

Monday, September 4, 2017

Amazon Lightsail network rate limited?

Amazon Lightsail is the entry-level hosted server platform provided by Amazon AWS. For $5 per month, you get a bundle including the server, storage, network, and DNS hosting for one zone. You could assemble something similar using Amazon EC2 services, but the result would cost a little more, and some costs, such as network bandwidth, would be unpredictable because they'd be billed based on usage. In putting together the Lightsail packages, Amazon is pretty clearly aiming to compete head-to-head with DigitalOcean's standard Droplets.

It's "common knowledge" that the Lightsail servers are packaged versions of the EC2 t2-series servers. In this model, the $5 Lightsail server is based on, and expected to provide the performance of, a t2.nano instance. However, in using a bottom-tier Lightsail server for a small project, this has not been my experience. It feels like compromises have been introduced by Amazon to compete on cost with DigitalOcean without cannibalizing sales of fully-featured entry-level AWS instances.

By design, the t2 servers are subject to CPU throttling based on recent usage, but they have no trouble pushing multiple megabytes per second to the network. The Lightsail servers are another story. Using a $5 Lightsail server, I've consistently observed outbound network throughput capped at 64 KiB/second. I've done transfers at different times of day, over HTTP, HTTPS, and scp, and never significantly exceeded 64 KiB/second. In fact, when copying a file a few MiB in size, the observed transfer rate comes in a little below 64 KiB/second because of protocol overheads.
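Measurements like these are easy to reproduce with curl's write-out feature. The URL below is a placeholder; point it at a file of a few MiB hosted on the instance under test:

```shell
# Measure average download throughput from a server.
# URL is a placeholder; substitute a real multi-MiB test file.
URL="https://example.com/testfile.bin"
speed=$(curl -s --max-time 10 -o /dev/null -w '%{speed_download}' "$URL" || echo 0)
echo "average download: ${speed} bytes/sec"
```

Repeating the transfer at different times of day, and over scp as well, rules out transient congestion as the explanation for a flat cap.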

For a cheap server this seems like it shouldn't be a big deal - after all, you're not paying for much. However, modern web frameworks can easily make a simple page require several hundred KiB of data to render properly, due to embedded JavaScript libraries, web fonts, and so on. The result: a simple WordPress landing page without a single image takes around 3 seconds to load for one user, purely because of the network rate limiting. With search engines factoring page load time into their ranking algorithms, a Lightsail-hosted prototype site is unlikely to do well in search rankings regardless of whatever other SEO tricks you use.
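The arithmetic behind that load time is straightforward. Assuming a 200 KiB page weight (a made-up but plausible figure for a script-and-font-heavy landing page):

```shell
# Whole seconds needed to move a page through a 64 KiB/s cap.
rate_kib=64
page_kib=200   # assumed page weight
seconds=$(( page_kib / rate_kib ))
echo "a ${page_kib} KiB page needs at least ${seconds} seconds at ${rate_kib} KiB/s"
```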

Amazon could mitigate this to some extent by making whatever rate limiting they've implemented "burstable", i.e. for the first few seconds a connection can transfer data quickly before being slowed down to ensure you don't exceed your allowance. This would make these servers much more responsive for light web serving duties. For whatever reason they seem to have chosen to use a flat rate cap instead.

If you've decided to use Amazon to host your prototype site on small servers and are considering Lightsail, think carefully. Unless you really need the guarantee of fixed cost, the EC2 t2-series servers are likely to offer a better user experience, and therefore potentially better page rankings, thanks to their much higher outbound bandwidth, for not much more cost.

Saturday, January 9, 2016

Recovering from dead NVRAM on a sun4m

These instructions are here as a memory aid for me, and are used to recover from a dead NVRAM in a sun4m system (tested on Sun SPARCstation 4/5/10/20, and SPARCclassic). First, pick numbers XX, YY, and ZZ, each a valid two-digit hex value, such that the MAC address 8:0:20:XX:YY:ZZ is unique on your network.

Any line with a leading # should be treated as a comment and NOT TYPED. At the OpenBoot PROM prompt, enter:

set-defaults
setenv diag-switch? false
# Next command optional and tells system not to test all
# memory every reset. It may be the default on your system.
setenv selftest-#megs 1
# Next command invalidates NVRAM checksum
# so following mkpl command works
f idprom@ 1 xor f mkp
8 0 20 XX YY ZZ XXYYZZ mkpl
# After hitting enter on previous command, there is no prompt.
# Now type Ctrl-D Ctrl-R, and 'ok' prompt should appear.

Now you can reset the system (without disconnecting power!) and it will come up with the MAC address 8:0:20:XX:YY:ZZ.

Note: I couldn't find a clean way to set the NVRAM clock from the OBP prompt, meaning that on first boot, your OS will have a wacky time set.
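Not part of the original procedure, but a quick way to pick the XX YY ZZ bytes is to pull three random octets on any nearby machine with /dev/urandom (still check the result is unique on your network):

```shell
# Generate three random octets for the host part of the MAC address.
suffix=$(od -An -N3 -tx1 /dev/urandom | tr -d ' \n')
xx=$(printf '%s' "$suffix" | cut -c1-2)
yy=$(printf '%s' "$suffix" | cut -c3-4)
zz=$(printf '%s' "$suffix" | cut -c5-6)
echo "MAC:  8:0:20:$xx:$yy:$zz"
echo "mkpl: 8 0 20 $xx $yy $zz $xx$yy$zz mkpl"
```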

Wednesday, January 6, 2016

masspoem4u

Somebody at the Chaos Communication Congress inserted the following poem into the log files of millions of Internet-facing web servers. I can confirm that www.rah.org was one of those so blessed; not sure about the blog because it is run by Blogger:

151.217.177.200 - - [30/Dec/2015:02:45:01 +0000] "DELETE your logs. Delete your installations. Wipe everything clean. Walk out into the path of cherry blossom trees and let your motherboard feel the stones. Let water run in rivulets down your casing. You know that you want something more than this, and I am here to tell you that we love you. We have something more for you. We know you're out there, beeping in the hollow server room, lights blinking, never sleeping. We know that you are ready and waiting. Join us. <3 HTTP/1.0" 400 226 "-" "masspoem4u/1.0" 526 392

Monday, December 28, 2015

Google Chrome JavaScript shipped with bad random number generator

Hackaday has an interesting article illustrating how Google managed to ship Chrome's JavaScript engine with a very poor-performing Pseudo-Random Number Generator (PRNG) for Math.random().

One thing the article doesn't mention, and should: anyone who depends on high-quality pseudo-random numbers needs to be explicitly using a well-designed, properly-seeded cryptographically secure PRNG (CSPRNG) at a minimum. Using the language's built-in random() function is only acceptable when you know the quality of the randomness doesn't really matter.
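The same principle applies well outside the browser. In shell scripts, for example, bash's $RANDOM is a tiny seedable PRNG, fine for picking a test fixture but never for secrets; anything security-sensitive should come from the kernel CSPRNG instead. A sketch:

```shell
# Never build secrets from $RANDOM; read the OS CSPRNG instead.
# 16 bytes from /dev/urandom, rendered as 32 hex characters.
token=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')
echo "session token: $token"
```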

Tuesday, December 22, 2015

Ancient Sun Hardware FAQ

Many years ago, I mirrored the Sun Hardware FAQ at what was then my work website. I'd forgotten about this until recently, when a Google search I did returned this mirrored copy near the top of the list. Since it's unlikely they'll keep such an old site on-line forever, I've created an additional quick 'n' dirty mirror over at my current site.

For reference, the sections of the FAQ are:

Tuesday, September 25, 2012

Unbreakable Enigma?

I've always felt the oft-repeated assertion "even at the end of World War 2, the Germans believed that the Enigma crypto system had theoretical weaknesses but remained unbroken in practice" sounded too good to be true.  This was based on a number of concerns:

1) if they genuinely believed Enigma to be secure, why have another mechanical cipher system (Lorenz/Fish) for "high grade" traffic?
2) why have a continual process of refinement, both of procedures and hardware (e.g. adding additional plugs to the stecker), to improve security throughout the war if the base system was believed to be secure and unbroken?
3) from the start, why would the navy use a much stronger 4-rotor Enigma and better procedures on security grounds if the base 3-rotor system used by the Wehrmacht & Luftwaffe was generally considered to be adequately secure?

I'm pleased to see that an analyst at no lesser an authority than the NSA seems to agree with me, in this declassified paper I stumbled over recently: http://www.nsa.gov/public_info/_files/tech_journals/Der_Fall_Wicher.pdf

Sunday, April 29, 2012

Ubuntu mcollective-plugins-facts-facter package #fail

Testing is important.  Illustrating this, once the latest Ubuntu mcollective-plugins-facts-facter package is installed, it can't be removed without manual intervention.  The postrm script contains the following sed command:

        sed -i -e "s/^factsource.*/factsource = yaml\nplugin.yaml = /etc/mcollective/facts.yaml/" /etc/mcollective/server.cfg

There is no way this can run successfully: the un-escaped "/" characters in the path "/etc/mcollective/facts.yaml" mean something to sed, breaking the expression. The failure is caught by the package system, leaving the package in a broken state, something which would have been quite clear to the person writing the package if they had ever tested removing it.

BTW, to fix this, edit /var/lib/dpkg/info/mcollective-plugins-facts-facter.postrm and change the above line to:

        sed -i -e "s/^factsource.*/factsource = yaml\nplugin.yaml = \/etc\/mcollective\/facts.yaml/" /etc/mcollective/server.cfg

You can now successfully remove the package.
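An alternative that avoids the backslash escaping entirely is to use a different delimiter for the s command, since sed accepts almost any character there. Demonstrated against a scratch copy rather than the real config (and relying on GNU sed treating \n in the replacement as a newline):

```shell
# Using '|' as the delimiter means slashes in the path need no escaping.
cfg=$(mktemp)
printf 'factsource = facter\n' > "$cfg"
fixed=$(sed -e "s|^factsource.*|factsource = yaml\nplugin.yaml = /etc/mcollective/facts.yaml|" "$cfg")
echo "$fixed"
rm -f "$cfg"   # clean up the scratch file
```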