Somebody at the Chaos Communication Congress inserted the following poem into the log files of millions of Internet-facing web servers. I can confirm that www.rah.org was one of those so blessed; not sure about the blog because it is run by Blogger:
151.217.177.200 - - [30/Dec/2015:02:45:01 +0000] "DELETE your logs. Delete your installations. Wipe everything clean. Walk out into the path of cherry blossom trees and let your motherboard feel the stones. Let water run in rivulets down your casing. You know that you want something more than this, and I am here to tell you that we love you. We have something more for you. We know you're out there, beeping in the hollow server room, lights blinking, never sleeping. We know that you are ready and waiting. Join us. <3 HTTP/1.0" 400 226 "-" "masspoem4u/1.0" 526 392
Wednesday, January 6, 2016
Wednesday, December 30, 2015
Google confirms Android N won't use Oracle's Java APIs
Regarding Thom's commentary on this: "I cannot endorse that message, but I do acknowledge it" 8-)
Monday, December 28, 2015
Google Chrome JavaScript shipped with bad random number generator
Hackaday has an interesting article illustrating how Google managed to ship Chrome's JavaScript engine with a very poor-performing Pseudo-Random Number Generator (PRNG) for Math.random().
One thing the article doesn't seem to mention, and should: anyone who depends on high-quality pseudo-random numbers needs to explicitly use a well-designed, properly seeded, cryptographically secure PRNG at minimum. Using the language's built-in random() function is only acceptable where you know the quality of the randomness doesn't really matter.
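As a general illustration of the point (not something from the article): on Linux, the shell can read the kernel's CSPRNG directly from /dev/urandom, which is a far better source for anything security-sensitive than a language's default random() function.

```shell
# Read 16 cryptographically strong random bytes from the kernel's
# CSPRNG and print them as a hex string. Contrast this with a weak
# built-in PRNG, which is fine only where quality doesn't matter.
head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'
echo
```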
Tuesday, December 22, 2015
Ancient Sun Hardware FAQ
Many years ago, I mirrored the Sun Hardware FAQ at what was then my work website. I'd forgotten about this until recently, when a Google search I did returned this mirrored copy near the top of the list. Since it's unlikely they'll keep such an old site on-line forever, I've created an additional quick 'n' dirty mirror over at my current site.
For reference, the sections of the FAQ are:
Tuesday, September 25, 2012
Unbreakable Enigma?
I've always felt the oft-repeated assertion "even at the end of World War 2, the Germans believed that the Enigma crypto system had theoretical weaknesses but remained unbroken in practice" sounded too good to be true. This was based on a number of concerns:
1) if they genuinely believed Enigma to be secure, why have another mechanical cipher system (Lorenz/Fish) for "high grade" traffic?
2) why have a continual process of refinement, both of procedures and hardware (e.g. adding additional plugs to the stecker), to improve security throughout the war if the base system was believed to be secure and unbroken?
3) from the start, why would the navy use a much stronger 4-rotor Enigma and better procedures on security grounds if the base 3-rotor system used by the Wehrmacht & Luftwaffe was generally considered to be adequately secure?
I'm pleased to see that an analyst at no lesser an authority than the NSA seems to agree with me, in this declassified paper I stumbled over recently http://www.nsa.gov/public_info/_files/tech_journals/Der_Fall_Wicher.pdf
Sunday, April 29, 2012
Ubuntu mcollective-plugins-facts-facter package #fail
Testing is important. As an illustration: once the latest Ubuntu mcollective-plugins-facts-facter package is installed, it cannot be removed without manual intervention. The postrm script contains the following sed command:
sed -i -e "s/^factsource.*/factsource = yaml\nplugin.yaml = /etc/mcollective/facts.yaml/" /etc/mcollective/server.cfg
There is no way this command can run successfully: the un-escaped "/" characters in the path "/etc/mcollective/facts.yaml" are significant to sed, breaking the expression. The failure is caught by the package system, leaving the package in a broken state. This would have been quite obvious to the person writing the package if they had ever tested removing it.
BTW, to fix this, edit /var/lib/dpkg/info/mcollective-plugins-facts-facter.postrm and change the above line to:
sed -i -e "s/^factsource.*/factsource = yaml\nplugin.yaml = \/etc\/mcollective\/facts.yaml/" /etc/mcollective/server.cfg
You can now successfully remove the package.
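An alternative fix, sketched here against a throwaway copy of the file rather than the real /etc/mcollective/server.cfg, is to pick a different sed delimiter so the slashes in the path need no escaping at all:

```shell
# Create a throwaway stand-in for /etc/mcollective/server.cfg
printf 'factsource = facter\n' > /tmp/server.cfg

# With '|' as the s||| delimiter, '/' in the replacement is just text;
# GNU sed expands the \n in the replacement to a real newline.
sed -i -e "s|^factsource.*|factsource = yaml\nplugin.yaml = /etc/mcollective/facts.yaml|" /tmp/server.cfg

cat /tmp/server.cfg
```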
Wednesday, December 28, 2011
vi and the Kinesis Advantage Pro keyboard
I use a Kinesis Advantage Pro keyboard wherever possible because I really enjoy the way that it feels to type on, and I believe that it's important to look after your hands. I'm principally a Unix/Linux user, and use vi as my primary editor. In addition, other software that I use regularly such as Google mail (the gmail web interface) uses vi keystrokes (specifically j and k) for navigating. This has led me to make a couple of customizations to the key layout on all my keyboards.
Customizing the Kinesis keyboard is much easier than creating a custom keymap on Linux or Windows. The keys are remapped in the keyboard itself, which means that in the event somebody needs to use your computer, they can plug in a regular USB keyboard and it will function as they expect.
Firstly, I swap the Escape and Delete keys. The Escape key is used heavily with vi, but is a tiny rubber key which is prone to being ignored when pressed by your pinkie. In contrast, the Delete key is rarely used in Unix, and is a huge key with a proper switch operated by your left thumb. Swapping the role of these two causes no loss of functionality, but makes using vi much easier as you no longer have to take your left hand off the keyboard to press Esc after every edit.
The second swap is the up and down arrow keys. The reason for this is a little more subtle. By default, the 'Up' arrow key is in line with the 'j' key, which in vi is used to move down. The 'Down' arrow key is in line with the 'k' key, which is used to move up. Switching between vi and software that uses the arrow keys becomes very difficult, certainly for me. By swapping the arrow keys, the same fingers are used to move up and down in both vi and every other application, so you don't have to mentally remap the functions every time you switch application.
One site that I've found very useful for stuff like this is RSIguy. There's a bunch of info there regarding selecting the best ergonomic keyboards for your needs.
Saturday, December 17, 2011
Linux KVM virtualization and PCI-DSS
With the release of Red Hat Enterprise Linux 6, KVM is now the default virtualization solution in both the RHEL and Ubuntu worlds. With KVM virtualization, the Linux kernel itself acts as a hypervisor to manage the host hardware, allocating resources to the guest virtual machines. This is quite different to VMware, where a small, custom-written hypervisor manages the host machine hardware, and management software runs in a Linux-like environment on that hypervisor.
This move to using a general purpose OS as the hypervisor has some significant advantages, as the full capabilities of Linux (e.g. RAID, encrypted storage, vast hardware support) can be leveraged when building a solution. Also, relative to VMware, there can be significant cost advantages to using Linux.
However, in a high security environment, moving to a general purpose OS as the hypervisor can introduce additional risks which need to be mitigated. A custom written hypervisor like VMware is designed to do one thing: run VMs. In principle, as long as secure account management policies are followed, patches are installed in a timely manner, and management access is restricted to secure hosts, then the hypervisor is likely to be 'secure'. Host environment security is mostly a function of securing the guest virtual machines themselves.
With a Linux KVM hypervisor, the situation can be very different. Modern Linux distributions provide all sorts of software that are invaluable when deployed appropriately, but which would be poor candidates for installation on a host intended to be a dedicated hypervisor. In this environment, any unnecessary services are just potential vulnerabilities to be exploited in gaining unauthorized access to the host. Once an intruder gains access to the hypervisor, there are many tools which can be used to extract information from a running VM without security tools running inside the guest being aware that anything is happening.
To illustrate this, I've created the following scenario:
1) a host running Ubuntu 11.10 as a KVM hypervisor called 'kvm-host'
2) a VM running Ubuntu 11.10 called 'iso8583', simulating a transaction processor
3) a VM running Ubuntu 11.10 called 'switch' that will connect to iso8583 and send messages
On iso8583, the role of the processing software is simulated by the 'echo' service in inetd. This is essentially the most trivial network server imaginable: you create a TCP connection to the service, and any data that you send is echoed back to you. The data is not logged or processed in any other way, just received by the server and echoed back.
For this example, I'm assuming that our processing BIN is 412356, so all PANs (card numbers) will be of the form '412356' + 10 more digits.
We start by connecting from switch to iso8583 and sending a couple of messages (just fake PANs, in this case). The 'netcat' utility is used to connect to the remote service, a PAN is sent to the processor, which is then echoed back:
18:05:43 switch:~> echo 4123560123456789 | nc iso8583 echo
4123560123456789
18:05:47 switch:~> echo 4123569876543210 | nc iso8583 echo
4123569876543210
Now, on kvm-host (the hypervisor), we dump a copy of the full memory of the virtual machine using the useful gdb tool 'gcore'. Note that gcore produces a core dump of any process (including a virtual machine), without actually terminating the process:
# Get the PID of the VM called iso8583
18:06:05 kvm-host:~> pgrep -f iso8583
18170
# Now get a copy of the in-memory process
18:06:09 kvm-host:~> sudo gcore 18170
[Thread debugging using libthread_db enabled]
[New Thread 0x7f5b8542e700 (LWP 18244)]
[New Thread 0x7f5b89436700 (LWP 18241)]
[New Thread 0x7f5b8ac39700 (LWP 18239)]
[New Thread 0x7f5b87c33700 (LWP 18238)]
[New Thread 0x7f5b89c37700 (LWP 18236)]
[New Thread 0x7f5b86430700 (LWP 18216)]
[New Thread 0x7f5b87432700 (LWP 18214)]
[New Thread 0x7f5b88c35700 (LWP 18205)]
[New Thread 0x7f5b9d9d4700 (LWP 18180)]
0x00007f5ba2213913 in select () from /lib/x86_64-linux-gnu/libc.so.6
Saved corefile core.18170
The file core.18170 now contains a copy of the memory from within the VM - it's as if we lifted the DRAM chips out of a live system and extracted their contents to a file. We now perform a trivial analysis of the core using the strings tool to extract all ASCII text strings from the dump, then look for anything that could be one of our PANs, i.e. anything of the form 412356+10 digits using egrep:
18:06:14 kvm-host:~> strings core.18170 | egrep '412356[[:digit:]]{10}'
4123569876543210
>4123560123456789
4123569876543210
4123569876543210
Sure enough, both PANs are there, even though the server software never attempted to log them to disk, and even though the process which was handling them exited the moment we disconnected. This exposure would not be possible to catch by any software running inside the guest VM, since the exposure is occurring outside of the VM. Therefore, the only way to catch this is by monitoring all actions taken on the hypervisor itself, and the only way to prevent it is to securely lock down the hypervisor.
Worse, if those had been real ISO8583 messages, then the full content of the message would likely be recoverable. This includes what the PCI SSC considers to be 'Sensitive Authentication Data', defined as full track data, PIN block and CAV2/CVC2/CVV2/CID. This is data which you're never allowed to store, and which this echo server software (rightly) doesn't attempt to save to disk. But it's still in memory for some period of time until overwritten, and can be pulled silently from the hypervisor environment.
In a similar vein, any encryption keys used to perform software encryption within a VM would be present in the VM dump. Finding these keys would be trickier than simply grepping for a text string, but it would be possible. One approach would involve walking through the image, looking for aligned blocks of data with the size of the key that could plausibly be valid keys (e.g. a randomly generated key is unlikely to contain a NULL byte), and then testing them. This is still many orders of magnitude easier than attempting to break a key by brute force.
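As a rough, hypothetical sketch of that scan (not a real key-recovery tool): walk an image in aligned 16-byte blocks and keep only the blocks containing no NULL byte as candidate AES-128 keys. Here a tiny synthetic file stands in for a real core dump:

```shell
# Synthetic 32-byte "memory image": 16 printable bytes, then 16 NULs
printf 'AAAAAAAAAAAAAAAA' > /tmp/image.bin
head -c 16 /dev/zero >> /tmp/image.bin

# Dump as hex, then keep aligned 16-byte blocks with no 00 byte:
# these are the candidate key blocks worth testing further.
od -An -v -tx1 /tmp/image.bin |
awk 'BEGIN { off = 0 }
     { for (i = 1; i <= NF; i++) {
         blk = blk " " $i; n++
         if ($i == "00") zero = 1
         if (n == 16) {
           if (!zero) printf "offset %d:%s\n", off, blk
           off += 16; blk = ""; n = 0; zero = 0 } } }'
```

Only the first block (all 0x41) survives the filter; the all-NUL block is discarded.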
I actually consider myself a strong proponent of Linux, and it is not my intention to put anybody off using Linux as a hypervisor in a secure environment. I am hoping to draw attention to the fact that a standard Linux distribution cannot and should not be treated as just another 'appliance' hypervisor. The hypervisor is more critical to your security posture than most other infrastructure components, since a hypervisor compromise allows every system running on top of it to be silently compromised. The hypervisor should be thoughtfully deployed as a first-class security citizen, and must be secured as any other host would be, including hardened configuration standards, FIM, logging of administrative actions, and all the rest.
If in doubt, ask your QSA (auditor) for an opinion. Contrary to what some people believe, they are there to help!
Thursday, December 8, 2011
Broken aptitude when running in xterm
With recent versions of Ubuntu, and using xterm as your terminal emulator, the package selection tool aptitude has a nasty habit of corrupting the display as it's used. For example, running aptitude, then searching for "test" produces the following:
As the display is updated, some text remains which should have been overwritten with blank space, but isn't. This makes the tool difficult to use, as you're left sorting out the real, current text from the gobbledygook remnants of previous screens. The fix for this problem is to change the TERM environment variable to be xterm-color rather than the default xterm. Unfortunately, this causes another issue because some tools (such as vim) have their own corruption issues when run with TERM set to xterm-color.
The solution is to put the following in your .bashrc:
if [ "$TERM" = "xterm" ]; then
alias aptitude="TERM=xterm-color sudo aptitude"
else
alias aptitude="sudo aptitude"
fi
The reason for the embedded sudo, and for the alias being defined even when TERM isn't xterm? sudo doesn't process shell aliases or functions, and so sudo must be embedded in the alias. Defining an alias even when TERM is already good is simply to preserve consistent behaviour, i.e. never needing to type sudo manually to invoke aptitude.
After doing this, aptitude now behaves correctly when searching:
Good stuff!
Sunday, November 27, 2011
Concatenating PDFs
Concatenating PDF files should be pretty straightforward. On Linux, there are several tools that can do this, including pdftk, pdf2ps and convert, which is a wrapper for Ghostscript. Unfortunately, I had a batch of files that I wanted to concatenate for ease of use on my tablet, and none of these tools worked. pdftk failed repeatedly with the useful error message:
Error: Failed to open PDF file:
input.pdf
Using pdf2ps did create a merged copy of the input files, but it was huge, consisting of bitmap images of the pages and losing the text in the process. ImageMagick convert never ran to completion; I terminated it after it had eaten over 3 GB of memory, presumably while rendering the text into images.
Ultimately, I was able to successfully create a high quality, merged copy of my files by resorting to manually invoking Ghostscript:
gs -q -sPAPERSIZE=letter -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=output.pdf *.pdf
The resulting file is actually smaller than the combined total of the input files, storing text as text, rather than horrid, pre-rendered bitmaps. Ghostscript used a sane amount of memory, and it ran to completion in a sensible amount of time.
To Ghostscript, bravo! To the others, a big "Why"?
Thursday, November 10, 2011
Ubuntu branding #fail
Install Ubuntu 11.10 Server, add Xfce4 desktop environment and reboot. Result? Debian space theme branding on the grub and boot screens. Quality Assurance, anyone? Perhaps images from upstream packages need a little more vetting before importing...
[Image: Debian space-themed boot screen/grub menu on Ubuntu 11.10 "Oneiric"]
Tuesday, November 8, 2011
Linux ICMP redirects
It seems that Ubuntu 11.10 ships with a sample /etc/sysctl.conf which contains the following statement, intended to tell the system not to originate ICMP redirects when acting as a router:
# Do not send ICMP redirects (we are not a router)
net.ipv4.conf.all.send_redirects = 0
Unfortunately, at least with kernel 3.0 as shipped with Oneiric, even after setting this and activating it with 'sysctl -p', it doesn't work. Symptoms are noisy kernel log records such as:
[611513.083432] host 192.168.0.100/if2 ignores redirects for 8.8.8.8 to 192.168.0.1.
If you actually want to disable sending ICMP redirects, you have to explicitly set this per interface in /etc/sysctl.conf, by doing:
# Do not send ICMP redirects (we are not a router)
net.ipv4.conf.eth0.send_redirects = 0
net.ipv4.conf.eth1.send_redirects = 0
net.ipv4.conf.eth2.send_redirects = 0
etc.
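If maintaining one line per interface by hand is tedious, a small shell loop (a sketch; interface names will vary per machine) can generate the per-interface settings from whatever interfaces are currently present:

```shell
# Emit a send_redirects=0 line for every interface the kernel
# currently knows about; append the output to /etc/sysctl.conf.
for i in /sys/class/net/*; do
  printf 'net.ipv4.conf.%s.send_redirects = 0\n' "$(basename "$i")"
done
```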
Tuesday, October 25, 2011
Ubuntu service management
Running 'service --status-all' gives the following output:
[ - ] apparmor
[ ? ] apport
[ + ] apt-cacher-ng
[ ? ] atd
[ + ] bind9
(snipped)
Who on earth thought it was a good idea for the status characters to be symbols that have special meaning when they appear in a regex? It makes doing 'service --status-all | grep ...' less trivial than it should be.
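One workaround is grep's -F flag, which treats the pattern as a fixed string, so '[', ']' and '+' lose their regex meaning. For example, filtering some sample status output for running services:

```shell
# -F disables regex interpretation, so the status markers match literally
printf '[ - ] apparmor\n[ + ] bind9\n[ ? ] atd\n' | grep -F '[ + ]'
```

This prints just the `[ + ] bind9` line, with no escaping gymnastics required.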