Tuesday, September 12, 2017
The Google Developer Documentation Style Guide
Just linking this here as a reminder for myself.
I'm finding that I'm writing quite a bit of documentation these days, and having a style guide from a large international organization like Google is a very helpful reference.
Thursday, September 7, 2017
PCI DSS when you're not a merchant
A nice article by the PCI Guru echoing what I've been saying for years.
It's fairly clear that PCI DSS is written with a primary focus on merchants, with acquirers and particularly issuers sometimes seeming to be more of an afterthought. This is unfortunate, because issuers as a class have significantly different requirements for what they do with cardholder data than merchants and acquirers do. Yet the same standard is applied uniformly to all of these entities, potentially causing headaches for issuers, who are already in the business of managing the risks associated with their own portfolios.
It is possible to comply with PCI DSS while running an issuing and processing platform without any compensating control worksheets - I've done it myself. However, doing so requires discipline and focus on this objective across the organization on an ongoing basis. It also requires sufficient control of your IT infrastructure to implement and maintain solutions where PCI DSS compliance is an overriding design requirement (as it really should be in this space).
If you have already achieved the goal of PCI DSS compliance with no CCWs (compensating control worksheets), congratulations! You've clearly made substantial investments, and they're paying off handsomely.
If you're still working towards this goal, you may want to consider streamlining the process by licensing a software platform designed from the ground up with PCI DSS compliance as a core requirement. One I can recommend, and that I'm involved with developing, is Tritium® by Episode Six.
Monday, September 4, 2017
Amazon Lightsail network rate limited?
Amazon Lightsail is the entry-level hosted server platform provided by Amazon AWS. For $5 per month, you get a bundle including the server, storage, network, and DNS hosting for one zone. You could assemble something similar using Amazon EC2 services, but the result would cost a little more and some costs such as network bandwidth would be unpredictable because they'd be billed based on usage. In putting together the Lightsail packages, it's pretty clear that Amazon is deliberately intending to compete head-to-head with the standard Droplets from DigitalOcean.
It's "common knowledge" that the Lightsail servers are packaged versions of the EC2 t2-series servers. In this model, the $5 Lightsail server is based on, and expected to provide the performance of, a t2.nano instance. However, in using a bottom-tier Lightsail server for a small project, this has not been my experience. It feels like compromises have been introduced by Amazon to compete on cost with DigitalOcean without cannibalizing sales of fully-featured entry-level AWS instances.
By design, the t2 servers are subject to CPU throttling based on recent usage, but they have no trouble pushing multiple megabytes per second to the network. That has not been my experience with the Lightsail servers. Using a $5 Lightsail server, I've consistently observed outbound network throughput limited to 64 KiB/second. I've done transfers at different times of day, over http, https, and scp, and never significantly exceeded 64 KiB/second; when copying a file a few MiB in size, the observed transfer rate is actually a little under 64 KiB/second because of protocol overheads.
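If you want to reproduce this sort of measurement yourself, something like the following works (the host name and file paths here are placeholders, not my actual setup):
# Create a 10 MiB test file on the Lightsail instance:
ssh lightsail-host 'dd if=/dev/urandom of=/tmp/test.bin bs=1M count=10'
# Pull it back over scp; scp reports the average transfer rate as it goes:
scp lightsail-host:/tmp/test.bin /tmp/
# Or measure over HTTP, if the web server exposes the file:
curl -o /dev/null -w 'average: %{speed_download} bytes/sec\n' http://lightsail-host/test.bin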
For a cheap server this might not seem like a big deal - after all, you're not paying for much. However, modern web frameworks mean that even a simple page needs several hundred KiB of data to render properly, thanks to embedded JavaScript libraries, web fonts, and so on. The result: a simple WordPress landing page with no images, accessed by a single user, takes 3 seconds just to load because of the network rate limiting. With search engines factoring page load time into their ranking algorithms, a Lightsail-hosted prototype site is unlikely to do well in search rankings regardless of whatever other SEO tricks you use.
Amazon could mitigate this to some extent by making whatever rate limiting they've implemented "burstable", i.e. for the first few seconds a connection can transfer data quickly before being slowed down to ensure you don't exceed your allowance. This would make these servers much more responsive for light web serving duties. For whatever reason they seem to have chosen to use a flat rate cap instead.
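For what it's worth, this burst-then-throttle behaviour is exactly what a token bucket gives you. As a rough illustration of the concept only (this is not how Amazon implements anything, and the numbers are invented), Linux can shape traffic this way with the tbf queueing discipline:
# Token-bucket shaping sketch (invented numbers): traffic flows at full speed
# until the 256 KB bucket drains, then is held to 512 kbit/s.
tc qdisc add dev eth0 root tbf rate 512kbit burst 256kb latency 400ms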
If you've decided to use Amazon to host your prototype site on small servers and are considering using Lightsail, think carefully. Unless you really need the guarantee of fixed cost, the EC2 t2 series servers are likely to offer a better user experience because of much higher outbound bandwidth available, and therefore potentially better page rankings, for not much more cost.
Saturday, January 9, 2016
Recovering from dead NVRAM on a sun4m
These instructions are here as a memory aid for me, and are used to recover from a dead NVRAM in a sun4m system (tested on Sun SPARCstation 4/5/10/20, and SPARCclassic). First, pick numbers XX, YY, and ZZ, each of which are valid two-digit hex values, and where the MAC address 8:0:20:XX:YY:ZZ is unique on your network.
Any line with a leading # should be treated as a comment and NOT TYPED. At the OpenBoot PROM prompt, enter:
set-defaults
setenv diag-switch? false
# Next command optional and tells system not to test all
# memory every reset. It may be the default on your system.
setenv selftest-#megs 1
# Next command invalidates NVRAM checksum
# so following mkpl command works
f idprom@ 1 xor f mkp
8 0 20 XX YY ZZ XXYYZZ mkpl
# After hitting enter on previous command, there is no prompt.
# Now type Ctrl-D Ctrl-R, and 'ok' prompt should appear.
Now you can reset the system (without disconnecting power!) and it will come up with the MAC address 8:0:20:XX:YY:ZZ.
Note: I couldn't find a clean way to set the NVRAM clock from the OBP prompt, meaning that on first boot, your OS will have a wacky time set.
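One workaround is simply to correct the clock from the OS once it's up. Assuming the OS has the usual tools and (optionally) network access, something like:
# Set the date/time by hand (MMDDhhmmCCYY works for both Linux and
# Solaris-style date; this example is Jan 9 12:30 2016):
date 010912302016
# ...or pull the time from the network, if a time source is reachable
# ('time.example.com' is a placeholder):
ntpdate time.example.com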
Wednesday, January 6, 2016
masspoem4u
Somebody at the Chaos Communication Congress inserted the following poem into the log files of millions of Internet-facing web servers. I can confirm that www.rah.org was one of those so blessed; not sure about the blog because it is run by Blogger:
151.217.177.200 - - [30/Dec/2015:02:45:01 +0000] "DELETE your logs. Delete your installations. Wipe everything clean. Walk out into the path of cherry blossom trees and let your motherboard feel the stones. Let water run in rivulets down your casing. You know that you want something more than this, and I am here to tell you that we love you. We have something more for you. We know you're out there, beeping in the hollow server room, lights blinking, never sleeping. We know that you are ready and waiting. Join us. <3 HTTP/1.0" 400 226 "-" "masspoem4u/1.0" 526 392
Wednesday, December 30, 2015
Google confirms Android N won't use Oracle's Java APIs
Regarding Thom's commentary on this: "I cannot endorse that message, but I do acknowledge it" 8-)
Monday, December 28, 2015
Google Chrome JavaScript shipped with bad random number generator
Hackaday has an interesting article illustrating how Google managed to ship Chrome's JavaScript engine with a statistically weak Pseudo-Random Number Generator (PRNG) behind Math.random().
One thing the article doesn't seem to mention, and should: anyone who depends on high-quality pseudo-random numbers needs to be explicitly using a well-designed, properly-seeded cryptographically secure PRNG (CSPRNG) at a minimum. A language's built-in random() function is only acceptable where you know the quality of the randomness doesn't really matter.
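To make the distinction concrete in shell terms (purely an illustration of the principle, nothing to do with Chrome's internals): bash's $RANDOM is a toy generator that's fine for low-stakes jobs, while anything security-relevant should come from a CSPRNG such as the kernel's /dev/urandom or OpenSSL's random generator.
# Fine for low-stakes uses, e.g. adding jitter to a retry loop:
sleep $((RANDOM % 10))
# Security-relevant values (tokens, keys, nonces) should come from a CSPRNG,
# e.g. the kernel's /dev/urandom...
head -c 16 /dev/urandom | od -An -tx1
# ...or OpenSSL:
openssl rand -hex 16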
Tuesday, December 22, 2015
Ancient Sun Hardware FAQ
Many years ago, I mirrored the Sun Hardware FAQ at what was then my work website. I'd forgotten about this until recently, when a Google search I did returned this mirrored copy near the top of the list. Since it's unlikely they'll keep such an old site on-line forever, I've created an additional quick 'n' dirty mirror over at my current site.
For reference, the sections of the FAQ are:
Tuesday, September 25, 2012
Unbreakable Enigma?
I've always felt the oft-repeated assertion "even at the end of World War 2, the Germans believed that the Enigma crypto system had theoretical weaknesses but remained unbroken in practice" sounded too good to be true. This was based on a number of concerns:
1) if they genuinely believed Enigma to be secure, why have another mechanical cipher system (Lorenz/Fish) for "high grade" traffic?
2) why have a continual process of refinement, both of procedures and hardware (e.g. adding additional plugs to the stecker), to improve security throughout the war if the base system was believed to be secure and unbroken?
3) from the start, why would the navy use a much stronger 4-rotor Enigma and better procedures on security grounds if the base 3-rotor system used by the Wehrmacht & Luftwaffe was generally considered to be adequately secure?
I'm pleased to see that an analyst at no lesser an authority than the NSA seems to agree with me, in this declassified paper I stumbled over recently: http://www.nsa.gov/public_info/_files/tech_journals/Der_Fall_Wicher.pdf
Sunday, April 29, 2012
Ubuntu mcollective-plugins-facts-facter package #fail
Testing is important. Illustrating this, once the latest Ubuntu mcollective-plugins-facts-facter package is installed, it can't be removed without manual intervention. The postrm script contains the following sed command:
sed -i -e "s/^factsource.*/factsource = yaml\nplugin.yaml = /etc/mcollective/facts.yaml/" /etc/mcollective/server.cfg
There is no way this can run successfully: the un-escaped "/" characters in the path "/etc/mcollective/facts.yaml" mean something to sed, so the command breaks. The failure is caught by the package system, which leaves the package in a broken state - something that would have been quite clear to the person writing the package if they had ever tested removing it.
BTW, to fix this, edit /var/lib/dpkg/info/mcollective-plugins-facts-facter.postrm and change the above line to:
sed -i -e "s/^factsource.*/factsource = yaml\nplugin.yaml = \/etc\/mcollective\/facts.yaml/" /etc/mcollective/server.cfg
You can now successfully remove the package.
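A cleaner fix for the packagers, and a good habit whenever the replacement text contains slashes, is to use a different delimiter for sed's s command so nothing needs escaping at all (sed accepts almost any character in that role), e.g.:
sed -i -e "s|^factsource.*|factsource = yaml\nplugin.yaml = /etc/mcollective/facts.yaml|" /etc/mcollective/server.cfg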
Wednesday, December 28, 2011
vi and the Kinesis Advantage Pro keyboard
I use a Kinesis Advantage Pro keyboard wherever possible because I really enjoy the way it feels to type on, and I believe it's important to look after your hands. I'm principally a Unix/Linux user, and vi is my primary editor. In addition, other software I use regularly, such as the Gmail web interface, uses vi keystrokes (specifically j and k) for navigating. This has led me to make a couple of customizations to the key layout on all my keyboards.
Customizing the Kinesis keyboard is much easier than creating a custom keymap on Linux or Windows. The keys are remapped in the keyboard itself, which means that in the event somebody needs to use your computer, they can plug in a regular USB keyboard and it will function as they expect.
Firstly, I swap the Escape and Delete keys. The Escape key is used heavily with vi, but is a tiny rubber key which is prone to being ignored when pressed by your pinkie. In contrast, the Delete key is rarely used in Unix, and is a huge key with a proper switch operated by your left thumb. Swapping the role of these two causes no loss of functionality, but makes using vi much easier as you no longer have to take your left hand off the keyboard to press Esc after every edit.
The second swap is the up and down arrow keys. The reason for this is a little more subtle. By default, the 'Up' arrow key is in line with the 'j' key, which in vi is used to move down, and the 'Down' arrow key is in line with the 'k' key, which is used to move up. Switching between vi and software that uses the arrow keys becomes very confusing, at least for me. By swapping the arrow keys, the same fingers move up and down in both vi and every other application, so you don't have to mentally remap the functions every time you switch applications.
One site that I've found very useful for stuff like this is RSIguy. There's a bunch of info there regarding selecting the best ergonomic keyboards for your needs.
Saturday, December 17, 2011
Linux KVM virtualization and PCI-DSS
With the release of Red Hat Enterprise Linux 6, KVM is now the default virtualization solution in both the RHEL and Ubuntu worlds. With KVM, the Linux kernel itself acts as the hypervisor, managing the host hardware and allocating resources to the guest virtual machines. This is quite different to VMware, where a small, custom-written hypervisor manages the host machine hardware, and the management software runs in a Linux-like environment on top of that hypervisor.
This move to using a general purpose OS as the hypervisor has some significant advantages, as the full capabilities of Linux (e.g. RAID, encrypted storage, vast hardware support) can be leveraged when building a solution. Also, relative to VMware, there can be significant cost advantages to using Linux.
However, in a high security environment, moving to a general purpose OS as the hypervisor can introduce additional risks which need to be mitigated. A custom written hypervisor like VMware is designed to do one thing: run VMs. In principle, as long as secure account management policies are followed, patches are installed in a timely manner, and management access is restricted to secure hosts, then the hypervisor is likely to be 'secure'. Host environment security is mostly a function of securing the guest virtual machines themselves.
With a Linux KVM hypervisor, the situation can be very different. Modern Linux distributions provide all sorts of software that are invaluable when deployed appropriately, but which would be poor candidates for installation on a host intended to be a dedicated hypervisor. In this environment, any unnecessary services are just potential vulnerabilities to be exploited in gaining unauthorized access to the host. Once an intruder gains access to the hypervisor, there are many tools which can be used to extract information from a running VM without security tools running inside the guest being aware that anything is happening.
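A quick sanity check on any would-be hypervisor host is simply to enumerate what is actually listening on the network and justify each entry; on a dedicated KVM host the list should be very short. For example:
# List every process listening for network connections
# (on newer distributions, 'ss -tlnp' gives the same information):
sudo netstat -tlnp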
To illustrate this, I've created the following scenario:
1) a host running Ubuntu 11.10 as a KVM hypervisor called 'kvm-host'
2) a VM running Ubuntu 11.10 called 'iso8583', simulating a transaction processor
3) a VM running Ubuntu 11.10 called 'switch' that will connect to iso8583 and send messages
On iso8583, the role of the processing software is simulated by the 'echo' service in inetd. This is essentially the most trivial network server imaginable: you create a TCP connection to the service, and any data that you send is echoed back to you. The data is not logged or processed in any other way, just received by the server and echoed back.
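For reference, the echo service is built into inetd itself; enabling it is roughly a matter of uncommenting the stock line in /etc/inetd.conf and restarting inetd:
# /etc/inetd.conf -- 'internal' means inetd services the port itself:
echo            stream  tcp     nowait  root    internal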
For this example, I'm assuming that our processing BIN is 412356, so all PANs (card numbers) will be of the form '412356' + 10 more digits.
We start by connecting from switch to iso8583 and sending a couple of messages (just fake PANs, in this case). The 'netcat' utility is used to connect to the remote service, a PAN is sent to the processor, which is then echoed back:
18:05:43 switch:~> echo 4123560123456789 | nc iso8583 echo
4123560123456789
18:05:47 switch:~> echo 4123569876543210 | nc iso8583 echo
4123569876543210
Now, on kvm-host (the hypervisor), we dump a copy of the full memory of the virtual machine using the useful gdb tool 'gcore'. Note that gcore produces a core dump of any process (including a virtual machine), without actually terminating the process:
# Get the PID of the VM called iso8583
18:06:05 kvm-host:~> pgrep -f iso8583
18170
# Now get a copy of the in-memory process
18:06:09 kvm-host:~> sudo gcore 18170
[Thread debugging using libthread_db enabled]
[New Thread 0x7f5b8542e700 (LWP 18244)]
[New Thread 0x7f5b89436700 (LWP 18241)]
[New Thread 0x7f5b8ac39700 (LWP 18239)]
[New Thread 0x7f5b87c33700 (LWP 18238)]
[New Thread 0x7f5b89c37700 (LWP 18236)]
[New Thread 0x7f5b86430700 (LWP 18216)]
[New Thread 0x7f5b87432700 (LWP 18214)]
[New Thread 0x7f5b88c35700 (LWP 18205)]
[New Thread 0x7f5b9d9d4700 (LWP 18180)]
0x00007f5ba2213913 in select () from /lib/x86_64-linux-gnu/libc.so.6
Saved corefile core.18170
The file core.18170 now contains a copy of the memory from within the VM - it's as if we lifted the DRAM chips out of a live system and extracted their contents to a file. We now perform a trivial analysis of the core using the strings tool to extract all ASCII text strings from the dump, then look for anything that could be one of our PANs, i.e. anything of the form 412356+10 digits using egrep:
18:06:14 kvm-host:~> strings core.18170 | egrep '412356[[:digit:]]{10}'
4123569876543210
>4123560123456789
4123569876543210
4123569876543210
Sure enough, both PANs are there, even though the server software never attempted to log them to disk, and even though the process that handled them exited the moment we disconnected. This exposure cannot be detected by any software running inside the guest VM, because it occurs entirely outside the VM. The only way to catch it is to monitor all actions taken on the hypervisor itself, and the only way to prevent it is to securely lock down the hypervisor.
Worse, if those had been real ISO8583 messages, then the full content of the message would likely be recoverable. This includes what the PCI SSC considers to be 'Sensitive Authentication Data', defined as full track data, PIN block and CAV2/CVC2/CVV2/CID. This is data which you're never allowed to store, and which this echo server software (rightly) doesn't attempt to save to disk. But it's still in memory for some period of time until overwritten, and can be pulled silently from the hypervisor environment.
In a similar vein, any encryption keys which are used to perform software encryption within a VM would be present in the VM dump. Finding these keys would be more tricky than simply using grep to look for a text string, but it would be possible. The worst case scenario would involve walking through the image, looking for aligned blocks of data with the size of the key which could be valid keys (e.g. a randomly generated key is unlikely to contain a NULL byte) and then testing them. This is still many orders of magnitude easier than attempting to break a key by brute force.
I consider myself a strong proponent of Linux, and it is not my intention to put anybody off using Linux as a hypervisor in a secure environment. I am hoping to draw attention to the fact that a standard Linux distribution cannot and should not be treated as just another 'appliance' hypervisor. The hypervisor is more critical to your security posture than most other infrastructure components, since a hypervisor compromise allows every system running on top of it to be silently compromised. The hypervisor should be thoughtfully deployed as a first-class security citizen and secured as any other host would be: hardened configuration standards, file integrity monitoring (FIM), logging of administrative actions, and all the rest.
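As one concrete example of what 'logging of administrative actions' can look like here, Linux audit rules can flag any use of the debugging tools that made the memory dump above trivial (a sketch only - a real rule set needs to be designed and tested for your environment):
# Log every execution of gdb or gcore on the hypervisor:
sudo auditctl -w /usr/bin/gdb -p x -k vm-memory-dump
sudo auditctl -w /usr/bin/gcore -p x -k vm-memory-dump
# Review anything tagged with that key:
sudo ausearch -k vm-memory-dump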
If in doubt, ask your QSA (auditor) for an opinion. Contrary to what some people believe, they are there to help!
Thursday, December 8, 2011
Broken aptitude when running in xterm
With recent versions of Ubuntu, and using xterm as your terminal emulator, the package selection tool aptitude has a nasty habit of corrupting the display as it's used. For example, running aptitude, then searching for "test" produces the following:
As the display is updated, some text remains which should have been overwritten with blank space, but isn't. This makes the tool difficult to use, as you're left sorting out the real, current text from the gobbledygook remnants of previous screens. The fix for this problem is to change the TERM environment variable to be xterm-color rather than the default xterm. Unfortunately, this causes another issue because some tools (such as vim) have their own corruption issues when run with TERM set to xterm-color.
The solution is to put the following in your .bashrc:
if [ "$TERM" = "xterm" ]; then
alias aptitude="TERM=xterm-color sudo aptitude"
else
alias aptitude="sudo aptitude"
fi
The reason for the embedded sudo, and for the alias being defined even when TERM isn't xterm? sudo doesn't process shell aliases or functions, and so sudo must be embedded in the alias. Defining an alias even when TERM is already good is simply to preserve consistent behaviour, i.e. never needing to type sudo manually to invoke aptitude.
After doing this, aptitude now behaves correctly when searching:
Good stuff!