What can the Hotel Hippo debacle teach us about testing?
(Updated 16:08 UTC: Added statement from HotelStayUK)
If you haven't heard about Hotel Hippo, you should start by reading Scott Helme's exposé. It contains a full blow-by-blow account of the problems that he uncovered, and sets the context for this testing-focused article.
In brief, the Hotel Hippo booking website was found to contain serious security flaws, ranging from misconfigured SSL certificates to (get this) the ability to access any customer's booking details simply by changing the booking ID in the URL.
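That booking-ID flaw is a textbook insecure direct object reference (IDOR): the server trusts the ID in the URL without checking who is asking. As a minimal sketch of the difference (all names and data here are hypothetical, not Hotel Hippo's actual code), the fix is a server-side ownership check before returning the record:

```python
# Hypothetical in-memory "database" for illustration only.
BOOKINGS = {
    1001: {"owner": "alice", "hotel": "Seaside Inn"},
    1002: {"owner": "bob", "hotel": "City Lodge"},
}

def get_booking_insecure(booking_id):
    """Returns any booking to anyone -- the Hotel Hippo-style flaw."""
    return BOOKINGS.get(booking_id)

def get_booking_secure(booking_id, session_user):
    """Returns the booking only if it belongs to the requesting user."""
    booking = BOOKINGS.get(booking_id)
    if booking is None or booking["owner"] != session_user:
        return None  # behave as if the record doesn't exist
    return booking
```

Probing for exactly this (log in as one user, request another user's ID) is one of the first things a security-minded tester tries.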
Scott's article, and subsequent discussion on Twitter, triggered this post from Dan Billing:
So, which is worse? We can't know which was the cause of Hotel Hippo's problems, as they've thrown a shroud of secrecy over the whole affair, but we can certainly speculate and hypothesise.
No testers, or even no testing?
It's very possible that the project had no dedicated testers. HotelStayUK has teams in the UK and US, and although I don't know how responsibilities are split between them, it's highly plausible that much of the web development was performed by small remote teams which lacked the budget or resources for a full-time tester.
It's doubtful that there was no testing at all; one would imagine that somebody in the organisation submitted some test bookings, even if it was only a developer, an intern or an office administrator. However, it's unlikely to have been testing of any value; there are very good reasons why test professionals scoff at statements such as "anybody can be a tester".
If you tried to define the testing that a non-tester would perform, most of the time it would fall close to what we call the "happy path". Typically, they'll try to complete a task, and if the task completes, they'll declare "test passed". However, this neglects an important part of happy path testing, which is displayed below as it's defined in the Rapid Software Testing course (with my additional emphasis):
Happy Path: Use the product in the most simple, expected, straightforward way, just as the most optimistic programmer might imagine users to behave. Perform a task, from start to finish, that an end-user might be expected to do. Look for anything that might confuse, delay, or irritate a reasonable person.
I've witnessed non-testers performing their version of "happy path" testing before, and it almost always forgoes the bold part. If they can complete their defined task, that's "test passed"; they're not looking out for the telltale signs of problems along the way.
For instance, would a non-tester notice the SSL error in the first place, or would they just click past it because it's "one of those annoying internet warning messages"? Would they spot that their successful booking contains an ID in the web address, and use this knowledge to inform future security testing? No. (If the answer is Yes: they may have a calling for testing, but they don't know it yet.)
If they're not a tester, they may not even entirely understand why they were given the task in the first place. "John emailed me, and asked me to try making some bookings on Hotel Hippo with his example credit card, so I guess I will." The job of a tester is to ask why. If you don't know why you're doing something, how can you know where to focus your effort, or judge what's important?
Still, if you don't have any dedicated testers, this isn't necessarily the worst situation. As I tweeted back to Dan: at least if you've not got any testers, the company's management is aware of the fact, and can consider this as a project risk. In some situations, that could be totally acceptable. I've previously worked in organisations that churned out huge amounts of static content with quick turnaround times, where problems were rarely anything more complicated than typos. But if you're integrating data collection, payments, or both (as in Hotel Hippo's case), this ceases to be a viable strategy.
...But what if there were testers?
Given the preceding paragraph, surely it follows that "bad testing" is much worse than no testing. If you're doing no testing, usually the business knows this, and can weigh up its risks accordingly. However, if management thinks that testing is happening, but unbeknownst to them it's poor-quality testing, this could give them a false sense of product quality.
I'm not going to mince my words: any tester worth their salt (and certainly any tester who wants to keep their job) should be finding simple URL manipulation issues. My first ever paid testing role (long before I'd even heard of ISEB!) involved pre-release testing of an in-house content management system. It was a relatively basic affair: classic ASP with a SQL backend. Even without any training, it was obvious to me that I could manipulate the "magic numbers" in the URL to access other pages, and that (when those pages contained other customers' data) this was obviously a Very Bad Thing. Likewise, when I submitted form data containing apostrophes and noticed that I was getting strange database statements back? Something was clearly amiss, even if I didn't know the name for it at the time.
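Those stray apostrophes are the classic symptom of SQL injection: user input being spliced directly into the query text. A minimal, self-contained sketch (using SQLite and an invented table, nothing from the actual system I tested) shows why concatenation breaks on an ordinary surname, and how a parameterised query avoids it:

```python
import sqlite3

# Hypothetical bookings table -- invented purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER, guest TEXT)")
conn.execute("INSERT INTO bookings VALUES (1, 'O''Brien')")

guest = "O'Brien"  # a perfectly ordinary surname

# Naive string concatenation: the apostrophe terminates the SQL
# string literal early, producing exactly the kind of "strange
# database statement" errors described above.
try:
    conn.execute(f"SELECT id FROM bookings WHERE guest = '{guest}'")
    naive_ok = True
except sqlite3.OperationalError:
    naive_ok = False

# Parameterised query: the driver handles quoting, so the same
# input is treated as data, never as SQL.
row = conn.execute(
    "SELECT id FROM bookings WHERE guest = ?", (guest,)
).fetchone()
```

An attacker's input just takes the same trick further than a surname does, which is why a visible syntax error on an apostrophe should always ring alarm bells.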
So what were they testing, if they weren't spotting such obvious issues? Maybe they weren't looking beneath the surface at all, and were simply entering bookings through the site (in which case I refer you to the previous section: these people aren't testers). Maybe they'd prepared a large number of scripted test cases which all passed, without giving consideration to whether that's testing at all.
The testers certainly weren't fulfilling an important wider role in the organisation, that of process improvement. An organisation founded on quality would struggle to mistakenly deploy a fundamental issue like this. It suggests that account security was not given serious consideration from the outset; no developer considered it when coding, or during peer review; no unit tests were written to prevent future security regressions; and the test team accepted the build-under-test without first asking about such things.
That's not to say that security blunders don't happen in large organisations; remember this Dropbox issue from 2011? However, the big difference is that Dropbox identified, patched and communicated their issue within four hours. The response from Hotel Hippo wasn't exactly on the same level...
An appropriate response?
Hotel Hippo is one of a number of sites that operate under the HotelStayUK umbrella. In the immediate aftermath of the Hotel Hippo revelations, the HotelStayUK website also disappeared into "under maintenance" mode, but other sites remained up.
However, somebody involved was clearly watching Twitter; there was only 20 minutes between these two messages...
Monitoring social media is a good business strategy. One of the best implementations that I've seen of this is at Last.fm, where an in-house bot monitors Twitter for mentions, filters-out chaff (such as certain post formats where people are sharing their Now Playing status), and publishes the remainder to an internal IRC chat room. This was a great way of finding minor issues quickly, by allowing us to hear from users without relying on forum posts or support tickets.
However, silently shuttering one of your sites, just after somebody mentions it on Twitter? That's not good business. All it suggests to me is that the company had forgotten that the Afternoon Tea site even existed, which doesn't give me a very good impression of how they do business. (And if they don't even remember that a site exists, can we really be surprised when its quality bar appears to be set very low?)
Almost a week later, there has still been no public statement from owner Chris Orrell. However, the company has taken public-facing actions which are easy to interpret negatively. For example, the @HotelHippo Twitter account was deleted, and its website replaced with the following message:
It seems as if HotelStayUK have opted to scrub the Hotel Hippo brand, in the hope that their other trading names can remain unaffected by the fiasco. But the act of doing this only adds to the unclear, unreliable attitude that's prevailed since the problem was first raised. What's so bad about open, honest communication, such as the Dropbox example above? I would have some respect for a company which admitted what went wrong, and declared the remedial steps it would take to prevent it from happening again. (Update: HotelStayUK has since issued a statement, which you can see at the end of the article.)
Indeed, Orrell's LinkedIn and Twitter accounts are now linking to a new site, Find Rent Go, which shows all the hallmarks of being unfinished, such as placeholder content in the footer. Perhaps wisely, it's an affiliate site, where users are sent to external sites to perform bookings. Testing still doesn't seem to be on the agenda though; there are only a few inputs (destination, date, length of stay, guests), but manipulating them in unexpected ways can produce erratic results. Who fancies a holiday in XXXX for New Year's Day, 1 AD? Apparently there are rooms available...
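Erratic results from impossible dates are exactly what boundary-value testing exists to catch. As a hedged sketch (a hypothetical validator I've invented for illustration, not Find Rent Go's code), even a few lines of input validation would turn a check-in date of 1 AD into a polite error rather than a list of available rooms:

```python
from datetime import date

def valid_checkin(date_str, today=None):
    """Return True only for a well-formed ISO check-in date that
    isn't in the past. A hypothetical example validator, not taken
    from any real booking site."""
    today = today or date.today()
    try:
        checkin = date.fromisoformat(date_str)
    except ValueError:
        return False  # malformed input, e.g. "XXXX-01-01"
    return checkin >= today
```

A tester probing those boundaries (year 0001, malformed strings, yesterday's date, dates decades in the future) will find the gaps in minutes if checks like this are missing.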
Chris, if you're reading this, drop me a line. If you are serious about your site's quality, I'd be willing to give some of my time for free, if you're willing to invest time in fixing the issues that I find.
This post was heavily inspired by Joe Strazzere's "Perhaps They Should Have Tested More" series. If you enjoyed my post, you'll get a kick out of Joe's for sure.
Updated: Statement from HotelStayUK, 16:08 UTC
"A statement was issued to interested media on 2nd July and was reported by some.
HotelHippo has shut down and will not reopen. Our investigations showed that just 24 customers were affected by the issues with HotelHippo. This was a small very little used site. But for even one customer, it is obviously completely unacceptable and we are very sorry. We have therefore contacted all these customers and have offered them compensation. We have also set up a helpline where customers can contact us by calling 08446 606 007.
Security of our customers’ data is of the upmost importance to us. Despite there being no issues with our other sites, as the login process is quite different, as a precaution, we advised affected customers and took down all sites in the group one by one to put them through rigorous testing by independent experts to ensure their safety and security. These independent experts will be employed on an on-going basis to regularly test our sites."
(Maybe it's just my tester's curiosity, but it seems like quite a jump to go from "not particularly considering any security testing" to "hiring multiple security testing experts to audit our entire website portfolio inside 3 working days". Still, if accurate, it does at least sound like a proportionate response.)