The art of building and testing horrible software

We Are Mammoth builds horrible software. At least, the first time it’s released for testing. The second time around, we have our redeeming qualities. The third, fourth, and fifth time, we become masters of our domain (yeah, I said that). By launch, we’ve almost always deployed an application that exceeds original expectations.

This certainly comes at a cost. Clients aren’t always happy with “the way things went” during testing. And it’s often something of a gray finish line in terms of deliverables. We persist, however, because it’s been a successful formula.

You see, it’s impossible (and as such, infinitely boring) to account for 100% of an application’s complexity at the outset of a project. We’re not robots, I think.

Instead, let’s plan for and build about 70% and give it to the user. Now that they’ve seen a living application (albeit incomplete), let’s start tackling all those sticky “I want to have this thing this way” scenarios. It leads to things like this. It’s stressful, for sure. But so is conjuring a 75-page functional specification out of nothing but wild imagination and thin air, and subsequently failing to live up to its standard.

So, how do we do it? Let me give you a nutshell.

Test early, test often. Plan less at the start, and build less up front. Then push it out to the client so we can all collaboratively decide what’s really missing (shortly before the deadline!).

Is this really testing? Or just some perverted form of rapid prototyping? Timing-wise, it probably should be called testing. Client expectation-wise, it’s tough to convince them we’re ‘testing’ a final product. So, let’s call it the “feature definition when it really counts” phase.

Craigism: “You might think you want a fried pickle. Then you get it.”

So, get the product to a high-level place where lower-level decisions can be made from a sounder vantage point. Fine. This will save up-front costs on blind functional requirements and catering to wild imaginations, right? Yep. Won’t you blow those savings once you start testing and realize the additional effort? Not necessarily. Keep in mind, you’re building only the necessities, shortly before the project is due to wrap up.

Let’s be clear on a few things, though. Some things online shouldn’t ship full of holes. Our practice applies to larger efforts generally fitting the “rich web application” description. Got a straightforward portfolio site to build? Do it right the first time.

Also, one big pitfall of a somewhat ‘loose’ definition is that you’ll inevitably get slammed with additional requests. For your sanity, make sure you work out some prudent fee arrangements. More importantly, be fair to your client. They’re looking to you to be even-keeled at this point. Once either party feels the other is pulling tricks, the game gets tougher.

This goes for timing as well. A client who is acutely sensitive to a shifting launch date probably isn’t going to be laughing heartily as your end date approaches. Make sure you’ve primed them on your process. In fact, print the following out on a big sheet cake and send it to them.

The “Just so you know how it’s going to be” Theory On Testing
The effort and complexity of the testing phase is inversely proportional to that of the original requirements specification. Thus, it’s safe to say that the testing phase actually begins at the start of the project, and its effort can be predicted as such. Read: if you want us to get started immediately, there’s going to be some paying up down the road.
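If you prefer notation, here’s a loose, tongue-in-cheek way to write the theory, with E standing for effort (the proportionality is illustrative, not something we actually measure):

E_testing ∝ 1 / E_spec

Skimp on the spec, and the testing phase grows to compensate. Invest heavily in the spec, and testing shrinks. Either way, you pay.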

We find this process works best for us. There are several different approaches to user acceptance testing; we happen to be a very small, flexible team of developers who build everything in-house. We actually like our clients, and we like speaking with them. That’s why.

Outsourcing your code development to a nation far away won’t fit this description – it implies that the software build is abstracted from the day-to-day interactions of the product designers and stakeholders. Protocols and specifications are more necessary in those environments, both legally and procedurally. It also implies many more fluorescent lights and bitter, corporate coffee.

In the end, software doesn’t make mistakes; people do. That doesn’t mean the process of building software has to be sterile, though. With reasonable communication practices, a good issue-tracking tool, and a robust framework and approach to building your software, a more conversational approach to finalizing functionality and testing will ensure the end product is tested by real humans, not against an Excel spreadsheet.