
What are programmers most scared of this Halloween?

JAX Editorial Team

Frightened pumpkin Halloween image via Shutterstock

Every developer has endured an uncomfortable, nightmarish situation at some point. If the following examples don’t bring up memories you have tried to repress, we invite you to share your most terrifying ones. This Halloween is all about facing your demons.

Every day can be a Halloween terror if you have to face any of the following: intense boredom, release delays or being forced to use tools that you hate. As Craig Buckler wrote in an article on Sitepoint, there is nothing scarier than the word “just” — when suggestions begin to pour in, chances are you will drown in (extra) work. Add the oh-so-common phrase “It won’t take long” and you have yourself a full-blown nightmare.

Horror stories — Happy Halloween!

What comes to mind when you hear that one programmer crashed a stock exchange, or that another deleted an entire operating system? Panic? Distress? These are all true Halloween scares that participants from previous JAX London conferences have lived through.

“I crashed the New Zealand stock exchange”

So…I crashed the New Zealand stock exchange. I shut down the servers in the wrong window.

We were sitting in the Ops room, and we had a police siren set up for when the stock exchange goes down because we need to know about it. So I enter my command on the keyboard, and all of a sudden, the siren kicks off.

I didn’t put two and two together to start with. I thought, “Cool! Our siren’s gone off.” So I started looking at various systems, and then I thought: oh, hang on a second.

And that’s the day we discovered our disaster recovery did actually work. It had never been tested before.

“I managed to delete an entire operating system”

A long time ago, computers were set up very differently from today. Everything lived on removable disks – they’re all hidden away these days, but back then you had great big cabinets. We had something like £25m worth of developed software on those disks, and I was doing the rotating backups. Somebody had set a switch on the front of the computer the wrong way – the auto-start switch.

So, what happened was, I put the disk in, we had a little power glitch, and it auto-started halfway through the backup. I corrupted the first £25m worth of software, and then I put the next disk in and corrupted that one. And then I put the next one in, because I was doing the full rotation! And in the end, I had none left.

I spent the next three weeks patching together a nearly complete operating system and software. Never quite got there. I lost about two months’ work in the end. Oh my god, was I slaving away for hours and hours.

SEE ALSO: How developers celebrate Halloween

“The software we released had a huge security flaw”

We released a service that allowed people to download software, and it had a huge security flaw in it – which we realized about fifteen minutes after we deployed it. We then frantically fixed the vulnerability and pushed the fix out there. We didn’t get exploited or anything, so everything was fine, but it was quite a good test of “when the shit hits the fan, how good are you at getting a fix out?”

“The cleaner’s elbows were smacking up against the servers”

A lot of companies forget to lock the rooms where their server farms are, and the cleaners will just go in and clean, right? So they’ll dust the computers, and off switches get flipped – Chaos Monkey style. That happened at a major, major investment bank here in London.

They let the cleaners into the server rooms, and all of a sudden major production systems started going down. Everyone’s going, “what’s going on?”, and they rang down to security thinking they were being hacked – and the cleaner’s there hoovering, headphones in, elbows smacking up against the servers.

“A bug that cost £100,000”

I used to work for a really famous bank, and we were doing some work for one of our clients who takes credit card payments. I had to make a change and accidentally committed something without a test, and six months later they found out there was a bug, and it cost them £100,000.

That’s not the worst bit. So I patched it, and we got the fix onto a couple of branches. And no one pulled it in. I was there for another eighteen months, and by the time I left it still hadn’t been patched.

And the client – we had no way of identifying which version of the code they had, because they were all physical sites somewhere. As I remember it, upgrades meant an engineer going out with a USB pen, whenever he felt like it, just plugging it in and updating their software. So that bug is probably still out there…

Do you have any horror stories to share?