
Database Horror Stories by Spiceworks Community

IT is a thrilling world, full of unpredictable cybersecurity threats. Databases in particular are a place where you always need to watch out for perils and pitfalls. With Halloween fast approaching, we offer some hair-raising database stories to make you feel the terrifying spirit of the holiday.

In this blog, brave IT pros from the Spiceworks community will walk you through the scariest database incidents that really happened to them, and show you how to fight these dangers in the most effective way.

Now grab some candy — and turn the lights on, as the stories below will definitely make your blood run cold.

Story #1 by sjenkins

It was a dark and stormy day in the IT department.

I manage the on-site school databases for all things student-related. Skyward, Pearson Access Next, ACT — if it's educational and technological in any way, shape, or form, chances are it will land smack dab in my lap.

It was the start of a new school year and things looked bleak. I’d spent the previous days in Zoom and Webex meetings, prepping for how to do testing in the time of Covid. I’d barely taken a sip of my first cup of coffee when an email crossed my path.

“All student information must be entered in PearsonAccessNext for the fall EOC testing by September 25th or DOOM SHALL BEFALL THEE.”

I looked at my calendar. It was the 18th. A week? “No problem,” I thought, forgetting that thinking such a thing usually leads to disaster and unmitigated horrors you wouldn’t wish upon your worst enemy.

I had to create the student profiles by hand in Pearson from the student profiles in the Skyward database, a time-consuming, mind-numbing task, but I had coffee and a will to power through so that it wouldn’t be waiting for me on Monday morning.

I got to work, typing and double-checking every state ID number and birthdate as I went. The hours flew by, and everything seemed to be going fine.

Four hours in, the coffee I’d been drinking coursed its way through my system and told me it was time for a break.

I walked away feeling good about how the day was going. Little did I know.

When I returned, I logged back into my computer and the PearsonAccessNext website.

As I clicked on the “Students” section, a chill coursed up and down my spine.

“NO RESULTS” stared back at me.

The hours I'd just spent creating profiles for almost all of our students had vanished!

Nothing remained! Not a single student!

I cried! I wept! I drank more coffee and despaired!

Where oh where had my testing database gone!

I wailed in my office, frightening the only other person I shared it with.

I clicked everything I could think of, searching for the work I had so painstakingly created, but it was a ghost in the machine.

It haunted me the rest of the day. I couldn’t face re-entering everything in the last half of the day. I would wait until Monday and try again.

The specter of my lost database loomed over me the entire weekend as I tried in vain to recover.

On Monday morning, I returned to face my doom.

I logged once more into the Pearson site. Nothing.

I looked at the page once more, as though through a phantom mist.

The page said “registered students.” I hadn’t registered anyone on Friday. I had merely created the entries for the students.

I clicked “SHOW ALL REGARDLESS OF REGISTRATION.”

My data came flooding back in!

A few entries were still missing, but three-quarters had survived the mishap!

I rejoiced, resumed my task, and praised the tech gods above that I wouldn't have to start from scratch.

Story #2 by patcutrone

Last month, I was asked by my managers to move our analytics program over to a new domain.

The software is a FOSS program called Matomo. Our current implementation was running on Ubuntu with Apache as the back end, and I was the one who originally built it about 3 years ago. I’m a documentation fiend and had tested a similar operation just a few months before, so this should have been a walk in the park.

To make it more interesting, the managers had asked me to try it with a different web server to see if we could improve the performance a little bit. So, off I went to rebuild the instance on our cloud platform. It took me all of a few hours, and by the end of the day, I had a working instance ready for the DB to be restored. I felt great! I had completed the request in record time — the data restore would only take a couple of hours and we’d be up and running by our coffee break.

I grabbed my documentation and ran the command I had tested only a couple of months earlier to restore the database — and then I could no longer log in to phpMyAdmin. EH? (That means WTF in Canadian.)
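(For anyone wondering what such a restore looks like: a Matomo instance backed by MySQL is typically dumped and replayed with mysqldump, roughly like the sketch below. This is a minimal sketch only, not my actual documented commands; the database name "matomo", the account "matomo_user", and the file "matomo_backup.sql" are hypothetical placeholders.)

# Dump taken on the old server:
mysqldump -u matomo_user -p matomo > matomo_backup.sql
# Replayed on the new server:
mysql -u matomo_user -p matomo < matomo_backup.sql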

I started going through everything again. I could log into MySQL via the Ubuntu command line and run operations. I could see all the data was there, but when I tried to log on via the web … nothing.

I have almost ZERO database training. I took an Oracle class in 2003 as part of my degree, so everything I ‘know’ has come from building my servers on open source software, and I document literally every command I ever run. I checked everything: I went over all the commands, checked every permission I could think of. No luck. Everything worked fine until I tried to restore. I even went so far as to go to the current instance and take a new backup of the data and try with that.

Out of sheer desperation, I thought maybe it was because the second instance wasn’t 100% identical: I had used NGINX instead of Apache, the original instance didn’t have phpMyAdmin, we use a proxy server at HQ — SOMETHING. So I started over.

I built the second one as an exact duplicate — same programs, same passwords, everything I could find. I got the instance up, installed a clean Matomo, and everything worked again. Then I restored the database: F*%S!#S$ … SAME issue! I was pulling my hair out. I talked to my manager, our software architect, and the real developers. I had 4 people looking at this dang thing, trying to figure it out. NO ONE had a clue. It should have worked, but I was still getting the same error. It pointed to a password issue, but the passwords all matched what I had documented AND I could access it via MySQL on the server directly.

I opened up another CLI to my live instance and started doing a side-by-side comparison of things — checking authentication methods, permissions, configs, anything that could cause this authentication problem.

Then I logged into MySQL and noticed that there were 2 entries in the user table for my DBUser.
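(Quick aside: two rows for the same user name in mysql.user are not automatically a problem, because MySQL accounts are keyed by user AND host, so 'database_user'@'localhost' and 'database_user'@'%' are two distinct, perfectly legitimate accounts. Something like the following would have shown what was actually defined; the account name here is just the placeholder used in this story.)

# Inspect the accounts and their authentication plugins; 'database_user' is a placeholder name.
mysql -u root -p -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'database_user';"
# And the privileges attached to one of them:
mysql -u root -p -e "SHOW GRANTS FOR 'database_user'@'localhost';"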

REMEMBER … I know a TON about databases (not!).

It’s a duplicate! Maybe that’s the problem!?? There’s a duplicate user! Maybe it hadn’t removed the original one from the old database?? I was so happy — maybe I fixed it!? OMG … maybe that’s it!

By this point, I had rebuilt this instance or restored from a backup at least 4 times, I had had countless people looking at it and making suggestions that didn't work, and I had spent hours and hours Googling the error. Everything pointed back to authentication, so the logic that it could have been a duplicate user made sense in my addled brain.

Not wanting to be rash, I quickly asked someone (also not a DB admin), "Hey, do you think it could be a problem if there's a duplicate in this table?" … "That may be," he said.

DROP USER 'database_user'@'localhost';

And then I realized what I had done.

That was NOT my new server….

<<Freaks Out>>

"OK," I thought, "I'll just restore it from the backup! I have a backup! This is good! It's the original instance that I built, this is the exact one I had documented … easy, easy." So off I went, grabbed the database, opened up my password manager, copied the DBUser password, and ran my restore command….

Now my LIVE instance had the SAME problem as the new one! I wanted to puke. WHY WHY WHY? My brain was yelling, "It's the SAME instance, what changed?? WHY is it complaining about the password?? It's RIGHT!! It's in the password manager, it's RIGHT THERE!" It was a very intense conversation in my basement office at home; thankfully my elderly mother, who lives upstairs in my house, is a heavy sleeper.

Then it hit me….

I went to our secure file share archive for the password manager and downloaded the copy that was made at the time I originally set up that instance.

F%$#!? The original password for that DBUser was about 6 characters longer than the one documented in the most recent version of that password manager. At some point, that password may have been accessed and accidentally truncated, and the change had been propagating forward for months. It WAS wrong.

I ran the restore command again — with the right password — and it worked! Everything came back! My new instance had the same problem AND the same solution. By the end of the day, I had them both working and all the data restored.

My MONTH-long horror had finally ended.

Story #3 by IT_Sean

We find our hero, a lowly helpdesk drone working for a company that manufactures encrypted communication hardware. He's sitting in his crappy office, in a crappy building, in a crappy part of a crappy state, in a crappy part of the country, idly contemplating the idea of packing up his personal crap, telling his boss what he really thinks of him, and leaving. Suddenly, a customer calls.

Our intrepid hero sighs, then answers the phone with his usual greeting. The caller turns out to be one of the company’s larger customers, who use the company’s devices for out-of-band management access to remotely deployed hardware at unmanned locations. Apparently, the server on which they housed the database containing all of the encryption keys and device configurations had suffered a catastrophic failure!

Our hero leaps into action and offers the customer a number of options for recovering the database from a backup. His customer pauses for a moment. A deep sigh comes through the phone, and the customer quietly admits that they have no backup of the database. He then asks our hero if there is any possible way to regain access to these devices, or perhaps remotely reset them. Our hero sadly informs the customer that without the encryption keys stored in the now unrecoverable database, any communication with the devices is wholly impossible until they are reset, which can only be done in person, on the device itself.

Crestfallen, the caller thanks our hero and ends the call … likely pondering his surely impending transfer to a radar station somewhere in the back end of Alaska.

Several months after this call, our hero did bravely pack up his personal belongings, tell his boss what he really thought of him, and leave, never to set foot in that office again. At that time, the customer was still sending people to each of their devices to insert a key, turn it to the Reset position, turn it back, and remove it — a process that took them nearly THREE years to complete for all of their devices.

Always have good backups.

Story #4 by shivarr

I work for an IT services provider, and this happened back in 2011, but in my memory it seems as though it happened just now; I can still very vividly remember the issue. We had a Windows Server admin who was trying to apply a patch to a Windows server. We still couldn't figure out what he did, and neither could he explain it, but in reality the patch was not applied, and on top of that, the database got corrupted and brought operations completely down.

The client called immediately when they could not access the database, which sent us scrambling. Luckily, we had good hygiene around backing up the entire server, and we were able to restore everything in roughly 2 1/4 hours. The important lesson for us was to stop doing any manual patching and to look for an automated patching solution that can minimize human error.

Luckily, the customer was hosted by us as a single tenant rather than in a multi-tenant environment, so we were able to limit the damage to a single server, though that server was holding the database as well. Even though we had to pay damages for not keeping up our 99.99% uptime, the fact that we restored everything back to normal was the biggest saving grace.

Ryan is a Product Evangelist at Netwrix Corporation, writer, and presenter. He specializes in evangelizing cybersecurity and promoting the importance of visibility into IT changes and data access. As an author, Ryan focuses on IT security trends, surveys, and industry insights.