I got hacked 💥 and blew up prod!


I have been around the block enough times in my roughly 15-year career to have broken things, at times quite badly.

Here are some stupid mistakes I have made, and how to easily avoid them:

Running SQL in production

Sometimes you must run destructive statements like UPDATE or DELETE in production. The problem comes when you forget the WHERE clause 😭.

Luckily, when I mistakenly did this, it was data I could get back easily from a backup or some log file.

How to avoid:

  1. Test in a local DB first (obvious, right?).

  2. Use transactions: changes made inside a transaction are not permanent until you commit, and worst-case scenario, you can always roll back.

  3. Write your statement backward, i.e. start with "WHERE x = y". That way, if you accidentally press Enter, the statement will fail or only apply to a subset of rows rather than the whole table.
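The transaction tip can be sketched with SQLite (the table and data here are hypothetical, just to make the point):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, active INTEGER)")
conn.executemany("INSERT INTO users (active) VALUES (?)", [(1,), (1,), (0,)])
conn.commit()

# Run the destructive statement inside a transaction
# (Python's sqlite3 opens one implicitly before DML).
cur = conn.execute("DELETE FROM users")  # oops: forgot the WHERE clause
print(cur.rowcount)  # 3 rows would be gone...

# ...but nothing is permanent until commit, so we can roll back.
conn.rollback()
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # all 3 rows are still there
```

The same habit works at the psql/mysql prompt: type BEGIN first, run the statement, sanity-check the affected row count, and only then COMMIT.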

Deleting volumes when stopping containers

I once ran a container with "docker run --rm"! Suffice it to say, this was a bad idea with attached volumes: Docker destroys the container's anonymous volumes when the container exits.

How to avoid:

  1. Avoid the "--rm" flag unless your containers are stateless.

  2. Create the volume using "docker volume create" and then bind the volume to the container using "-v".

Leaving debug mode on in production

😱 This is a really stupid rookie move, but alas, it happened, and it exposed API keys among other sensitive information. Lucky for me, it was just a side project I pushed up late one evening.

My DB got hacked, but it was a test project with no real sensitive data 😮‍💨

How to avoid: Stop pushing code at night! 😁
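More seriously, a structural safeguard is to make debug mode opt-in: default it to off and only enable it via an environment variable. A minimal sketch (the `APP_DEBUG` variable name is an assumption; adapt it to your framework):

```python
import os

# Debug is opt-in: it stays off unless the environment explicitly enables it,
# so a fresh production deploy can never ship with debug mode on by accident.
DEBUG = os.environ.get("APP_DEBUG", "0") == "1"

print(DEBUG)  # False unless APP_DEBUG=1 is set in the environment
```

With this pattern, forgetting to set the variable fails safe instead of failing open.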

Taking down production with unoptimized scripts

When running maintenance jobs that iterate over records in a large DB, paginating with LIMIT/OFFSET is not always a good idea.

I once (or maybe more than once) ran a script in production that looped through millions of rows to perform some kind of update or cleanup operation.

This worked fine until the offset grew large. OFFSET pagination slows down as it goes, because the database must scan and discard every skipped row; this, in turn, kept too many connections open and hogged memory.

As you can imagine, this maxed out the available DB connections and locked some rows.

How to avoid:

  1. Use primary-key (keyset) pagination instead: keep track of the last row ID processed and fetch the next batch with "WHERE id > last_id ORDER BY id".

  2. Implement proper connection pooling: keep a persistent connection for as long as possible instead of spawning a new one on every iteration.
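Both tips together look roughly like this, sketched against SQLite with a hypothetical `items` table: each batch seeks past the last processed ID over one reused connection, instead of an ever-growing OFFSET on fresh connections:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # one persistent connection for the whole job
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, done INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO items (done) VALUES (?)", [(0,)] * 1000)
conn.commit()

BATCH = 100
last_id = 0          # start before the first row
processed = 0

while True:
    # Seek past the last processed ID; this uses the primary-key index,
    # unlike OFFSET, which scans and discards every skipped row.
    rows = conn.execute(
        "SELECT id FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, BATCH),
    ).fetchall()
    if not rows:
        break
    ids = [r[0] for r in rows]
    conn.executemany("UPDATE items SET done = 1 WHERE id = ?", [(i,) for i in ids])
    conn.commit()    # commit per batch to keep locks short-lived
    last_id = ids[-1]
    processed += len(ids)

print(processed)  # 1000
```

Committing per batch also means a crash mid-job loses at most one batch of work, and the `id > last_id` cursor lets you resume where you left off.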
