Time in a bottle

Why is my database slowing down?

Cory Isaacson

Learn how to take on the three enemies of database performance head-on, and stop your data flows congealing into a quagmire.

It’s no secret – as Big Data grows, performance degrades, often alarmingly so.

If this is such a common problem, it’s important to understand the underlying reasons for “database slowdown”, so you can identify the signs as early as possible and respond accordingly. Specifically, you need to know The 3 Enemies of Database Performance.

These are the most common issues I’ve seen in my career, and having a handle on them will help you avoid the situation in many cases.

The 3 Enemies of Database Performance

There are 3 big “enemies” of database performance that occur all too frequently. While most of us are familiar with these issues in the traditional RDBMS (Relational Database Management System) arena, they can happen just as easily with a poor data model design or incorrect approach using NoSQL, “NewSQL” or virtually any type of database storage engine.

In fact, I recently heard a story from an attendee at a meet-up telling us how a NoSQL engine ran one of his map-reduce jobs for 10 full days! So these 3 enemies can affect us all.

Enemy #1: Table Scans

Let’s start by covering the worst offender when it comes to defeating database performance: Table Scans.

A Table Scan takes place when a query requests a row (object, etc.) or, more frequently, a range of rows (objects) from the DBMS engine. If there is no available index to support the query, or if the DBMS optimizer doesn’t select an available index, then the engine has no choice but to perform a sequential scan of the entire data set from beginning to end (the table, or its equivalent structure).

The Table Scan often accounts for rapid performance degradation – the “hockey stick” graph I showed in last month’s article.

Let’s look at an example using an RDBMS SQL query. Assume we have the following table structure:

CREATE TABLE purchase_transaction (
    transaction_id INTEGER PRIMARY KEY,
    customer_id    INTEGER,
    order_id       INTEGER,
    order_date     DATE,
    order_amount   DECIMAL(10,2)
);

The only index we have on the table is the Primary Key (in a NoSQL store it would be the object key itself). Now let’s apply this SQL query to the table:

SELECT *
FROM purchase_transaction
WHERE order_date BETWEEN '2012-01-01' AND '2012-01-31'
  AND order_amount >= 1000.00
  AND order_amount <= 2000.00;

Innocent enough: we just want to find all of the purchase_transaction rows from the month of January, between $1,000 and $2,000. This table has no index on order_date or order_amount, so the DBMS must perform a Table Scan.

Things run fine for months; then one day we see database performance degrade precipitously (i.e., it goes “off the cliff”). Why does this cause the sudden degradation curve?

The reason is that the Table Scan was happening all along; it just wasn’t noticeable while the table was relatively small. At a certain point the table grew past the threshold where it fit comfortably in memory or the disk cache, and the engine started reading intensively from the disk drive itself. Further growth aggravated the issue, since ever more data had to be read from disk for each query, hitting the disk even harder. Given that traditional hard disks are roughly 100X slower than memory, you can see the effect this has. Now add multiple users trying to run the same query, and the entire DBMS engine quickly bogs down, dedicating all of its resources to this one offending query.

In my experience, any routine query that takes more than 1 second to execute will be the eventual cause of big trouble, so it’s important to monitor for this condition.
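As a sketch of the standard remedy (the index name here is invented, and the EXPLAIN syntax shown is PostgreSQL’s; other engines have equivalents), a composite index covering both predicates lets the optimizer avoid the scan, and inspecting the query plan confirms it:

-- Hypothetical composite index covering both filter columns
CREATE INDEX idx_pt_date_amount
    ON purchase_transaction (order_date, order_amount);

-- Ask the engine which plan it chose; in PostgreSQL:
EXPLAIN ANALYZE
SELECT *
FROM purchase_transaction
WHERE order_date BETWEEN '2012-01-01' AND '2012-01-31'
  AND order_amount >= 1000.00
  AND order_amount <= 2000.00;
-- "Seq Scan" in the output means a Table Scan; an "Index Scan"
-- (or "Bitmap Index Scan") means the new index is doing its job.

A quick plan check like this, before a query ever reaches production, is cheap insurance against the hockey-stick curve.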

Enemy #2: Concurrency Contention

Concurrency Contention is a close cousin to Table Scans; in fact, Table Scans often lead to this problem as well.

All databases need to serve multiple requests and users; after all, Big Data is normally driven by large numbers of concurrent users of a given application. And for most organizations more users means more revenue and success, which is exactly what we desire from a business perspective.

Concurrency Contention can take many forms; the essence is that too many users are competing for the same resources, and the database can easily “lock up” waiting to serve requests. Here are some common issues that result in Concurrency Contention:

  • Traditional RDBMS engines: these are multi-threaded, and designed to serve multiple requests at the same time. This is great, but of course we also want transactional integrity. To meet this objective, the engine will lock rows for exclusive write control (or sometimes, even worse, an entire table!). When rows are locked by one user or connection, other users cannot read those same rows. The most common cause of this situation is the use of database transactions that span too long a time window. For example, imagine a single transaction that contains, mid-stream, a read query that performs a Table Scan (or, even worse, an UPDATE that does the same). That is a certain recipe for disaster, as resources will be locked for the entire duration of the offending long-running statement (a sketch of this anti-pattern follows the list).
  • Today, most NoSQL engines avoid this type of contention because they are single-threaded implementations. In other words, they are normally very fast, but they serve only one request at a time. This works very well in most cases, but all it takes is one offending query or operation that consumes a few seconds, called by 100s or 1000s of requests, to bring the entire engine to its knees in a hurry.
  • Another common source of contention is running out of available connections. Again, this is most common with an RDBMS, where there are hard limits on how many simultaneous connections the engine will accept. Exceed that limit, and application processes are denied access immediately.
  • The general rule for avoiding contention is to limit the size of your transactions, and again to be sure that all query and write statements are efficient. The 1-second maximum execution time is a good general rule of thumb (of course, in some extreme high-volume systems 1 second is far too long, but the principle is the same).
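To make the first bullet above concrete, here is a minimal sketch of the anti-pattern and its fix, in generic SQL (the account table, its columns, and the values are hypothetical, invented purely for illustration):

-- Anti-pattern: a slow read inside a write transaction. The row
-- locked by the UPDATE stays locked until COMMIT, so every other
-- connection touching account 42 waits out the Table Scan below.
BEGIN;
UPDATE account SET balance = balance - 100.00 WHERE account_id = 42;
SELECT SUM(order_amount)        -- unindexed range query: Table Scan
FROM purchase_transaction
WHERE order_amount >= 1000.00;
COMMIT;

-- Better: run the slow read first, outside any transaction, then
-- keep the write transaction as short as possible.
SELECT SUM(order_amount)
FROM purchase_transaction
WHERE order_amount >= 1000.00;

BEGIN;
UPDATE account SET balance = balance - 100.00 WHERE account_id = 42;
COMMIT;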

Enemy #3: Slow Writes

This “enemy” is also deceptive, and sometimes hard to predict or plan for. As any single table or data store grows in size, it typically exhibits the “hockey stick” curve for write speeds. The usual cause is database indexes. In traditional RDBMS engines this has long been a culprit, especially when multiple indexes are implemented on a single large table.

The most common index type is the B-Tree, which requires more computational and disk resources as the index “tree” grows in depth. The result is a dramatic slowdown in writes to the indexed table. As an aside, I’ve noticed that the NoSQL and “NewSQL” engines are already adding features like indexes to meet the needs of developers. My prediction is that the various types of DBMS engines will become more similar over time, given that the extensive RDBMS feature set evolved from customer demand over the years. (The details of how B-Tree indexes work will be the subject of a future article.)
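To see why writes degrade, consider what a single INSERT costs once several indexes exist (the index names below are hypothetical, added purely for illustration):

-- Three hypothetical secondary indexes on our example table
CREATE INDEX idx_pt_customer ON purchase_transaction (customer_id);
CREATE INDEX idx_pt_order    ON purchase_transaction (order_id);
CREATE INDEX idx_pt_date     ON purchase_transaction (order_date);

-- This one INSERT now updates four B-Trees: the primary key plus
-- all three secondary indexes. Each is a tree traversal, and each
-- may trigger page splits as the trees grow deeper.
INSERT INTO purchase_transaction
VALUES (1001, 42, 7, '2012-01-15', 1500.00);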

The solution in this case is to limit the use of indexes to what is really required, and to use caution as your tables or data sets become exceedingly large.
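As one way to put this into practice (assuming PostgreSQL, whose statistics collector tracks index usage; other engines expose similar counters), you can look for indexes that no query ever uses:

-- Indexes that have never been scanned are candidates to drop
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'purchase_transaction'
ORDER BY idx_scan;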

Wrapping it up

In this article, I’ve covered the 3 Enemies of Database Performance. Be sure to watch for them, particularly the telltale signs of increasing disk I/O and the initial warning signs of database slowdown. Catching the problem early is always the best solution, and with a watchful eye, hopefully you will be able to do just that.

Author
Cory Isaacson
Cory Isaacson is CEO/CTO of CodeFutures Corporation, maker of dbShards, a leading database scalability suite.