# Find the next non-NULL row in a series with SQL



Have you heard of our **Masterclass workshops**? JAXenter Masterclass consists of four intense workshops that provide comprehensive and up-to-date know-how on **advanced Java**, **reliability**, **SQL** and **microservice architecture**.

If you are interested in powering up your skills and learning from the absolute best, visit the JAXenter Masterclass today and find out more about our workshops!

But for now, here is a Masterclass special from your SQL trainer, **Lukas Eder**. Find all the information on his workshop here.

---

*This post was originally published over at jooq.org, a blog focusing on all things open source, Java and software development from the perspective of jOOQ.*

## Find the next non-NULL row in a series with SQL

I stumbled across this fun SQL question on Reddit recently. The question looks at a time series of data points where some events happen. For each event, we have the start time and the end time.

The desired output of the query should be an additional count column.

So, the rule is simple: whenever an event starts, we would like to know how many consecutive entries it takes until the event stops again.

Some observations and assumptions about the problem at hand:

- No two events will ever overlap
- The time series does not progress monotonously, i.e. even if most data points are 1h apart, there can be larger or smaller gaps between data points
- There are, however, no two identical timestamps in the series

How can we solve this problem?

## Create the data set, first

We’re going to be using PostgreSQL for this example, but it will work with any database that supports window functions, which is most databases these days.

In PostgreSQL, we can use the `VALUES()` clause to generate data in memory easily. For the sake of simplicity, we’re not going to use timestamps, but integer representations of the timestamps. I’ve included the same out-of-the-ordinary gap between 4 and 6:

```sql
values (1, 1,    null),
       (2, null, null),
       (3, null, null),
       (4, null, 1),
       (6, null, null),
       (7, null, null),
       (8, 1,    null),
       (9, null, 1)
```

If we run this statement (yes, this is a standalone statement in PostgreSQL!), then the database will simply echo back the values we’ve sent it:
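```
 column1 | column2 | column3
---------+---------+---------
       1 |       1 |
       2 |         |
       3 |         |
       4 |         |       1
       6 |         |
       7 |         |
       8 |       1 |
       9 |         |       1
(8 rows)
```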

## How to deal with non-monotonously growing series

The fact that `column1` is not growing monotonously means that we cannot use it / trust it as a means to calculate the length of an event. We need to calculate an additional column that has a guaranteed monotonously growing set of integers in it. The `ROW_NUMBER()` window function is perfect for that.

Consider this SQL statement:

```sql
with d(a, b, c) as (
  values (1, 1,    null),
         (2, null, null),
         (3, null, null),
         (4, null, 1),
         (6, null, null),
         (7, null, null),
         (8, 1,    null),
         (9, null, 1)
),
t as (
  select row_number() over (order by a) as rn, a, b, c
  from d
)
select *
from t;
```

The new `rn` column is a row number calculated for each row, based on the ordering of `a`. For simplicity, I’ve aliased:

- `a = timestamp`
- `b = start`
- `c = end`

The result of this query is:
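```
 rn | a | b | c
----+---+---+---
  1 | 1 | 1 |
  2 | 2 |   |
  3 | 3 |   |
  4 | 4 |   | 1
  5 | 6 |   |
  6 | 7 |   |
  7 | 8 | 1 |
  8 | 9 |   | 1
(8 rows)
```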

Nothing fancy yet.

## Now, how to use this `rn` column to find the length of an event?

Visually, we can get the idea quickly: an event’s length can be calculated using the formula `RN2 - RN1 + 1`.

We have two events:

- 4 – 1 + 1 = 4 (the first event starts at `rn = 1` and ends at `rn = 4`)
- 8 – 7 + 1 = 2 (the second event starts at `rn = 7` and ends at `rn = 8`)

So, for each starting point of an event at RN1, all we have to do is find the corresponding RN2 and run the arithmetic. This is quite a bit of syntax, but it isn’t so hard, so bear with me while I explain:

```sql
with d(a, b, c) as (
  values (1, 1,    null),
         (2, null, null),
         (3, null, null),
         (4, null, 1),
         (6, null, null),
         (7, null, null),
         (8, 1,    null),
         (9, null, 1)
),
t as (
  select row_number() over (order by a) as rn, a, b, c
  from d
)
-- Interesting bit here:
select
  a, b, c,
  case
    when b is not null
    then min(case when c is not null then rn end)
           over (order by rn
                 rows between 1 following
                          and unbounded following)
         - rn + 1
  end cnt
from t;
```

Let’s look at this new `cnt` column, step by step. First, the easy part:

**The CASE expression**

There’s a case expression that goes like this:
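Extracted from the complete query above, with the calculation elided:

```sql
case
  when b is not null
  then ... -- calculate something
end
```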

All this does is check if `b is not null`, and if this is true, then calculate something. Remember, `b = start`, so we’re putting a calculated value in the row where an event started. That was the requirement.

**The new window function**

So, what *do* we calculate there?

A window function that finds the minimum value over a window of data. That minimum value is RN2, the next row number value where the event ends. So, what do we put in the `min()` function to get that value?
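From the complete query above, this is the expression inside `min()`:

```sql
case
  when c is not null
  then rn
end
```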

Another case expression. When `c is not null`, we know the event has ended (remember, `c = end`). And if the event has ended, we want to find that row’s `rn` value. So that would be the minimum value of that case expression for all the rows *after* the row that started the event.

Now, we only need to specify that `OVER()` clause to form a window of all rows that *follow* the current row.
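In the complete query above, that clause reads:

```sql
over (
  order by rn
  rows between 1 following
           and unbounded following
)
```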

The window is ordered by `rn`, and it starts 1 row after the current row (`1 following`) and ends in infinity (`unbounded following`).

The only thing left to do now is do the arithmetic:
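Taken from the complete query above, the arithmetic is:

```sql
min(case when c is not null then rn end)
  over (order by rn
        rows between 1 following
                 and unbounded following)
- rn + 1
```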

This is a verbose way of calculating `RN2 - RN1 + 1`, and we’re doing that only in those rows that start an event. The result of the complete query above is now:
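```
 a | b | c | cnt
---+---+---+-----
 1 | 1 |   |   4
 2 |   |   |
 3 |   |   |
 4 |   | 1 |
 6 |   |   |
 7 |   |   |
 8 | 1 |   |   2
 9 |   | 1 |
(8 rows)
```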
