Last but not least

Community post: The final keyword in Java


Alex Collins on why the ‘final’ keyword is useful for reducing state mutation, albeit with caveats.

In a new feature for 2013, we’re highlighting blog posts by
members of the community. This week’s post comes from Alex Collins, a UK-based Java
developer by day and a Clojure, Python, Grails or Android hacker by
night. The original post can be found here.

Whilst many will frown at the use of the final keyword in Java code, I find it a breath of fresh air. Perhaps it’s that I tend to lean on the side of caution, conservatism and safety when it comes to code: immutability being one of many examples of such. In this post I argue that using it is better than not, much as avoidance of doubt lends itself to stronger solutions.

Immutability is therefore the strongest reason I cite when someone asks why I declare as much as possible final in my code; but, as with many things, the use of final does have its caveats.


final Variables

According to the Java Language Specification:

A variable can be declared final. A final variable may only be
assigned to once. Declaring a variable final can serve as useful
documentation that its value will not change and can help avoid
programming errors.

It is a compile-time error if a final variable is assigned
to unless it is definitely unassigned (§16) immediately prior to
the assignment.

This means that whether the variable has local or class scope, if you declare it final it must be assigned exactly once (according to certain rules), and the compiler complains on subsequent attempts to assign it.

To me, this is useful because if something cannot change then why
let it? If I can let the compiler do something for me reliably and
consistently, then why shouldn’t I? Again: if a value shouldn’t
change then make sure it doesn’t.
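A minimal sketch of what the compiler enforces (the class and variable names here are my own, for illustration):

public class FinalDemo {
    public static void main(String[] args) {
        final int maxRetries = 3;  // assigned exactly once
        // maxRetries = 4;         // compile-time error: cannot assign a value to final variable

        final int threshold;       // a "blank final": definitely unassigned here
        threshold = 10;            // the first assignment is legal
        // threshold = 20;         // compile-time error: might already have been assigned
    }
}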

Readability

Some might find the following code more readable than an equivalent
with the use of final:


public float calculateAverageAge(Collection&lt;Person&gt; personCol) {
    float ageSum = 0;
    for (Person p : personCol) {
        ageSum += p.getAge();
    }
    return ageSum / personCol.size();
}

Yet, when we compare, there’s little difference:
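public float calculateAverageAge(final Collection&lt;Person&gt; personCol) {
    float ageSum = 0; // reassigned in the loop, so it cannot be declared final
    for (final Person p : personCol) {
        ageSum += p.getAge();
    }
    return ageSum / personCol.size();
}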



With that said, the old adage “code is read many more times than it’s written” is a strong case against; though I personally feel it’s not actually that unreadable: but I risk venturing into an argument of subjectivity, which we all know is futile. Perhaps it’s only less readable because you’re not used to it? Perhaps if it were commonplace we’d never have a problem. If it’s a screen real estate issue: final has the same number of letters as the C keyword const, and there are those who argue that using const is good practice too; not to mention that we’re no longer in the days of 80-character-wide terminals.

Immutability

I think it’s fair to state that software is complex. Reducing
complexity where possible makes it easier to reason about
solutions. Solutions that are easier to reason about are therefore
easier to implement at a programming level.

One assault on complexity in any given codebase is the mutation of state: changing properties of some entity that some aspect of the system’s logic relies on leads to a growth in what has to be considered. One could argue, then, that reducing state mutation reduces complexity, and that this therefore leads to an easier solution. It is here where final can aid. My reference cannot change. Recursively, the properties within that object reference cannot change, which in turn means I have less surprise and less to reason about when using those objects. It is a solution that can cascade. [Update: as stated on reddit, the final keyword does not extend to the fields of an object instance, unlike C’s const on a struct. I omitted this deliberately for a follow-up post!] If you don’t agree: replace “easier” with “simpler” and reconsider.
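To make that caveat concrete, here is a minimal sketch (the Person class is hypothetical):

class Person {
    private int age;
    Person(int age) { this.age = age; }
    void setAge(int age) { this.age = age; }
    int getAge() { return age; }
}

public class FinalReferenceDemo {
    public static void main(String[] args) {
        final Person p = new Person(30);
        // p = new Person(40);         // compile-time error: the reference is final
        p.setAge(40);                   // legal: final does not freeze the object's own fields
        System.out.println(p.getAge()); // prints 40
    }
}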

Again: if something cannot change then why let it? In a block of code that declares five references where only one can change (be reassigned), it is that one we have to worry about. It is that one that the unit test should cover in more cases. It is that one that the new programmer reading the code six months after it was written has to watch.
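A sketch of that situation, with hypothetical names, where only the accumulator is reassignable:

final List&lt;Person&gt; people = loadPeople(); // loadPeople() is assumed, for illustration
final float passMark = 18.0f;
final String label = "average age";
final int count = people.size();
float total = 0;                           // the one mutable variable: the one to watch
for (final Person p : people) {
    total += p.getAge();
}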

Functional Programming

In the functional programming world the idea of purity is a fundamental tenet. Functions are pure if they have no side effects. They are deterministic in nature: the same input produces the same output. The OO world, in contrast, has no such practice, at least not in the same form.
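A minimal sketch of the distinction in Java terms (both methods are illustrative):

// Pure: the result depends only on the argument; no external state is read or written.
static int doubleIt(int x) {
    return x * 2;
}

// Impure: reads and mutates state outside the function, so repeated calls differ.
static int counter = 0;
static int nextId() {
    return ++counter;
}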

To satisfy encapsulation we have mutators which provide an
interface to some property of an object. Coupled with abstraction
these allow the internal structure of that object to change without
forcing change on its clients; but herein lies the problem. Compile-time changes are but one fraction of change. Of equal concern are
the semantics of that dependency. If the code doesn’t have to be
recompiled then great, but what about the actual intent behind that
link? Has the logic changed? What impact does that have? How can I
tell?
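As a sketch of that problem, consider a hypothetical mutator whose signature survives while its semantics shift:

class Account {
    private long balanceInPence;       // the internal representation may change freely

    public void setBalance(double pounds) {
        // Clients compile against this signature; but the rounding rule,
        // validation, or side effects here can change without them noticing.
        this.balanceInPence = Math.round(pounds * 100);
    }
}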

The answer in either case is that you cannot tell without testing
or without analysing the code. I don’t see that as a huge problem:
with change we need to assert its validity. If we can encourage
code to “do what it says on the tin” then we have simpler
solutions. If side effects are non-existent then we have another
string to our bow of simplicity. K.I.S.S., right?

Conclusion

It’s never easy to argue a case for doing something absolute in the
professional software development “arena”. One learns either the
hard way or through teamwork that these absolute rules are few and
far between. Similarly, applying a concept or pattern blindly or
where it is inappropriate for the solution leads to problems.

Whilst I’d argue — as I have above — that reducing complexity is
always something a solution should head towards, sometimes it’s
unavoidable. Why then, would we not reduce it where we can, letting
us spend energy on the elements that are complex, on the components
that cannot be diluted further?
