Tutorial – CDI Extension Programming


Ronald Steininger, Arne Limburg, and Mark Struberg delve into the world of portable CDI extensions

In the old days of
Enterprise Java, a lot of frameworks were created, each with a very
clear technological goal. Unfortunately these quickly became
overloaded and stuffed with tons of features to fulfill each and
every need. The CDI Expert Group aimed to avoid this problem with
the CDI specification by strictly focusing on the core problems of
dependency injection and defining a very powerful extension
mechanism for implementing additional functionality on top of it.
The CDI extension mechanism quickly became popular and a lot of
high-quality extensions are already available.

This mechanism is
called ‘portable CDI extensions’ because it is based on a
well-specified SPI which must be supported by every certified
standalone CDI container and EE 6 server. Thus a CDI extension
written on one CDI container will also run on all others: JBoss
Weld, Apache OpenWebBeans, Caucho Resin CanDI, IBM WebSphere 8,
Oracle Glassfish 3, Oracle Weblogic 12c or Apache TomEE.

Because of the
portable, vendor-independent way these extensions are implemented,
they can be reused with all CDI provider implementations. Projects
like Apache MyFaces CODI and JBoss Seam 3 provide collections of
“important”, commonly-needed extensions, letting CDI users easily
augment their applications with the extensions they need without
weighing down the core CDI implementation.

How does a CDI
Extension work?

The CDI extension
mechanism provides ways to hook custom code into the lifecycle of
the CDI container. To leverage this power in our own CDI extensions
we must therefore know the very basics of the CDI container
lifecycle itself. We will first give you a quick overview of this
process and dive into the details later in the article.

The first task a CDI container performs is to load
all CDI extensions. Next, all of the classes contained within JAR
files that have a META-INF/beans.xml marker file will also be
scanned. For each of these classes an AnnotatedType<T> will be constructed based on
the given class. This object contains metadata about all
annotations, constructors, methods, fields, etc. for the processed
class. This information can be modified by extensions. Later the
CDI container will expose this metadata in Bean<T> instances
used to manage contextual objects.

The container will
start all available contexts after all classes have been scanned
and all constraints have been verified.

Writing a CDI
Extension for a job scheduler

An example
integration of a job scheduling service will highlight what is
necessary to leverage the extension mechanism in a CDI environment.
In this article, we want to go through the steps needed to build
such a scheduler integration by writing a CDI extension.

Creating a CDI extension is as simple as writing a
class that implements the interface

javax.enterprise.inject.spi.Extension. This class will contain the functionality of our
extension. We will see in later chapters how this class can
interact with the CDI container.

The container will pick up our extension via the
ServiceLoader mechanism. Simply place a file named
javax.enterprise.inject.spi.Extension (the same name as the interface) within the
META-INF/services folder of your JAR and ensure this file contains
the fully-qualified name of the Extension implementation. The
container will look up the name of the class at runtime and
instantiate it via the default constructor (therefore the class
needs a default constructor).
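
What the container does with that file can be sketched with plain JDK code. The following is a simplified, hypothetical illustration of the lookup (real containers use java.util.ServiceLoader, which reads the same file format); the class and method names here are ours, not part of any CDI API:

```java
import java.io.BufferedReader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class ExtensionLookupSketch {

    // Reads a META-INF/services-style file: one fully-qualified class
    // name per line, lines starting with '#' are comments. Each listed
    // class is instantiated via its default constructor, which is why
    // an Extension class must provide one.
    static List<Object> loadListedClasses(BufferedReader serviceFile)
            throws Exception {
        List<Object> instances = new ArrayList<>();
        String line;
        while ((line = serviceFile.readLine()) != null) {
            String name = line.trim();
            if (name.isEmpty() || name.startsWith("#")) {
                continue;
            }
            instances.add(Class.forName(name)
                    .getDeclaredConstructor()
                    .newInstance());
        }
        return instances;
    }
}
```

For our extension, the file META-INF/services/javax.enterprise.inject.spi.Extension would contain a single line with the fully-qualified name of our Extension implementation.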

Before we delve
into the content of the extension, we will first look into Quartz,
the scheduler we want to integrate with, and how Quartz would be
used without our extension.

The classic way
to schedule jobs with Quartz

Quartz is an open-source job
scheduling service. The user defines “jobs” and lets Quartz know
when these jobs should be executed via so-called “triggers”. The
scheduler itself will then run these jobs following the given triggers.

The easiest way to use Quartz in a servlet
container is probably to use the

QuartzInitializerServlet Quartz provides. This load-on-startup servlet initializes
the scheduler automatically and adds a second servlet which
actually schedules the jobs. Listing 1 shows a typical code block
in that servlet which schedules a job named
MyJob to run every ten minutes. 

Listing 1

import static org.quartz.JobBuilder.*;
import static org.quartz.TriggerBuilder.*;
import static org.quartz.CronScheduleBuilder.*;

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
Scheduler scheduler = schedFact.getScheduler();

JobDetail job = newJob(MyJob.class)
  .build();
Trigger trigger = newTrigger()
  .withSchedule(cronSchedule("0 0/10 * * * ?"))
  .build();

scheduler.scheduleJob(job, trigger);
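
The cron expression "0 0/10 * * * ?" used above consists of six space-separated fields (Quartz also allows an optional seventh year field); "0/10" in the minutes field means every ten minutes starting at minute zero, and "?" stands for "no specific value". The following small helper, ours and purely illustrative (not part of Quartz), labels the fields:

```java
import java.util.Arrays;
import java.util.List;

public class CronFields {

    // The six Quartz cron fields, in order.
    static final List<String> FIELDS = Arrays.asList(
        "seconds", "minutes", "hours", "day-of-month", "month", "day-of-week");

    // Pairs each field name with the corresponding part of the expression.
    static String describe(String cron) {
        String[] parts = cron.trim().split("\\s+");
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < FIELDS.size() && i < parts.length; i++) {
            if (i > 0) {
                sb.append(' ');
            }
            sb.append(FIELDS.get(i)).append('=').append(parts[i]);
        }
        return sb.toString();
    }
}
```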

That’s a lot of
code just to schedule one job (and the code for that job is still
missing), but it will work just fine for simple use cases and a small
number of jobs that don’t change much.

As soon as the jobs get a bit more complicated,
there’s a problem: the jobs will not be aware of the CDI container,
which means that they cannot use the features provided by CDI. This
will be no problem for very simple jobs but won’t cut it as soon as
the user wants to use a simple
@Inject in one of the jobs or a service that uses
CDI. Some sort of integration is needed to make all the features
provided by CDI available to the job, and the best way to
accomplish this is by writing an Extension.

Schedule jobs the
CDI way

Let’s see how the
jobs used with our extension will look: 

  1. The job implements the
    java.lang.Runnable interface rather than the Job interface provided by Quartz.
    This way, the code is not Quartz-specific and the scheduler used by
    our extension can easily be switched later. The scheduler will call the
    run() method of this Runnable based on the schedule we define.
  2. It’s possible to use CDI features like injection or
    interceptors in these jobs.
  3. A @Scheduled annotation will tell our extension that this class
    defines a job that must be scheduled and what its schedule will be
    (Listing 2).

Listing 2

@Scheduled("0 0/10 * * * ?")
public class MyJob implements Runnable {
  @Inject
  private MyService service;

  public void run() {
    // the actual work of the job goes here
  }
}

That’s all that is
needed to convey the same information as the code example in Listing 1.

For this example, @Scheduled
is a simple annotation used to
set the schedule:

@Retention(RetentionPolicy.RUNTIME)
public @interface Scheduled {
 String value(); //the job's schedule
}

It would be
trivial to change this annotation to enable more advanced features
like setting the schedule via dynamic configuration rather than
hard-coding it in the job’s source; the annotation needs only to
hold the information necessary to let our extension decide how to
configure the job. Now it’s time to look at the extension itself.
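
A self-contained sketch of what the extension ultimately relies on: the annotation must carry @Retention(RUNTIME) so its value is visible via reflection at container boot. The ScheduledDemo wrapper and the scheduleOf helper below are ours, for illustration only:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class ScheduledDemo {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface Scheduled {
        String value(); // the job's schedule
    }

    @Scheduled("0 0/10 * * * ?")
    public static class MyJob implements Runnable {
        public void run() { }
    }

    // Conceptually what the extension does: look up the annotation
    // on a class and extract the configured schedule.
    static String scheduleOf(Class<?> clazz) {
        Scheduled s = clazz.getAnnotation(Scheduled.class);
        return s == null ? null : s.value();
    }
}
```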

Creating the CDI Extension

A CDI extension class implements the
javax.enterprise.inject.spi.Extension interface. This is just a marker interface and
thus does not define any methods itself. Instead of using a fixed
set of methods statically defined in an interface, the container
will fire a series of CDI Events during the container lifecycle.
The Extension can interact with the CDI container by defining
observer methods for such events. An observer method is a
non-static, non-final method with a parameter annotated with
@Observes; the type of that parameter is the
type of the observed event. It can not only gather information from
the CDI container this way; it can also modify this information and
even transmit back new information to the container. As a rule of
thumb, all things possible via CDI annotations can also be
performed programmatically in an extension. You can very easily add
or modify scopes, Interceptors, Decorators, Producer methods,
ObserverMethods, etc. The defined container lifecycle events are:

  • BeforeBeanDiscovery
  • ProcessAnnotatedType
  • ProcessInjectionTarget and ProcessProducer
  • ProcessBean and ProcessObserverMethod
  • AfterBeanDiscovery
  • AfterDeploymentValidation
  • BeforeShutdown
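
The events fire in the order listed. As a compact reminder of that ordering, here is a sketch using names only (the real event types live in javax.enterprise.inject.spi; this enum is ours):

```java
public class LifecyclePhases {

    // Container lifecycle events in firing order, as listed above.
    public enum Phase {
        BEFORE_BEAN_DISCOVERY,
        PROCESS_ANNOTATED_TYPE,     // fired once per scanned class
        PROCESS_INJECTION_TARGET,   // likewise ProcessProducer
        PROCESS_BEAN,               // likewise ProcessObserverMethod
        AFTER_BEAN_DISCOVERY,
        AFTER_DEPLOYMENT_VALIDATION,
        BEFORE_SHUTDOWN
    }
}
```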

The first article of this series described how to
observe custom CDI events. Extensions can observe the container
system events in exactly the same way. To find every class that has
the @Scheduled annotation, the extension should observe the
ProcessAnnotatedType event. This system event is fired for every class scanned
by the container during boot. For each such annotated class found, a
Quartz job and Quartz trigger can be created and started after the
container initialization is complete (Listing 3).

Listing 3

public void scheduleJob(@Observes ProcessAnnotatedType<?> pat) {
 AnnotatedType<?> t = pat.getAnnotatedType();
 Scheduled schedule = t.getAnnotation(Scheduled.class);
 if (schedule == null) {
   return; //no scheduled job, ignoring this class
 }
 if (!Runnable.class.isAssignableFrom(t.getJavaClass())) {
   LOG.error("Can't schedule job " + t);
   return;
 }
 Class<? extends Runnable> jobClass
   = t.getJavaClass().asSubclass(Runnable.class);
 JobDetail job = newJob(CdiJob.class)
   .usingJobData(CdiJob.JOB_CLASS_NAME, jobClass.getName())
   .build();
 Trigger trigger = newTrigger()
   .withSchedule(cronSchedule(schedule.value()))
   .build();
 scheduler.scheduleJob(job, trigger);
}

An observer for the BeforeBeanDiscovery
event which is fired before the
scanning process starts can be used to initialize the scheduler:

public void initScheduler(@Observes BeforeBeanDiscovery event)
    throws SchedulerException {
 scheduler = StdSchedulerFactory.getDefaultScheduler();
}

The scheduler also has to be started. This can be
done in an
AfterDeploymentValidation event observer. That event is fired after
the container has validated that there are no deployment problems,
so all jobs will be scheduled at this point. Additionally, the
BeanManager can be stored in the extension for later use (e.g. in the
CdiJob class).

public void startScheduler(@Observes AfterDeploymentValidation event,
                           BeanManager bm) {
 beanManager = bm;
 try {
   scheduler.start();
   LOG.info("Started scheduler.");
 } catch (SchedulerException se) {
   throw new RuntimeException(se);
 }
}

Finally, the shutdown()
method of the scheduler is
called in an observer of the BeforeShutdown event:

public void shutdownScheduler(@Observes BeforeShutdown event) {
 try {
   scheduler.shutdown();
 } catch (SchedulerException se) {
   throw new RuntimeException(se);
 }
}
The following class does the work needed, so that a CDI-managed
instance of the Runnable (our actual job) we defined is run as a
Quartz job. 

  1. It extracts the job class from the Quartz configuration.

  2. Before calling the run() method, it receives the actual CDI-managed
    instance from the BeanManager.

  3. After the run() method returns, the instance is destroyed (Listing 4).

Our extension sets up all scheduled jobs to use this class,
so that all of this happens in a manner completely transparent to
the user. Keep in mind that contexts like the
RequestContext and SessionContext are not active for this job (there is no active
request nor session). If the injected services need these contexts
because they depend on beans bound to those scopes, the
contexts can be started and stopped in the
@PostConstruct and @PreDestroy methods of the job. This is container-specific at
the moment, but work is underway within the Apache DeltaSpike
project to make this possible in a vendor-independent way.

Listing 4

public class CdiJob implements org.quartz.Job {

 public final static String JOB_CLASS_NAME = "CDI_JOB_CLASS_NAME";

 public void execute(JobExecutionContext context)
     throws JobExecutionException {
   JobDataMap jobData = context.getJobDetail().getJobDataMap();
   String className = jobData.getString(JOB_CLASS_NAME);
   Class<? extends Runnable> jobClass;
   try {
     jobClass = Class.forName(className).asSubclass(Runnable.class);
   } catch (ClassNotFoundException e) {
     throw new JobExecutionException(e);
   }
   BeanManager bm = QuartzExtension.getBeanManager();
   Set<Bean<?>> jobBeans = bm.getBeans(jobClass);
   @SuppressWarnings("unchecked")
   Bean<Runnable> jobBean = (Bean<Runnable>) bm.resolve(jobBeans);
   CreationalContext<Runnable> c = bm.createCreationalContext(jobBean);
   Runnable job = (Runnable) bm.getReference(jobBean, Runnable.class, c);
   try {
     job.run();
   } finally {
     jobBean.destroy(job, c);
   }
 }
}

One of the most important features of CDI is the possibility to
create extensions. With this mechanism, CDI can be extended in a
portable, vendor-independent way to provide new features (like
custom scopes), implement application-specific functionality (like
loading configuration from an external database) or to integrate
other technologies in a CDI-like fashion.

Author Info:

Ronald Steininger is a Senior Software Engineer for the Research
Group for Industrial Software (INSO) at the Vienna University of
Technology. He spent the last six years working with Java on both
rich clients and web applications. He and his colleagues have used CDI
and Java EE 6 for all of their projects since late 2009. He spends
his spare time working on his master’s thesis about the software
architecture of campus management systems.

Arne Limburg is Enterprise Architect at open knowledge GmbH in
Oldenburg, Germany. He is an
experienced developer, architect and trainer in the Enterprise
environment and also regularly involved in Android development. He
frequently speaks at conferences and conducts workshops in those
fields. He is also an active participant in open source projects,
e.g. as a committer of Apache OpenWebBeans and as initiator and
Project Lead of JPA Security.

Mark Struberg is a software architect with over 20 years of
programming experience. He has been working with Java since 1996
and is actively involved in open source projects in the Java and
Linux area. He is an Apache Software Foundation member and serves as
PMC and Committer for Apache OpenWebBeans, MyFaces, Maven, OpenJPA,
BVal, DeltaSpike and other projects. He is also a CDI Expert Group
member actively working on the specification. Mark works for the
Research Group for Industrial Software (INSO) at the Vienna
University of Technology.

This article originally appeared in Java Tech Journal: CDI. For
more articles of a CDI nature, download that issue.
