Eclipse Modeling Framework – Interview with Ed Merks


A conversation about the Eclipse Modeling Framework, perspectives on modeling beyond the scope of software development, the cooperation with Microsoft, and the relation between modeling and coding.

Ed, you are the lead of the Eclipse Modeling Framework
and of the Eclipse Modeling Project. What’s your exact role in this
context and what’s the difference between those two
projects?

The Eclipse Modeling Framework (EMF) project is a subproject of
the top-level Eclipse Modeling project.  EMF has been hosted
at Eclipse since 2002, and was previously part of the top-level
Tools project.  I’ve been the technical lead for EMF since its
inception.  In that role, I am a committer, i.e., I have
permission to commit changes to the CVS repository for EMF, and I
oversee all the other EMF committers’ activities.  EMF
is used by a large and growing number of other projects at Eclipse,
e.g., XML Schema Definition (XSD), Unified Modeling Language (UML),
and Web Tools project (WTP), so it’s important that we focus on
quality and stability whenever we work on enhancements that make
EMF ever more flexible and efficient. Over time, related projects
such as the Graphical Modeling Framework project (GMF) and the
Generative Modeling Tools project (GMT) began to appear and as such
it quickly became apparent that a top-level Modeling project to
oversee all these related subprojects would be valuable.  Rich
Gronback of Borland and I, from IBM at that time, became the
Project Management Committee (PMC) co-leads of the new umbrella
project.  Think of the Modeling project as an onion with many
layers and EMF at its core. 

In the PMC lead role, Rich and I participate in the Eclipse
Architecture Council and the Eclipse Planning Council, hold monthly
open meetings with other modeling project leads to discuss issues
that affect all our projects, as well as perform various other
project management activities.  The EMF lead role is primarily
a technical role while the PMC lead role is primarily a management
role. I am also an elected Committer Representative on the Eclipse
Board of Directors, and as such I have influence over a wide range
of activities at Eclipse, from the technical minutiae of EMF to
the high-level direction of the Eclipse Foundation.  Eclipse
is an exciting place to be and one of the really great things for
me is that all these roles are mine as an individual, i.e., even
though I left IBM in July, after 16 years, and am now working
closely with the innovative people at itemis AG, my role at Eclipse
hasn’t changed at all.

You’ve done this for a very long time, and with each year
EMF has become even more successful. What do you think is key to the
success of EMF?

There are several factors that were key to EMF’s success. 
EMF started out internally at IBM as an implementation of the
Object Management Group’s (OMG) Meta Object Facility (MOF). 
IBM projects were mandated to use it as a way to ensure that the
large number of geographically distributed teams would produce an
integrated cohesive collection of interrelated models. 
Certainly it went a long way towards delivering on that promise,
but the MOF model was very large and complex and the code
generation patterns were focused on keeping generated and
hand-written code separate and hence produced vast quantities of stilted
code that looked nothing like what one would produce with a clean
hand-written design.  As a whole, it was perceived as complex,
bloated, and slow so clients were growing increasingly
unhappy. 

Frank Budinsky and I were charged with the task of cleaning
house.  We drastically simplified the MOF meta model, i.e.,
Ecore, rewrote the runtime with a maniacal focus on simplicity and
performance, and redesigned the generator patterns to produce
simple clean code that supports merging so clients could safely mix
generated and hand-written code.  Our Ecore simplifications
eventually led to the OMG’s splitting MOF into Essential MOF (EMOF)
and Complete MOF (CMOF); Ecore is effectively isomorphic to EMOF so
we can read and write standard EMOF serializations.  The
runtime jar footprint was reduced by 2/3 and the performance of the
generated code was arguably optimal, e.g., reflective lookup of the
value of some feature of an object is faster than a hash map
lookup.  All this is to say that flexibility, quality, and
performance are the essential starting point on the road to
success.
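The reflective-lookup claim can be made concrete with a small sketch in plain Java. This is a conceptual illustration of the generator pattern, not actual EMF code: each structural feature gets a compile-time integer ID, and the generated accessor dispatches on it with a switch, which avoids hashing entirely.

```java
// Conceptual sketch of EMF-style reflective access (not actual EMF code):
// each structural feature has a compile-time integer ID, and the generated
// eGet dispatches on it with a switch -- no hashing, no key objects.
class Book {
    static final int TITLE = 0;  // feature IDs assigned by the generator
    static final int PAGES = 1;

    private String title = "Untitled";
    private int pages;

    // Reflective lookup: a switch on a dense feature ID range compiles to a
    // jump table, which is why it can beat a HashMap<String, Object> lookup.
    Object eGet(int featureId) {
        switch (featureId) {
            case TITLE: return title;
            case PAGES: return pages;
            default: throw new IllegalArgumentException("Unknown feature " + featureId);
        }
    }

    void eSet(int featureId, Object value) {
        switch (featureId) {
            case TITLE: title = (String) value; return;
            case PAGES: pages = (Integer) value; return;
            default: throw new IllegalArgumentException("Unknown feature " + featureId);
        }
    }
}
```

In real EMF, `EObject.eGet` takes an `EStructuralFeature` and the generated implementation delegates to a switch like this one.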

Starting the EMF work in a context where there was a great deal
of negative perception, much of it in fact justified, certainly
made the task way more of a challenge.  People don’t change
their perceptions easily even when what’s in front of them changes
drastically.  It’s easy to ruin a good reputation but very
hard to fix a bad one.  Also, when you provide foundation
infrastructure, it’s very easy to become a scapegoat for bad
designs built on top.  For example, a design with lists that
contain a million elements probably isn’t going to perform well no
matter whether you use EMF or not.  It’s unfortunately also
the case that there are a large number of misconceptions around
modeling in general.  It’s taken a great many years to get
more people to see the light and that task seems never
ending.  In any case, overcoming the challenges of the past
leaves me feeling empowered: nothing will stand in the way for
long.  In other words, determination in the face of adversity
is necessary for maintaining your pace on the road to success;
perhaps adversity itself is necessary.  Show the people who
say it can’t be done that in fact it can; what they’re really
saying is they don’t know how to do it, so you need to show them
the way.

Finally, do not underestimate the power of your community. 
Listen to them carefully and learn from them.  Your success is
intertwined with theirs.  Listen attentively to their
problems and use their collective voice as a guide for where to
focus your efforts, though keep in mind that you lead the
community, not the other way around, i.e., pandering to every whim
will get you nowhere.   Demonstrate your resolve with
your actions.  Make fixing bugs quickly your number one
priority.  How you manage your communications will reflect on
you personally and on your project generally.  At Eclipse
Summit Europe some Ph.D students from Munich said to me, “All our
students know who you are because you answer their questions on the
newsgroup.”  “When we come to the EMF newsgroup and you answer
their questions, they know you care about their problems, but when
they go to some other newsgroup and they don’t get an answer, they
don’t feel welcome.”  Perception is reality.

If we think about Modeling from a software development
perspective it’s all about model driven software development. Do
you know some interesting or maybe “exotic” projects in some
industries or maybe in academic research, or somewhere else, which
use Eclipse modeling technologies – but have nothing to do with
software development?

An interesting example that comes to mind is Daniel Ford’s work
at IBM Research on the Spatiotemporal Epidemiological Modeler
<http://wiki.eclipse.org/index.php/STEM>
(STEM), which uses EMF to model various factors involved in
the spread of disease, e.g., weather or traffic patterns,
incubation times, infection rates, and so on, and then composes
those models to simulate the likely progression of some
hypothetical disease event.  It can be used by policy makers
to help prepare for events such as a potential global flu pandemic
or bioterrorist attacks.  EMF not only helps produce software
more effectively, it even helps prevent disease and makes the world
safer.  If only it could weed my garden!

Some years ago there was that big vision of Model Driven
Architecture, driven by the OMG and the software tools industry.
The idea was based on a strong belief that it’s possible to model
an entire software system on a highly abstract level and then
deduce all the relevant technical artifacts by just pressing a
button. Why did this vision fail?

Perhaps it hasn’t failed at all and we’re actually still on that
path.  Definitely I look at MDA as having promised way too
much way too soon and I hold that unreasonable level of hype
responsible for many of the negative misconceptions that continue
to cloud the future of modeling.  Eclipse modeling has been
successful because we’ve stayed focused by starting small yet
thinking big.  We built a solid core meta model first, i.e.,
Ecore is the de facto reference implementation of EMOF, and then we
built a stack of models on top, e.g., UML, XSD, OCL, BPMN, and so
on.  If you think about it, much of the OMG’s landscape is
gradually surfacing at Eclipse; perhaps the OMG looks at Eclipse
and sees it as a demonstration of the success of their vision. I
wouldn’t argue with that, though personally I think there will
continue to be a need for writing programs in general purpose
programming languages far into the foreseeable future; I see
modeling as complementary to that effort.  As we continue to
find better ways to abstract, domain specific languages (DSLs) will
become more and more attractive.  But I don’t think it’s so
important to anticipate where ultimately we will end up. 
Instead we should focus on making incremental progress in the right
direction.

What do you think about today’s role of the OMG and its
modeling standards?

I think the OMG needs to be forward looking.  The world is
rapidly changing.  Open source is a powerful force and the
individuals who drive it have influence beyond what might have
been anticipated.  Reference implementations speak louder than
words.  Not only that, they’re greener, i.e., they kill far
fewer trees because people aren’t tempted to print them on
paper.  Standards bodies can all too easily fall into a
political trap with committee efforts that are full of compromises
that ultimately produce compromised results.  In the worst
case, standards can become a weapon that stagnates innovation by
excluding the small agile players and enforcing mediocrity. 
Standards ought to focus on codifying best practices.  I see
open source as the vehicle for establishing those innovative best
practices and for demonstrating the soundness and efficacy of the
evolving standards.

It seems that innovation in the modeling space is now
driven by projects which ship real code – like EMF etc. – and not
by organizations which care for standards. Do you see a general
shift from specs to code?

Yes, that’s exactly the point.  Just as the horse leads the
cart,  innovation comes first and standards follow.  At a
recent meeting with people interested in establishing an industry
vertical working group at Eclipse I suggested that they focus on
building a living breathing reference implementation to establish a
de facto standard and that they do this by sending their best
software engineers rather than their most skilled political
engineers.  Real developers writing real code are going to
focus on real problems and solve them for real.  A reality
check is what’s badly needed.

Microsoft recently joined the OMG. At the same time they
are investing into something called Oslo, which pretty much looks
like the .Net version of Eclipse modeling. What do you think about
these initiatives? How could Oslo influence EMF or vice
versa?

I’m still expecting to start a dialog with folks like Don
Box and Douglas Purdy of Microsoft.  At Eclipse Summit Europe
I had the good fortune to meet Steve Sfartz who subsequently
introduced me to Jean-Marc Prieur at MD Day in Paris. 
Jean-Marc and I used that opportunity to chat about our common
interest in modeling and he gave me an extensive demo of
Microsoft’s graphical DSL technology.  I compare the Microsoft
and Eclipse efforts as follows.  Microsoft’s graphical DSL
technology is the analog of generated EMF models augmented by GMF;
as far as I understand it, the Microsoft domain meta model exists
only at development time, not at runtime, so there is no MOF-like
reflection support. Microsoft’s textual DSL technology is the
analog of Xtext (MGrammar), Ecore (MSchema), and
EObject (MGraph). So while at Eclipse we provide Ecore/EMOF
as the common standards-based meta model to support both graphical
and textual DSLs, and hence provide model-based abstract reflection
for all domain models, at Microsoft there are two different
technologies to cover these two closely related aspects. I’m
confident that this provides Eclipse with a very strong technical
foundation that will be hard to beat. 

In the end though, the success of modeling is one of my personal
goals, so I would be very happy to see Microsoft’s effort be a
great success.  I believe it will help revive the North
American market’s appetite for modeling.  Already analysts
are asking questions about Eclipse’s established technology as they
begin to review Microsoft’s latest developments.  Many of us
believe strongly that modeling is a key technology for moving the
software industry forward, so those of us with that vision ought to
work together.

If we talk about model driven software development today
it’s all about pragmatic approaches. How can people learn where
MDSD makes sense and where it doesn’t?

Definitely, it’s the old saying about using the right tool for
the job.  I’d have a hard time thinking of anything
significant where modeling wouldn’t play at least a small
part.  That’s not because I’m a zealot, though some might
disagree; it’s because all data can be, and I would argue should
be, modeled. Because most software is focused on manipulating data
it follows that models play a ubiquitous role.  As
practitioners gain experience with the tools and technology at
their disposal, they’ll  learn how to apply them more
effectively.  It’s better to have tried and failed than never
to have tried at all. We learn as much from our failures as our
successes.

Regarding productivity in software development we see
these days a big trend towards textual domain specific languages.
How do you think DSLs and the world of models relate to each
other?

I only see models!  At Eclipse Summit Europe, Dave Thomas’s
provocative keynote speech focused on many of the things he sees
as technological barriers that are holding our industry back. 
For example, he railed about the complexity of modeling and the
horrors of meta meta nonsense, yet he also talked about DSLs as one
of the key technologies to lead us into a brave new world.  I
found that incongruous, so I asked him afterwards, “What’s the
difference between a model and a DSL?”  He replied, “They’re
the same thing.”  Darn, I thought I had him trapped, but he’s
too wily for me.  I think back a few more years, and I recall
someone at IBM telling me, “Ecore isn’t a language, it’s only a
model.”  I was taken aback so I had to ask, “Why isn’t Ecore a
language?”  I was told, “Because it doesn’t have a concrete
syntax, and the XML serialization doesn’t count.” He anticipated my
next question! According to that reasoning, as soon as Chris Daly
defined the Emfatic syntax for Ecore, it became a language. It
seems a superficial point of view to me.
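For readers who haven’t seen it, Emfatic gives Ecore the textual concrete syntax that was supposedly missing; a small model might look roughly like this (an illustrative sketch of the notation, not an example from the interview):

```
// A small Ecore model in (approximate) Emfatic syntax
@namespace(uri="http://www.example.org/library", prefix="lib")
package library;

class Library {
  val Book[*] books;   // containment reference
}

class Book {
  attr String title;   // attribute
  attr int pages;
}
```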

At MD Day I made the comment that XML is a very poor excuse for
human readable syntax and for that purpose it sucks.  It’s
great for machine interchange and it’s great that developers can
avoid writing lexers and parsers, but let’s not forget the human
element in all this.  People applauded, so I’m not the only
one who feels this way.  This is why I’m so happy to see
things like Xtext and Oslo’s MGrammar emerging from the
darkness.  I can only hope that we pull back from the XML
nuclear winter that has swept the planet.  We should aim to
please humans not machines. 

I love languages a lot.  I always have. I love XML too, but
I think it’s been loved almost to death.  I am confident that
modeling technology like Xtext will help us quickly create all the
new specialized languages that are needed.

What are the main unresolved problems which the modeling
community has to face?

Problems, we have no problems.  Don’t open that
closet!  The biggest problem I can see close to home is the
whole issue around scalability. Of course once people have their
beautiful models with clear simple APIs for accessing them, they
start to throw more and more data at them.  And of course they
can always throw in so much data that they run out of room for it
all.  So we’re looking at ways to reduce instance footprint,
but clearly, unless you can reduce the footprint to zero bytes, it
will always be possible to produce so much data you run out of
memory.  That’s where I think some exciting technologies like
Connected Data Objects (CDO) can play a major role.  With CDO,
an object acts just as a facade with all data access delegated to a
backing store (typically hosted on a server), which in turn is
responsible for managing the data.  The facade objects carry
no hard object references and can and will be garbage collected
when the application no longer holds references to them.  Eike
Stepper wrote a great blog post titled How Scalable are my Models
<http://thegordian.blogspot.com/2008/11/how-scalable-are-my-models.html>
that’s well worth reading.
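The facade idea can be sketched in plain Java. This is a simplified illustration of the mechanism Ed describes, not CDO’s actual API: the facade carries only an ID, and every data access is delegated to a backing store, so the facade holds no hard references to the underlying state.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the CDO-style facade idea (not CDO's actual API).
// In CDO the store would typically live on a server behind a view/session.
class Store {
    private final Map<Long, Map<String, Object>> revisions = new HashMap<>();

    void put(long id, String feature, Object value) {
        revisions.computeIfAbsent(id, k -> new HashMap<>()).put(feature, value);
    }

    Object get(long id, String feature) {
        return revisions.getOrDefault(id, Map.of()).get(feature);
    }
}

class BookFacade {
    private final long id;      // the only state the facade carries
    private final Store store;  // all data access is delegated here

    BookFacade(long id, Store store) { this.id = id; this.store = store; }

    // No fields hold the data, so an unreferenced facade can be garbage
    // collected at any time and recreated later from its ID.
    String getTitle()           { return (String) store.get(id, "title"); }
    void setTitle(String title) { store.put(id, "title", title); }
}
```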

The most gratifying thing for me is that all the unresolved
problems, such as scalability, human readable notation, and
presentation modeling, have people actively working on them. 
The Eclipse community is a happening place and it’s a great
privilege to be a small part of it.  In the end, one of the
biggest unresolved problems is not even a technological problem,
but rather a sociological one, i.e., the pervasive misconceptions
about modeling.

Do you have a vision how models and code could work
together in the future?

I think the future is here and now.  Look at Eclipse and at
what people are doing with modeling and coding there.  They’re
mixing generated and hand written code because EMF’s generator
supports merging. Developers can alternate between modeling and
coding as the need arises and therefore benefit equally from both.
Look also at the work being done on DSLs and consider the earlier
discussion about DSLs being models; modeling is coding and DSLs
will further blur the line between the two.  Consider for
example that when I write an interface X with a getY method in
Java, it’s clear that I’m coding, but if I create an EMF class X
with a feature y, can it really be argued that I’m not
coding?  To me they’re the same thing and the dividing line
between modeling and coding is merely a mirage that disappears upon
closer inspection.
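Ed’s example can be spelled out (a hypothetical illustration): the hand-written Java interface and the interface EMF would generate from a class X with a feature y look essentially the same.

```java
// Hand-written Java: clearly "coding".
interface X {
    int getY();
    void setY(int y);
}

// What EMF generates from a model class X with an int feature y looks
// essentially the same (the generated version would additionally extend
// EObject and carry reflective support). A minimal hand-rolled impl:
class XImpl implements X {
    private int y;
    public int getY() { return y; }
    public void setY(int y) { this.y = y; }
}
```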

The biggest shortcoming today is the lack of a complete
repertoire of high-end model-aware tools.  Eclipse’s Java
Development Tools (JDT) set an extremely high standard for
productive coding in Java.  It’s the benchmark that modeling
technologies must strive to achieve in order to meet the raised
expectations of the development community.  For example,
things like automated support for refactoring of Ecore models,
which takes into account the impact on the generated code to employ
Java refactoring as an integrated aspect of the overall
refactoring, are an important aspect for managing really large
model-based projects.  Often the criticism leveled at
modeling technology isn’t so much about the modeling itself but
rather about the inadequate tools to support it well. 
Necessity is the mother of invention, so I’m confident that these
too shall come. In general, if you look at the community that’s
growing around Eclipse, the large and small organizations, the
individuals, the researchers, it’s clear we are all paving the way
into a more productive future.

Ed, after so many years working in Open Source projects
and in the modeling space – what are your personal lessons
learned?

I’ve learned that I’m not just a freaky geek who should stay in
my little corner hiding behind my big monitor because programming
is what I do best. As it turns out, I do a surprising number of
things rather well.  I’ve learned how much satisfaction there
is in working with others, particularly how much someone else’s
success feels like my own when I’ve helped them in some small
way.  I’ve learned that it’s best not to underestimate
myself; others are more than happy to do that for me.  And
I’ve learned that doing what I think is right, with a focus on my
own path, guided by the voice of the community, ensures that I’m
always moving in a good direction.  

Thank you very much for this conversation.


 

Ed Merks
recently founded his own small consulting company, Macro
Modeling. He is a coauthor of the authoritative book “EMF: Eclipse
Modeling Framework”, which is published in a second, expanded
edition. He has been an elected member
of the Eclipse Foundation Board of Directors for the past two years
and has been recognized by the Eclipse Community Awards as Top
Ambassador and Top Committer. Ed is well known for his
dedication to the Eclipse community, posting literally thousands of
newsgroup answers each year. He spent 16 years at IBM, achieving
the level of Senior Technical Staff Member after completing his
Ph.D. at Simon Fraser University. He is a partner of itemis AG and
serves on Skyway Software’s Board of Advisors. His experience in
modeling technology spans 25 years.

Dokumentenvorlage

           
Ed, you are the lead of the Eclipse Modeling Framework and

           
of the Eclipse Modeling Project. What’s your exact role in this
context and

           
what’s the difference of those two projects?

 

 

The Eclipse Modeling Framework (EMF) project is a subproject of
the top-level Eclipse Modeling project.  EMF has been hosted
at Eclipse since 2002, and was previously part of the top-level
Tools project.  I’ve been the technical lead for EMF since its
inception.  In that role, I am a committer, i.e., I have
access permission to commit changes in the CVS repository for EMF,
and I oversee all the other EMF committer’s activities.  EMF
is used by a large and growing number of other projects at Eclipse,
e.g., XML Schema Definition (XSD), Unified Modeling Language (UML),
and Web Tools project (WTP), so it’s important that we focus on
quality and stability whenever we work on enhancements that make
EMF ever more flexible and efficient. Over time, related projects
such as the Graphical Modeling Framework project (GMF) and the
Generative Modeling Tools project (GMT) began to appear and as such
it quickly became apparent that a top-level Modeling project to
oversee all these related subprojects would be valuable.  Rich
Gronback of Borland and I, from IBM at that time, became the
Project Management Committed (PMC) co-leads of the new umbrella
project.  Think of the Modeling project as like an onion with
many layers and EMF at its core. 

 

In the PMC lead role, Rich and I participate in the Eclipse
Architecture Council and the Eclipse Planning Council, hold monthly
open meetings with other modeling project leads to discuss issues
that affect all our projects, as well as perform various other
project management activities.  The EMF lead role is primarily
a technical role while the PMC lead role is primarily a management
role. I am also an elected Commiter Representative on the Eclipse
Board of Directors and as such I have influence over a wide range
of activities at Eclipse from the technological minutia of EMF to
the high level direction of the Eclipse foundation.  Eclipse
is an exciting place to be and one of the really great things for
me is that all these roles are mine as an individual, i.e., even
though I left IBM in July, after 16 years, and am now working
closely with the innovative people at itemis AG, my role at Eclipse
hasn’t changed at all.

 

 

 

           
You’ve done this for a very long time and with each year

           
EMF became even more successful. What do you think is key to the
success of

           
EMF?

 

 

There are several factors that were key to EMF’s success. 
EMF started out internally at IBM as an implementation of the
Object Management Group’s (OMG) Meta Object Facility (MOF). 
IBM projects were mandated to use it as a way so ensure that the
large number of geographically distributed teams would produce an
integrated cohesive collection of interrelated models. 
Certainly it went a long way towards delivering on that promise,
but the MOF model was very large and complex and the code
generation patterns were focused on keeping generated and hand
written code separate and hence produced vast quantities of stilted
code that looked nothing like what one would produce with a clean
hand-written design.  As a whole, it was perceived as complex,
bloated, and slow so clients were growing increasingly
unhappy. 

 

Frank Budinsky and I were charged with the task of cleaning
house.  We drastically simplified the MOF meta model, i.e.,
Ecore, rewrote the runtime with a maniacal focus on simplicity and
performance, and redesigned the generator patterns to produce
simple clean code that supports merging so clients could safely mix
generated and hand written code.  Our Ecore simplifications
eventually led to the OMG’s splitting MOF into Essential MOF (EMOF)
and Complete MOF (CMOF); Ecore is effectively isomorphic to EMOF so
we can read and write standard EMOF serializations.  The
runtime jar footprint was reduced by 2/3 and the performance of the
generated code was arguably optimal, e.g., reflective lookup of the
value of some feature of an object is faster than a hash map
lookup.  All this is to say that flexibility, quality, and
performance are the essential starting point on the road to
success.

 

Starting the EMF work in a context where there was a great deal
of negative perception, much of it in fact justified, certainly
made the task way more of a challenge.  People don’t change
their perceptions easily even when what’s in front of them changes
drastically.  It’s easy to ruin a good reputation but very
hard to fix a bad one.  Also, when you provide foundation
infrastructure, it’s very easy to become a scapegoat for bad
designs built on top.  For example, a design with lists that
contain a million elements probably isn’t going to perform well no
matter whether you use EMF or not.  It’s unfortunately also
the case that there are a large number of misconceptions around
modeling in general.  It’s taken a great many years to get
more people to see the light and that task seems never
ending.  In any case, overcoming the challenges of the past
leaves me feeling empowered: nothing will stand in the way for
long.  In other words, determination in the face of adversity
is necessary for maintaining your pace on the road to success;
perhaps adversity itself is necessary.  Show the people who
say it can’t be done that in fact it can; what they’re really
saying is they don’t know how to do it, so you need to show them
the way.

 

Finally, do not underestimate the power of your community. 
Listen to them carefully and learn from them.  Your success is
intertwined with their’s.   Listen attentively to their
problems and use their collective voice as a guide for where to
focus your efforts, though keep in mind that you lead the
community, not the other way around, i.e., pandering to every whim
will get you nowhere.   Demonstrate your resolve with
your actions.  Make fixing bugs quickly your number one
priority.  How you manage your communications will reflect on
your personally and on your project generally.  At Eclipse
Summit Europe some Ph.D students from Munich said to me, “All our
students know who you are because you answer their questions on the
newsgroup.”  “When we come to the EMF newsgroup and you answer
their questions, they know you care about their problems, but when
they go to some other newsgroup and they don’t get an answer, they
don’t feel welcome.”  Perception is reality.

 

 

           
If we think about Modeling from a software development

           
perspective it’s all about model driven software development. Do
you know

           
some interesting or maybe “exotic” projects in some industries or
maybe in

           
academic research, or somewhere else, which use Eclipse
modeling

           
technologies – but have nothing to do with software
development?

 

 

An interesting example that comes to mind is Daniel Ford’s work
at IBM Research on the  Spatiotemporal Epidemiological Modeler
<http://wiki.eclipse.org/index.php/STEM
(STEM) which uses EMF to model various factors that are involved in
the spread of disease, e.g., weather or traffic patterns,
incubation times, infection rates, and so on, and then composes
those models to simulate the likely progression of some
hypothetical disease event.  It can be used by policy makers
to help prepare for events such as a potential global flu pandemic
or bioterrorist attacks.  EMF not only helps produce software
more effectively, it even helps prevent disease and makes the world
safer.  If only it could weed my garden!

 

 

 

 

           
Some years ago there was that big vision of Model Driven

           
Architecture, driven by the OMG and the software tools industry.
The idea

           
was based on a strong belief that it’s possible to model an entire
software

           
system on a highly abstract level and then deducate all the
relevant

           
technical artifacts by just pressing a button. Why did this vision
fail?

 

 

Perhaps it hasn’t failed at all and we’re actually still on that
path.  Definitely I look at MDA as having promised way too
much way too soon and I hold that unreasonable level of hype
responsible for many of the negative misconceptions that continue
to cloud the future of modeling.  Eclipse modeling has been
successful because we’ve stayed focused by starting small yet
thinking big.  We built a solid core meta model first, i.e.,
Ecore is the defacto reference implementation of EMOF, and then we
built a stack of models on top, e.g., UML, XSD, OCL, BPMN, and so
on.  If you think about it, much of the OMG’s landscape is
gradually surfacing at Eclipse; perhaps the OMG looks at Eclipse
and sees it as a demonstration of the success of their vision. I
wouldn’t argue with that, though personally I think there will
continue to be a need for writing programs in general purpose
programming languages far into the foreseeable future; I see
modeling as complementary to that effort.  As we continue to
find better ways to abstract, domain specific languages (DSLs) will
become more and more attractive.  But I don’t think it’s so
important to anticipate where ultimately we will end up. 
Instead we should focus on making incremental progress in the right
direction.

 

 

           
What do you think about today’s role of the OMG and it’s

           
modeling standards?

 

 

I think the OMG needs to be forward looking.  The world is
rapidly changing.  Open source is a powerful force and the
individuals who drive it, have influence beyond what might have
been anticipated.  Reference implementations speak louder than
words.  Not only that, they’re greener, i.e., they kill far
fewer trees because people aren’t tempted to print them on
paper.  Standards bodies can all too easily fall into a
political trap with committee efforts that are full of compromises
that ultimately produce compromised results.  In the worst
case, standards can become a weapon that stagnates innovation by
excluding the small agile players and enforcing mediocrity. 
Standards ought to focus on codifying best practices.  I see
open source as the vehicle for establishing those innovative best
practices and for demonstrating the soundness and efficacy of the
evolving standards.

 

 

           
It seems, that innovation in the modeling space is now

           
driven by projects which ship real code – like EMF etc. – and not
by

           
organizations which care for standards. Do you see a general shift
from

           
specs to code?

 

 

Yes, that’s exactly the point.  Just as the horse leads the
cart, innovation comes first and standards follow.  At a
recent meeting with people interested in establishing an industry
vertical working group at Eclipse I suggested that they focus on
building a living, breathing reference implementation to establish a
de facto standard and that they do this by sending their best
software engineers rather than their most skilled political
engineers.  Real developers writing real code are going to
focus on real problems and solve them for real.  A reality
check is what’s badly needed.

Microsoft recently joined the OMG. At the same time they are investing in something called Oslo, which looks pretty much like the .NET version of Eclipse modeling. What do you think about these initiatives? How could Oslo influence EMF, or vice versa?

I’m still expecting to start a dialog with folks like Don
Box and Douglas Purdy of Microsoft.  At Eclipse Summit Europe
I had the good fortune to meet Steve Sfartz who subsequently
introduced me to Jean-Marc Prieur at MD Day in Paris. 
Jean-Marc and I used that opportunity to chat about our common
interest in modeling and he gave me an extensive demo of
Microsoft’s graphical DSL technology.  I compare the Microsoft
and Eclipse efforts as follows.  Microsoft’s graphical DSL
technology is the analog of generated EMF models augmented by GMF;
as far as I understand it, the Microsoft domain meta model exists
only at development time, not at runtime, so there is no MOF-like
reflection support. Microsoft’s textual DSL technology is the
analog of Xtext (i.e., MGrammar), Ecore (i.e., MSchema), and
EObject (i.e., MGraph). So while at Eclipse we provide Ecore/EMOF
as the common standards-based meta model to support both graphical
and textual DSLs, and hence provide model-based abstract reflection
for all domain models, at Microsoft there are two different
technologies to cover these two closely related aspects. I’m
confident that this provides Eclipse with a very strong technical
foundation that will be hard to beat. 
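The MOF-like reflection mentioned here is what EMF provides through EObject.eGet/eSet against a runtime Ecore meta model. The real API requires the EMF runtime, so the following is a deliberately tiny, self-contained Java sketch of the idea only — every class name is invented for illustration, not part of EMF. The point it demonstrates: when the meta model exists at runtime, fully generic code can discover and access an instance’s features.

```java
import java.util.*;

// Toy illustration of MOF-like runtime reflection (invented names, not
// the real EMF API): the meta model is itself a runtime object, so
// generic code can work with any instance without compiled-in knowledge.

// Analog of an EClass: a class description available at runtime.
class MetaClass {
    final String name;
    final List<String> features;
    MetaClass(String name, String... features) {
        this.name = name;
        this.features = List.of(features);
    }
}

// Analog of a dynamic EObject: feature values live in a map and are
// accessed reflectively, in the spirit of EObject.eGet/eSet.
class DynamicObject {
    final MetaClass metaClass;
    private final Map<String, Object> values = new HashMap<>();
    DynamicObject(MetaClass metaClass) { this.metaClass = metaClass; }
    Object get(String feature) { return values.get(feature); }
    void set(String feature, Object value) {
        if (!metaClass.features.contains(feature))
            throw new IllegalArgumentException("No such feature: " + feature);
        values.put(feature, value);
    }
}

public class ReflectionSketch {
    public static void main(String[] args) {
        MetaClass book = new MetaClass("Book", "title", "pages");
        DynamicObject b = new DynamicObject(book);
        b.set("title", "EMF");
        b.set("pages", 744);
        // Generic code: dump every feature without compile-time knowledge
        // of the Book class.
        for (String f : b.metaClass.features) {
            System.out.println(f + " = " + b.get(f));
        }
    }
}
```

A development-time-only meta model is compiled away, so this kind of generic traversal — the basis for generic serializers, editors, and validators — is exactly what is lost without runtime reflection.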

 

In the end though, the success of modeling is one of my personal
goals, so I would be very happy to see Microsoft’s effort be a
great success.  I believe it will help revive the North
American market’s appetite for modeling.  Already analysts
are asking questions about Eclipse’s established technology as they
begin to review Microsoft’s latest developments.  Many of us
believe strongly that modeling is a key technology for moving the
software industry forward, so those of us with that vision ought to
work together.

If we talk about model driven software development today, it’s all about pragmatic approaches. How can people learn where MDSD makes sense and where it doesn’t?

Definitely, it’s the old saying about using the right tool for
the job.  I’d have a hard time thinking of anything
significant where modeling wouldn’t play at least a small
part.  That’s not because I’m a zealot, though some might
disagree; it’s because all data can be, and I would argue should
be, modeled. Because most software is focused on manipulating data,
it follows that models play a ubiquitous role.  As
practitioners gain experience with the tools and technology at
their disposal, they’ll learn how to apply them more
effectively.  It’s better to have tried and failed than never
to have tried at all. We learn as much from our failures as our
successes.

Regarding productivity in software development, we see a big trend these days towards textual domain specific languages. How do DSLs and the world of models relate to each other?

I only see models!  At Eclipse Summit Europe, Dave Thomas’s
provocative keynote speech focused on many of the things he sees
as technological barriers that are holding our industry back. 
For example, he railed about the complexity of modeling and the
horrors of meta meta nonsense, yet he also talked about DSLs as one
of the key technologies to lead us into a brave new world.  I
found that incongruous, so I asked him afterwards, “What’s the
difference between a model and a DSL?”  He replied, “They’re
the same thing.”  Darn, I thought I had him trapped, but he’s
too wily for me.  I think back a few more years, and I recall
someone at IBM telling me, “Ecore isn’t a language, it’s only a
model.”  I was taken aback, so I had to ask, “Why isn’t Ecore a
language?”  I was told, “Because it doesn’t have a concrete
syntax, and the XML serialization doesn’t count.” He anticipated my
next question! According to that reasoning, as soon as Chris Daly
defined the Emfatic syntax for Ecore, it became a language. It
seems a superficial point of view to me.

 

At MD Day I made the comment that XML is a very poor excuse for a
human-readable syntax and for that purpose it sucks.  It’s
great for machine interchange and it’s great that developers can
avoid writing lexers and parsers, but let’s not forget the human
element in all this.  People applauded, so I’m not the only
one who feels this way.  This is why I’m so happy to see
things like Xtext and Oslo’s MGrammar emerging from the
darkness.  I can only hope that we pull back from the XML
nuclear winter that has swept the planet.  We should aim to
please humans, not machines. 

 

I love languages a lot.  I always have. I love XML too, but
I think it’s been loved almost to death.  I am confident that
modeling technology like Xtext will help us quickly create all the
new specialized languages that are needed.

What are the main unresolved problems which the modeling community has to face?

Problems, we have no problems.  Don’t open that
closet!  The biggest problem I can see close to home is the
whole issue around scalability. Of course, once people have their
beautiful models with clear simple APIs for accessing them, they
start to throw more and more data at them.  And of course they
can always throw in so much data that they run out of room for it
all.  So we’re looking at ways to reduce instance footprint,
but clearly, unless you can reduce the footprint to zero bytes, it
will always be possible to produce so much data that you run out of
memory.  That’s where I think some exciting technologies like
Connected Data Objects (CDO) can play a major role.  With CDO,
an object acts just as a facade with all data access delegated to a
backing store (typically hosted on a server), which in turn is
responsible for managing the data.  The facade objects carry
no hard object references and can and will be garbage collected
when the application no longer holds references to them.  Eike
Stepper wrote a great blog post titled “How Scalable are my Models”
<http://thegordian.blogspot.com/2008/11/how-scalable-are-my-models.html>
that’s well worth reading.
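The facade idea is easy to sketch. The plain Java below illustrates only the principle, not CDO’s actual API — all names are invented. The facade holds nothing but an id and a store reference and delegates every access, so it carries no hard references to other model objects and becomes garbage once the application drops it.

```java
import java.util.*;

// Illustrative sketch of the CDO facade idea (invented names, not
// CDO's real API).

// Stand-in for the backing store; in CDO this is typically a
// repository hosted on a server with its own caching and eviction.
class Store {
    private final Map<Long, Map<String, Object>> data = new HashMap<>();
    void put(long id, String feature, Object value) {
        data.computeIfAbsent(id, k -> new HashMap<>()).put(feature, value);
    }
    Object fetch(long id, String feature) {
        return data.getOrDefault(id, Map.of()).get(feature);
    }
}

// The lightweight facade the application actually holds: no feature
// values of its own, no hard references to other model objects.
class Facade {
    final long id;
    private final Store store;
    Facade(long id, Store store) { this.id = id; this.store = store; }
    Object get(String feature) { return store.fetch(id, feature); }
    void set(String feature, Object value) { store.put(id, feature, value); }
}

public class FacadeSketch {
    public static void main(String[] args) {
        Store store = new Store();
        Facade order = new Facade(42L, store);
        order.set("status", "shipped");
        // The value lives in the store; any facade with the same id sees it,
        // and each facade can be collected as soon as it is dropped.
        System.out.println(new Facade(42L, store).get("status"));
    }
}
```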

 

The most gratifying thing for me is that all the unresolved
problems, such as scalability, human-readable notation, and
presentation modeling, have people actively working on them. 
The Eclipse community is a happening place and it’s a great
privilege to be a small part of it.  In the end, one of the
biggest unresolved problems is not even a technological problem,
but rather a sociological one, i.e., the pervasive misconceptions
about modeling.

Do you have a vision of how models and code could work together in the future?

I think the future is here and now.  Look at Eclipse and at
what people are doing with modeling and coding there.  They’re
mixing generated and hand written code because EMF’s generator
supports merging. Developers can alternate between modeling and
coding as the need arises and therefore benefit equally from both.
Look also at the work being done on DSLs and consider the earlier
discussion about DSLs being models; modeling is coding and DSLs
will further blur the line between the two.  Consider for
example that when I write an interface X with a getY method in
Java, it’s clear that I’m coding, but if I create an EMF class X
with a feature y, can it really be argued that I’m not
coding?  To me they’re the same thing and the dividing line
between modeling and coding is merely a mirage that disappears upon
closer inspection.
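Ed’s example can be made concrete. This is a minimal sketch, assuming a hypothetical class X with a feature y; the “generated” shape below is paraphrased, not verbatim EMF generator output (real EMF-generated interfaces extend EObject and the implementation classes add change notification).

```java
// Hand-written Java: unmistakably "coding".
interface X {
    String getY();
}

// Roughly the shape a generator would produce from a model declaring
// class X with feature y, e.g. Emfatic-style "class X { attr String y; }"
// (paraphrased, for illustration only).
class XImpl implements X {
    private String y;
    XImpl(String y) { this.y = y; }
    public String getY() { return y; }
}

public class ModelVsCode {
    public static void main(String[] args) {
        X x = new XImpl("hello");
        System.out.println(x.getY()); // prints "hello"
    }
}
```

Whether the interface was typed by hand or generated from the model, the resulting artifact is the same — which is the point of the mirage.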

 

The biggest shortcoming today is the lack of a complete
repertoire of high-end model-aware tools.  Eclipse’s Java
Development Tools (JDT) set an extremely high standard for
productive coding in Java.  It’s the benchmark that modeling
technologies must strive to achieve in order to meet the raised
expectations of the development community.  For example,
things like automated refactoring support for Ecore models,
which takes into account the impact on the generated code and employs
Java refactoring as an integrated part of the overall
refactoring, are important for managing really large
model-based projects.  Often the criticism leveled at
modeling technology isn’t so much about the modeling itself but
rather about the inadequate tools to support it well. 
Necessity is the mother of invention, so I’m confident that these
too shall come. In general, if you look at the community that’s
growing around Eclipse, the large and small organizations, the
individuals, the researchers, it’s clear we are all paving the way
into a more productive future.

Ed, after so many years working in Open Source projects and in the modeling space – what are your personal lessons learned?

I’ve learned that I’m not just a freaky geek who should stay in
my little corner hiding behind my big monitor because programming
is what I do best. As it turns out, I do a surprising number of
things rather well.  I’ve learned how much satisfaction there
is in working with others, particularly how much someone else’s
success feels like my own when I’ve helped them in some small
way.  I’ve learned that it’s best not to underestimate
myself; others are more than happy to do that for me.  And
I’ve learned that doing what I think is right, with a focus on my
own path, guided by the voice of the community, ensures that I’m
always moving in a good direction.  

Thank you very much for this conversation.

 
