Archive for the ‘eclipse’ Category

Quicker start guide for Oomph

Monday, June 30th, 2014

Scope of this post

When working on a common project, it makes sense to use a common IDE. For a current project, an Eclipse installation as well as an initial workspace were put together manually and made available for download. Of course, updating settings for all developers does not work well (read: not at all) that way. This is where Oomph is extremely helpful, as it allows you to automate the provisioning of the IDE as well as its updating. Start reading here if you have not heard about Oomph before.

While trying to write the setup file for our project, we came across questions and problems, a number of which were of the “If we had only known in the beginning!” type. So hopefully, this post will reduce the effort for some of you. Note that we do not need all supported features (like Git cloning, targlets etc.), so this post will not address them.

Project Requirements and our approach to meeting them

Same setup file for all developers

Using Oomph makes sense only if all developers use the same (instance of the) project setup model, i.e. changes to that one file must take effect everywhere. Basically, you have to make sure that this central setup is included in the project setups. Note that the project href may be a file URL or an HTTP URL pointing to your setup.

The org.eclipse.oomph.setup project contains launch configurations that look quite interesting with respect to redirecting product and project catalogs via start parameters of the installer, but we have not tried that option yet.

SVN

Currently, SVN repositories are not supported by Oomph, and a committer hinted that this feature might be reserved for a commercial version. We have a single SVN repository and need one project containing some helpful launch configurations anyway, which is why we use the Project-Set-Import-Task. In a running IDE, you simply export a Team Project Set file containing the needed projects (Export > Team > Team Project Set) and use the Project-Set-Import-Task to import it. As a consequence, the SVN repository will be present in the repository view and the project will be imported into the workspace.

If the task is run on every startup, users will be annoyed, as the project needs to be imported only once (follow this bug). You can improve the situation by enabling the advanced properties in the Properties view of the task and using the trigger MANUAL rather than STARTUP+MANUAL.

Possibly common questions, pitfalls and some suggestions

Can I aggregate tasks from several setup files?

I would have liked to separate tasks into several setup models – to keep them smaller, to separate different aspects of an installation, and to have them ready for reuse (in arbitrary combinations). Currently this is not possible, at least not the way I had imagined it. You can use a project container, i.e. you can derive from one common setup model, inheriting everything that is defined there, but you cannot make your setup pick up tasks from arbitrary setup files. In order to have some more structure nevertheless, we suggest the following:

Make heavy use of Compound-Tasks

In order to know which tasks belong together, you can group them in Compound-Tasks (e.g. P2- and Preference-Tasks for a particular plugin). They are nothing but named containers for arbitrary tasks. That way you can also easily copy a complete working portion of one setup model to another.

Note that this also works for the preference-recording feature. Before you record changes on a preference page, first create a new Compound-Task, select it and then start recording. The result of the recording will then be contained in that Compound-Task and can be reviewed easily.

A feature cannot be installed using a P2-Task

If you want to declare a feature (rather than a plugin) as a requirement in a P2-Task (P2-Director), you have to append “.feature.group” to the feature’s id; this marks the requirement as a feature (see the corresponding property in the Properties view – I don’t know why you cannot simply switch the feature property to true). I.e. if the feature id is “org.example.foo.feature”, the id to be used is “org.example.foo.feature.feature.group”. If this does not work, you can enumerate the single plugins of that feature in the P2-Task instead (possibly with the version constraint removed, as that exact version did not exist anymore). However, there were cases where even that did not work. As a first workaround we used a Resource-Copy-Task to copy the artifact directly from the update site to the dropins folder of the Eclipse application – making use of the variables (targetURL=”${installation.location|uri}/eclipse/dropins/artifact.jar”).

Version ranges

You cannot use variables in version ranges, e.g. if you have a number of plugins for which you need the same version. Also, be aware of the meaning of those ranges: for instance, [1.0.0,2.0.0) includes version 1.0.0 but excludes 2.0.0.

Active annotations use cases

Monday, March 17th, 2014

Active Annotations are a language feature of Xtend that lets developers participate in the translation of Xtend source code to Java code by processing annotated elements at compile time.

So active annotations are an addition, and in some cases even an alternative, to the classic approach of defining domain-specific languages and writing code generators for these DSLs.

This is especially true when your DSL tends to evolve into a full-blown programming language with some domain-specific customizations. In such a case you should consider reusing a general purpose language (GPL) like Xtend and customizing it to your needs with active annotations. This way you avoid the overhead of reimplementing a complete IDE infrastructure for a DSL.

So the active annotations mechanism isn’t a simple code generator (although you can use it that way, too) but a transformation working on the Java model AST, where you can add new fields and methods. After saving in the editor, these members are immediately visible (in scoping, type computation and content assistance) when further editing the Xtend file.

In this blog post I want to present two use cases that show how active annotations ease programming.

Message bundles

In programming you should try to avoid situations where you have to keep things in sync manually. One such common situation is the handling of message bundles. It is always a good idea to extract and centralize messages so there is one place to adapt them or even add internationalized messages. These message strings may contain wildcards that can be bound from outside. In Eclipse OSGi there is already an abstract class, NLS, in place that enables handling of message bundles. Despite that, it is still up to the programmer to manually keep the keys in the message bundle in sync with the static String constants in the Java class.
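
For reference, the classic hand-written pattern with org.eclipse.osgi.util.NLS looks roughly like the following sketch (bundle name and key are only illustrative):

import org.eclipse.osgi.util.NLS;

public class Messages extends NLS {
   // must point to the properties file accompanying this class
   private static final String BUNDLE_NAME = "org.example.messages"; // illustrative

   // must be kept in sync with the keys in the properties file by hand
   public static String INVALID_TYPE_NAME;

   static {
      NLS.initializeMessages(BUNDLE_NAME, Messages.class);
   }

   private Messages() {
   }
}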

Sven already blogged about how to externalize strings to a properties file and even derive methods that bind the wildcard parameters in a type-safe way. I want to show you the other way round:

messages.properties:

INVALID_TYPE_NAME=Entity name {0} should start with a capital.
INVALID_FEATURE_NAME=Feature name {0} in {1} should start with a lowercase.
EMPTY_PACKAGE_NAME=Package name cannot be empty.
INVALID_PACKAGE_NAME=Invalid package name {0}.
MISSING_TYPE=Missing {0} type {1}.

IssueCodes.xtend:


import de.abg.jreichert.activeanno.nls.NLS

@NLS(propertyFileName="messages")
class IssueCodes {
}

DomainmodelJavaValidator.java:


public class DomainmodelJavaValidator extends XbaseJavaValidator {

   @Check
   public void checkTypeNameStartsWithCapital(Entity entity) {
      if (!Character.isUpperCase(entity.getName().charAt(0))) {
         warning(
              IssueCodes.getMessageForINVALID_TYPE_NAME(entity.getName()),
              DomainmodelPackage.Literals.ABSTRACT_ELEMENT__NAME,
              ValidationMessageAcceptor.INSIGNIFICANT_INDEX,
              IssueCodes.INVALID_TYPE_NAME, entity.getName()
         );
      }
   }
   ...
}

You see that for each key in messages.properties a static String constant of the same name is created. Moreover, a method named getMessageFor[KEY_NAME] is derived for each key, taking exactly as many parameters as there are placeholders in the message for this key in messages.properties.
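
To make that concrete, for the key INVALID_TYPE_NAME the generated members correspond roughly to the following plain Java (an illustrative sketch, not the exact output of the processor):

import org.eclipse.osgi.util.NLS;

public class IssueCodes {
   // one String constant per key in messages.properties,
   // initialized via NLS as described below
   public static String INVALID_TYPE_NAME;

   // one binding method per key, with one Object parameter per placeholder
   public static String getMessageForINVALID_TYPE_NAME(Object param0) {
      return NLS.bind(INVALID_TYPE_NAME, param0);
   }
}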

The complete example can be found here, including the active annotation processor.

So every time you change messages.properties, code referencing keys that no longer exist or passing a wrong number of parameters will get error markers in the IDE.

As changes to a properties file usually don’t trigger the Java builder, an Ant builder is added to the project that touches IssueCodes.xtend whenever messages.properties has been changed.

Some details about the implementation of the NLS active annotation:

  • The plug-in using this annotation has to have org.eclipse.osgi.util.NLS on its classpath. This is checked by the call to findTypeGlobally: if the class cannot be resolved, an error marker is created at @NLS.
  • By navigating over annotatedClass.compilationUnit.filePath you have access to the file path of the class that is annotated with @NLS, so the properties file can be accessed.
  • If there is no properties file with the name given in the annotation property propertyFileName, or the properties file cannot be loaded, appropriate error markers will be created.
  • As active annotations and Xtend itself don’t currently support static initializer blocks, this is emulated with a static field: a function containing the initialization logic is called and its result assigned to that field.
  • Before creating new fields or methods, it is checked whether there is already a member with the same name in place. In that case an error marker will be produced. This is currently not very elaborate, as it doesn’t take method overloading into consideration, but for the NLS annotation the parameter count check is enough.
  • Via a regular expression, the number of wildcards in a message is calculated, and exactly this number of Object parameters is then added to the getMessageFor method (a rough sketch of this follows below).
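
A rough sketch of such a counting step in plain Java – not the actual NLS processor code – could look like this:

import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderCounter {

   private static final Pattern PLACEHOLDER = Pattern.compile("\\{(\\d+)\\}");

   // counts the distinct wildcards {0}, {1}, ... in a message
   public static int countPlaceholders(String message) {
      Set<Integer> indices = new HashSet<Integer>();
      Matcher matcher = PLACEHOLDER.matcher(message);
      while (matcher.find()) {
         indices.add(Integer.valueOf(matcher.group(1)));
      }
      return indices.size();
   }

   public static void main(String[] args) {
      // prints 2 for the INVALID_FEATURE_NAME message with placeholders {0} and {1}
      System.out.println(countPlaceholders("Feature name {0} in {1} should start with a lowercase."));
   }
}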

Since Xtend 2.5 it is possible to write

initializer = '''
   new «Function0»<«String»>() {
      public «String» apply() {
         «NLS /* this is the class literal of org.eclipse.osgi.util.NLS */».initializeMessages(
              «annotatedClass.findDeclaredField(BUNDLE_NAME_FIELD).simpleName», 
              «annotatedClass».class
         );
         return "";
      }
   }.apply();
'''

All class literals inside the rich string assigned to the initializer are now wrapped with toJava automatically. Compare this with the old notation used in the code on GitHub. This is much more readable now.


Hibernate Criteria Builder

In the second example I want to address the problem of building type-safe SQL queries. Hibernate ORM defines a fluent API to create criteria queries, a programmatic, type-safe way to express a database query. It depends on a static meta model that enables static access to the meta data of the entities contained in the domain model. This meta model has to be generated by the JPA Static Metamodel Generator, an annotation processor. The criteria API requires such a class – one defining volatile static fields corresponding to the attributes of the entity – as input.
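
For illustration, such a generated metamodel class looks roughly like this (a sketch for a hypothetical JPA entity Unit with a name attribute – the @Entity below is the javax.persistence annotation, not the active annotation discussed later):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.metamodel.SingularAttribute;
import javax.persistence.metamodel.StaticMetamodel;

@Entity
class Unit {
   @Id
   Long id;
   String name;
}

// generated by the metamodel generator, referenced statically in criteria queries
@StaticMetamodel(Unit.class)
abstract class Unit_ {
   // filled in by the JPA provider at runtime
   public static volatile SingularAttribute<Unit, String> name;
}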

An alternative approach is JOOQ, but here you also have an extra generation step.

The third approach, Sculptor, is a generator framework for describing 3-tier enterprise applications following the domain-driven design approach. It uses Xtext-based DSLs to define entities, repositories, services and the front end. From the DSL artifacts, code for well-established frameworks like JPA, Hibernate, Spring and Java EE is generated. Sculptor itself also ships with some useful static framework classes that are then called by the generated code. Similar to the aforementioned JPA Static Metamodel Generator, Sculptor generates attribute accessor classes for every entity defined in the DSL, to be used with its self-defined criteria query builder.

Wouldn’t it be nice to see immediately which queries break when you change your domain model?

In the following, the static framework classes of Sculptor will be reused. But instead of using the Sculptor DSL and generator, these classes are combined with active annotations. Find the complete example here, in particular the active annotation processing class.

Annotating a class with @Entity generates an id field and derives the classes to be used later when creating type-safe database queries. Annotating a class only with @EntityLiteral leaves off the id field. The @Property annotation adds getter and setter methods for the annotated field.

Having the domain model for a P2 repository structure (see Database.xtend for the complete entity model)

    • Location <>— * Unit <>— * Version

and each entity annotated with @Entity, the following typed queries are now possible (copied from LocationManager.xtend):

def Set getLocationURLsContainingUnitWithVersion(String unit, String version) {
   val urls = newHashSet
   val session = SessionManager::currentSession
   val unitFindByCondition = new CustomJpaHibFindByConditionAccessImpl(Unit, session)
   var unitCriteriaRoot = ConditionalCriteriaBuilder.criteriaFor(Unit)
   unitCriteriaRoot = unitCriteriaRoot.withProperty(UnitLiterals.name()).eq(unit)
      .and().withProperty(UnitLiterals.versions().name()).eq(version)
   unitFindByCondition.addCondition(unitCriteriaRoot.buildSingle())
   unitFindByCondition.performExecute
   val unitIds = unitFindByCondition.getResult().map[id]
   val locationFindByCondition = new CustomJpaHibFindByConditionAccessImpl(Location, session)
   var locationCriteriaRoot = ConditionalCriteriaBuilder.criteriaFor(Location)
   locationCriteriaRoot = locationCriteriaRoot
      .withProperty(LocationLiterals.units().id()).in(unitIds)
   locationFindByCondition.addCondition(locationCriteriaRoot.buildSingle())
   locationFindByCondition.performExecute
   val result = toLocationList(locationFindByCondition.getResult())
   result.forEach[urls.add(url)]
   urls
}

The method above will return the URLs of those locations that contain a unit with the given name and version.

The nice thing about the active annotation here is that if you rename the version entity’s attribute name to id and save, this change will immediately produce an error marker in LocationManager.xtend, as the access .name() in the query isn’t valid anymore.

Some implementation details about the active annotation here as well:

  • The EntityProcessor calls the EntityLiteralProcessor (to create the literal classes) and the PropertyProcessor (to create getter and setter for the id field created here), so you see it is possible to chain active annotation processors.
  • All Java classes additionally created during active annotation processing have to be registered globally in the method doRegisterGlobals.
  • The EntityLiteralProcessor checks fields for the @Property annotation – only for those fields are corresponding methods created in the literal classes.
  • Currently only constants can be used as values for annotation properties (both in active annotations and when creating new annotations during active annotation processing).
  • AnnotationExtensions provides some common methods, e.g. for finding existing annotations either by name or by name and property value.

Other use cases

Besides the two use cases described above, there are several other examples of how to use the power of active annotations.

Summary

I hope I was able to give you a good impression of what you can achieve with active annotations. If you want to start writing your own active annotation processors, have a look at the official documentation and at this best practices guide as well. Also, don’t hesitate to ask questions in the Xtend forum and to file feature requests or bugs here.

If you happen to attend EclipseCon North America 2014, starting on March 17th, don’t miss the session about Automating Java Design Patterns with Xtend.

Xtext2 keyword hovers

Tuesday, February 12th, 2013

The current default implementation restricts hovers to the significant region of an object – think of it as the region of the object’s name. However, you may also want to provide information for keywords (e.g. explanations for complicated modifiers).

The first step towards keyword hovers is overriding the binding for IEObjectHover.
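
In the language’s UI Guice module, this binding could look roughly like the following sketch (AbstractMyDslUiModule stands in for your generated UI module base class):

import org.eclipse.ui.plugin.AbstractUIPlugin;
import org.eclipse.xtext.ui.editor.hover.IEObjectHover;

public class MyDslUiModule extends AbstractMyDslUiModule {

  public MyDslUiModule(AbstractUIPlugin plugin) {
    super(plugin);
  }

  // picked up by Xtext's method-name-based Guice binding convention
  public Class<? extends IEObjectHover> bindIEObjectHover() {
    return MydslEObjectHover.class;
  }
}

The hover implementation itself then detects whether the element at the current offset is a keyword: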

import org.eclipse.emf.ecore.EObject;
import org.eclipse.jface.text.IRegion;
import org.eclipse.jface.text.ITextViewer;
import org.eclipse.jface.text.Region;
import org.eclipse.xtext.Keyword;
import org.eclipse.xtext.nodemodel.ILeafNode;
import org.eclipse.xtext.nodemodel.util.NodeModelUtils;
import org.eclipse.xtext.resource.XtextResource;
import org.eclipse.xtext.ui.editor.hover.DispatchingEObjectTextHover;
import org.eclipse.xtext.util.Pair;
import org.eclipse.xtext.util.Tuples;

import com.google.inject.Inject;
// plus the import of your language's generated MyDslGrammarAccess

public class MydslEObjectHover extends DispatchingEObjectTextHover {

  @Inject 
  MyDslGrammarAccess grammarAccess;
  
  @Override
  protected Pair<EObject, IRegion> getXtextElementAt(XtextResource resource,
      int offset) {
    Pair<EObject, IRegion> temp = null;
    ILeafNode node = NodeModelUtils.findLeafNodeAtOffset(
      resource.getParseResult().getRootNode(), offset);
    if(node.getGrammarElement() instanceof Keyword){
        IRegion region=new Region(node.getOffset(), node.getLength());
        temp = Tuples.create(node.getGrammarElement(), region);
    }else{
        temp = super.getXtextElementAt(resource, offset);
    }
    return temp;
  }

  @Override
  public Object getHoverInfo(EObject first, ITextViewer textViewer,
      IRegion hoverRegion) {
    if(first instanceof Keyword){
      return getHoverInfoForKeyword((Keyword)first);
    }else{
      return super.getHoverInfo(first, textViewer, hoverRegion);
    }
  }

  private Object getHoverInfoForKeyword(final Keyword keyword){
    //use grammarAccess here to see which Keyword you are dealing with
    //and determine the text to show
//    if(keyword==grammarAccess.getGreetingAccess().getHelloKeyword_0()){
//      //...
//    }
    return keyword.getValue();
  }
}

The second step is providing the information for the keyword hover; the third is making it look nice.

Xtext: empty string linking

Saturday, February 2nd, 2013

Xtext’s cross-reference mechanism is based on named elements. The out-of-the-box support requires that the (simple) name is not empty – there must be some syntactic element that can be associated with the link (both source and target). Linking empty names may not be a default requirement, but it is not purely academic. XML QNames allow the namespace prefix as well as the local name to be empty. While working on Xturtle – an Eclipse editor for the RDF serialization format Turtle – I came across this use case:
@prefix :<http://www.example.org/>.
:thing a :thing.

The name of the default prefix is empty. Now, if you want the expected editor features like go to declaration, find references or rename refactoring, you kind of need the actual linking.

So here is a list of the main components that need to be adapted. This project provides you with a stripped down working example.

Grammar

In your grammar, you have to make sure that the name and the reference are both mandatory; however, the actual string may be empty.
Target: "target" name=Name ".";
Link: "link" to=[Target|Name]".";
Name: ID?;

QualifiedName calculation

In the instantiated model, the name attribute value will be null. Your IQualifiedNameProvider will have to turn that into an empty name. The default implementation of the IQualifiedNameConverter throws an exception for empty names, so that has to be adapted as well.
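
A minimal sketch of such a name provider, assuming the example grammar above (this is not the Xturtle implementation), could look like this:

import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.xtext.naming.IQualifiedNameProvider;
import org.eclipse.xtext.naming.QualifiedName;

public class EmptyNameAwareQualifiedNameProvider extends IQualifiedNameProvider.AbstractImpl {

  @Override
  public QualifiedName getFullyQualifiedName(EObject obj) {
    EStructuralFeature nameFeature = obj.eClass().getEStructuralFeature("name");
    if (nameFeature == null) {
      return null; // the object is not a named element at all
    }
    Object name = obj.eGet(nameFeature);
    // map a missing name to the empty string so the element still gets a qualified name
    return QualifiedName.create(name == null ? "" : name.toString());
  }
}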

Linking

If you don’t provide a name in the model, there will be no element in the node model attached to the name.
target /*this is a linking target with an empty name*/
/*The problem is, which empty string between the
keywords target and full stop represents the name*/
/*Xtext cannot know, so empty names are not supported out of the box*/
/*By the way, the same is true for the link.
At which position does the link start?*/
.

Having made the features mandatory, there will be a cross-reference node, but the linking will not pick it up and create a proxy, as there is no suitable node to attach it to – the corresponding code has to be adapted.

Hyperlinking

You can now navigate from link to target in the semantic model; however, hyperlinking is not working yet. You’ll have to tell the framework from which position in the link to actually jump to the target (for the empty-name case).

ILocationInFileProvider

This service is responsible for calculating the significant regions of objects, i.e. which part of the file to reveal and highlight. Again, the default implementation is bound to fail if there are no actual nodes for the name element.

Refactoring

I have not adapted the refactoring component yet, and a first 15-minute investigation indicates that quite a bit of work has to be done. I will update the post when there is news.