Thursday, March 29, 2012

Gemini Web feat. Gemini Naming or Injecting OSGi Services the Java EE Way

A presentation by Violeta Georgieva of SAP on the Gemini project on which she works.
  • Gemini is a web container
  • Development is done with Libra installed, which makes WTP and PDE work together: MANIFEST.MF opens in the PDE manifest editor, while right-click project > add servlet still works.
  • New dynamic web project, add osgi bundle in config
  • Run under Gemini web; can install other WARs or WABs (Web Application Bundles)
  • Right-click > Run as > OSGi framework
  • A WAR/WAB can use OSGi services exposed in another WAB.
    • ServiceTracker in a servlet to obtain an OSGi service
    • JNDI with an osgi:-based URL
      • e.g., new InitialContext().lookup("java.something")
      • META-INF/context.xml entry
  • Can use Dependency Injection (DI) using Java EE 6 annotations:
    • @Resource(name = "LogService")
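The JNDI route above might look roughly like the following. This is my own sketch, not the presenter's code; the osgi:service URL scheme is only resolved inside a container that provides it (e.g., Gemini Naming), so outside a container the lookup simply fails.

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Sketch: a WAB can obtain an OSGi service through JNDI using the
// osgi:service/<interface> URL scheme provided by Gemini Naming.
public class OsgiServiceLookup {
    public static void main(String[] args) {
        try {
            // Inside a Gemini Naming-enabled container, this returns the
            // registered LogService instance.
            Object log = new InitialContext()
                    .lookup("osgi:service/org.osgi.service.log.LogService");
            System.out.println("got " + log);
        } catch (NamingException e) {
            // Outside an OSGi container there is no JNDI provider, so the
            // lookup fails; the point here is only the shape of the call.
            System.out.println("lookup failed outside a container");
        }
    }
}
```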
A pretty cool demo. How long do we have before we should deploy OSGi on the server? It's nice to know there's tooling available now and that OSGi service use is covered.

Eclipse 4.2: Tips on API best practices for a 3.x plug-in

A presentation by Eric Moffatt, Remy Suen [IBM], and Paul Webster [IBM Canada] on common bugs they found in Eclipse code when run on a pure 4.x platform.
  • Aspects to investigate:
    • usage patterns
    • command patterns
    • part management
    • interaction with framework
  • Commands
    • Abstraction of some behaviour
    • Not visual
    • Not an implementation
    • Usage:
      • Declaratively via extension
      • Programmatically via the handler service
      • Do not use Command.setHandler() - it doesn't work
      • Do not use Command.executeWithChecks() - it sometimes won't work
  • Handlers
    • Should not carry state
    • Get it using HandlerUtil, Command, or framework
  • Parts
    • Access a service locally if possible, i.e. getSite().getService(); it handles scoping and cleanup.
    • Parent composite assumptions
      • Never assume anything about a Composite that is given to you (e.g., the layout, styles, etc.)
      • Layout calls might not happen when you think
      • Don't set a layout on it either unless you're sure it has no children other than yours, because siblings may not render if they expect a different layout.
      • E4 is more flexible, so parts can show up anywhere
    • Keep parts isolated from one another.
    • Caching of values: a part's shell can change in 4.x when the part is detached/reattached, so don't cache it
    • setFocus() needs to be implemented; don't leave it blank. Set focus on a control in the part. It is always called from the UI thread.
    • Avoid downcasting to get implementation API, it won't be there in 4.x.
    • Preferences will continue to work through workbench or Equinox API.
    • Avoid Workbench.getProgressService().
    • SWT containment is honored in 4.x; the "Big Lie" in 3.x was that every view and toolbar was actually parented by the shell. Do not cache the Shell.

We should build a 4.x product and test installing our 3.x features in it. The compatibility layer should make them work, but such testing would surface bugs like these.

Build Trust in Your Build to Deployment Flow!

A presentation by Yoav Landman of JFrog, the creator of Artifactory, on Continuous Integration (CI).
  • Benefits of CI:
    • Latest version
    • no maintenance release, just do frequent releases including bugfixes
    • less concern about backward compatibility
  • Challenges of CI:
    • Version tracking
    • root cause analysis
    • not everyone ready for this
  • Devs have agile tools. So do Testers. But DevOps?
  • All need access to versioning, traceability, access control, promotion, etc.
  • Binary repositories (leave the source at the build stage):
    • need Proxying
    • need smart storage, e.g. find source when needed
    • critical for CI & ALM
  • Artifactory Pro has P2 virtual repo!
  • Move binaries through phases, e.g. testing, staging, prod.
  • Need traceability from the version control system and build server
  • Plugins for Hudson and Jenkins upload build info to Artifactory repository.
  • Releasing:
    • Your next build is RC
    • Once built and tested, push a button.
    • Version switch, move to another repo, tag
    • Process: snapshots, declare one RC, release
    • Release with the Artifactory plugin - a little rigid, but works on a previously-built snapshot
    • Redundant release build can fail.
    • Should move snapshot, use binaries storage to promote, can script destination, etc.
    • Need to update pom and rename snapshot to be promoted
    • Done in Artifactory
We currently have problems using the maven-release-plugin with Tycho. Artifactory removes that plugin's use, i.e. Maven does not release - Artifactory does. It renames and distributes the snapshot and makes the required POM changes in source.

Artifactory seems to require artifacts.jar and content.jar before exposing a virtual p2 repo... but isn't that then a real p2 repo? It reportedly can't expose bundles in Maven repos as bundles, which Nexus Pro says it can. I wonder, though, if we can generate an artifacts.jar and content.jar for a Maven repo in order to expose its bundle artifacts.

A Modular and Extensible OSGi Shell


A presentation by Lazar Kirchev of SAP on the new, upcoming Equinox shell (OSGi console).
  • Improvements in usability, added telnet and SSH support.
  • The current console cannot correct typos - no backspace, no history, no tab completion.
  • New version based on Apache Gogo
  • Can start with port, e.g. -console 2222, for remote access.
  • Command line editing and tab completion (telnet or remote only), history, telnet, SSH
  • JAAS or public key authentication
  • 4 bundles to autostart
  • Piping, grep, help [command]
  • config.ini entries: osgi.console and osgi.console.ssh , followed by port number
  • Virgo, the OSGi-based enterprise web application server, was used to demonstrate remote access:
    • Two regions: user and kernel
    • Two shells too
    • One region or shell cannot access the other - security
    • Configuration through ConfigAdmin (2 instances also)
    • By default, its config.ini has telnet and ssh disabled - enable to use
    • Ports 2401 and 2501 - kernel and user
  • Programming for shell, i.e. writing OSGi console commands:
    • console commands are OSGi services with scope and function properties
    • osgi.command.scope : "eclipsecon"
    • osgi.command.function : new String[] { "printfile" }
    • Converters and formatters, e.g. pass bundle id argument, convert to bundle object to be received as actual argument.
    • Install new bundle and start
    • help | grep printfile <- tests that new service is available
  • Available with Juno.
  • "Mostly" backward compatible with 3.7, 3.8.
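Writing a console command per the notes above might look like the sketch below. The osgi.command.scope/function property names and the "eclipsecon"/"printfile" values come from the talk; the class itself and its registration are my own hedged example. A real bundle would register the service from an activator with a live BundleContext, so that line is shown only as a comment.

```java
import java.util.Dictionary;
import java.util.Hashtable;

// Sketch of a Gogo console command published as an OSGi service.
public class PrintFileCommand {
    // Gogo invokes the method whose name matches osgi.command.function.
    public void printfile(String path) {
        System.out.println("would print " + path);
    }

    // Service properties that make this object a console command.
    public static Dictionary<String, Object> commandProperties() {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("osgi.command.scope", "eclipsecon");
        props.put("osgi.command.function", new String[] { "printfile" });
        return props;
    }

    public static void main(String[] args) {
        // In a bundle activator this would be:
        // context.registerService(PrintFileCommand.class,
        //         new PrintFileCommand(), commandProperties());
        System.out.println(commandProperties().get("osgi.command.scope"));
    }
}
```

After installing and starting such a bundle, `help | grep printfile` (as in the demo) confirms the command is available.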
I spend a lot of time in the OSGi console debugging installations and OSGi services, both for our team and when requested by others. This will greatly improve the effectiveness of such debugging. For example, a developer can start with -console [port] and I can debug his runtime using his IP and PuTTY. We can debug prod the same way if we want! Huge!
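The config.ini entries mentioned above might look like this (the port numbers are just examples):

```
osgi.console=2222
osgi.console.ssh=2223
```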

Making Mylyn the Agile Oil, and Glue, for your ALM stack

"The heterogeneous ALM stacks commonly found in enterprises challenge users with a lack of integration." "Mylyn's broad ecosystem of extensions bridges that gap with the popular IDE tooling that provides visibility into projects. These tools are based on the frameworks of the Mylyn sub-projects that cover key ALM concerns."
  • Let's bring everything into the IDE
  • Integration is difficult
  • Mylyn creates an API for consistent integration between Eclipse and task tracking tools

  • Mylyn Framework
    •  Tasks (Bugs, issues, stories, requirements)
    • Context (Activity and artifact tracking, focus)
    • Versions (SCM, change sets, linking)
    • Builds (Releases, continuous integration)
    • Reviews (Task-based code review)
    • Docs (Wiki documentation, transformation)
  • http://eclipse.org/mylyn/

The Future of ALM

A keynote presentation describing recent trends in Application Lifecycle Management (ALM) using a car manufacturing example. With, for example, four dozen software suppliers, we see huge software ecosystem changes. His observations:
  • Taiwan now approaching Germany as the top Eclipse downloader
  • Software development increasing much faster than workforce
  • Henry Ford doubled employee salary, automation next
  • Toyota's just-in-time gave every worker autonomy - able to stop line
  • Empower the people
  • Autonomy, Transparency, Collaboration
  • Software delivery silos: testers, project managers, devs, business analysts, operations
  • Cultural gaps, devs see all devs as rocket scientists, etc.
  • Boeing had delays because of software traceability
  • Handing off to each production stage lost accountability and traceability
  • Long term cost
  • They stopped production to fix traceability
  • Requirements -> development -> testing -> operations
  • Just in time = reduce inventory
  • In development, it's requirements
  • No link between req and ops
  • Toyota proved it's faster not to batch up what's handed off
  • Optimize task batch size = collaboration
  • Task needs to maintain workflow, activity and context
  • Incorporate social stream
  • Mylyn's model: planning, product, user story, task, commit
  • Expose Mylyn workflow
  • Contribution, workflow, automation
  • Gerrit -> Hudson -> review task -> build -> Mylyn workflow -> review -> push = collaboration
  • All involved need to collaborate.

How I Learned to Stop Worrying and Love the Build

"With Hudson driving builds from the top; Git, Gerrit, Maven, and Tycho in the middle; and Mylyn controlling the pieces from the developer's desktop, The Eclipse Foundation provides an impressive stack of technologies for building software."

  • Why I was worrying
    • closed, private build jobs
    • cron jobs, shell scripts, ant scripts
    • unpredictable results

  • Continuous integration
    • e.g., Hudson
  • Common build infrastructure (CBI)

  • Recipe for Success
  • Rules of Engagement
    • Transparency
      • Invite participation
    • Openness
      • Accept participation
    • Meritocracy
      • Earn participation
  • The Four Cs
    • Code (must show up at Eclipse with code)
    • Community
      • End users, Adopters, Developers
      • All are important
    • Cleanliness (from an intellectual property perspective)
      • Where does the code come from
      • Copyright
      • Ownership
      • Licensing
    • Cwality (Quality)
      • Transparent issue tracking, list discussion
      • Reviews (developing community, project is more than just code)
      • Inviting/Accepting participation
      • Diversity

  • ALM "Stack"
  • Build Maturity
    • Modular builds make CI possible
  • Tracking IP
    • Licenses
    • Third-party libraries
    • Developers
    • Contributors + Contributions
  • CI = continuous integration
  • IP = intellectual property

Open Standards and Open Source for the SmartGrid


This presentation was about an application developed for designing electrical grid systems and managing the vast amounts of data generated from smart grid systems.

Company profile: Open Grid Systems:

  • Users of Eclipse
  • Consultancy and software company
  • Model driven focus
  • Tries to use open, cutting edge, technologies

Product: Cimphony:

  • Power systems data viewer in Eclipse
  • 125+ Eclipse RCP plug-ins
  • Very modular
    • Supports headless deployment
    • Some components exposed as web services
    • Can be deployed as smaller applications
    • All OSGi
  • Model driven frameworks
    • Browser
    • Graphical editor using GMaps hosted on Jetty
    • Single Line Diagrammer
  • Handles many data formats used by the many layers of companies in the electrical market
    • Separates structure and format
    • Uses OCL for validation outside of ECore based on profiles
    • Added annotated comments to OCL with human readable error messages
    • QVTO data transformation language
      • User and internally executable transformations on data
      • Transformation registry with auto-discovery
      • Added annotated comments for UI error messages
    • Uses CDO as data store for a very flexible local repository

Challenges in SmartGrid technology:

  • Bi-directional communication between home and electrical companies (distributors/re-sellers)
    • A new trend is people generating their own power; they are no longer just simple consumers
  • Open, but complicated, standards
    • Electrical network models
    • Mathematical functional models
    • Common Information Model (UML)
    • RDF XML
    • Challenge is to establish standards for the data formats
    • Very complicated
  • Large amounts of data
  • Many proprietary power systems have many legacy data formats
    • Problematic when networks need to interact
    • Many companies involved at different levels
  • Smart meters involve real-time analysis and control

Persona Non Grata - Don't forget the users when doing your designs!

"Pinning down some user characteristics using persona development techniques can save you time and money by offering a window into the minds of your potential users. More than that, by naming these individuals and making them actual pseudo-people, you have an easy point of reference you can come back to again and again."
  • Persona-Based Development
  • Traditional Requirements + Use Cases = New Features
  • Users learn things in many different ways
    • Tactile/Visual/Aural/etc
  • Personas help to address this
    • The Inmates are Running the Asylum by Alan Cooper
    • Collection of details about a person
    • A persona is a character in a user story based on a plot inspired by your use case
    • A perspective of a character in a story
  • How do I create a persona?
    • "Assumption-based" approach
    • The process requires some role playing
  • Persona example:
    • Common quote
    • Short description
    • Sample resume with broad picture of his experience
  • Persona is applied to problem by role playing to look at it from the other person's perspective
    • Where would he start
    • What questions does he have
    • What documentation is available
  • Benefits
    • Consistent approach
    • Shared vision - keeps user in mind
    • Everyone can use the personas
  • Use is iterative and applied throughout the development
  • Personas referred to by name
  • User story = Persona+Use case
  • Catalog of personas to share them along with user stories
  • Keep 2-3 personas to prevent things from getting out of hand

Commands in Eclipse: some advanced patterns


This presentation was about the commands framework in Eclipse.  While titled as advanced, I found that most of the patterns were things that I've already come across, and solved.

Commands/Handlers:

  • Command provides enabled state, and any arbitrary state
  • Handler controls a command's enabled state
    • Can be global or local (based on state)
  • Keybindings can point to different handlers based on state
  • Menus and toolbars expose commands to user visually
  • Programmatically done through Eclipse services (not OSGi services)
Patterns:
  • Using a parameter:
    • Static string key/value pair
    • Used to change behaviour of handler
    • Can support other types, via a parameter converter
  • Toggle
    • Handler's duty to change command state
    • HandlerUtil.toggleCommandState()
  • Radio
    • Command needs parameter for state, managed by handler
  • Tool item drop down
    • Usually a command with parameter, filled by menu ID
  • Dynamic sections in menu
    • Command contribution items
  • Operating on selection
    • Use core expressions
    • Use core expressions with a property tester for more complicated tests
  • Control in toolbar
    • Contribution filling in a composite, has to handle orientation, not part of a part
  • Property tester
    • Used for complicated tests done in code
    • Best to try to avoid these since plug-in needs to be loaded for them to work
    • API to cause reevaluation
    • Use handler services to update application state
  • Use services to access commands and handlers from platform; don't try to access them directly.
  • Don't execute commands/handlers directly; instead invoke them via services
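As a sketch of the "operating on selection" pattern above, a handler's enablement can be declared with core expressions in plugin.xml; the command and handler class IDs here are made up for illustration:

```xml
<extension point="org.eclipse.ui.handlers">
   <handler commandId="com.example.commands.doStuff"
            class="com.example.handlers.DoStuffHandler">
      <enabledWhen>
         <!-- Enabled only when every element of the current selection
              adapts to IFile; no handler code runs for this test. -->
         <with variable="selection">
            <iterate ifEmpty="false">
               <adapt type="org.eclipse.core.resources.IFile"/>
            </iterate>
         </with>
      </enabledWhen>
   </handler>
</extension>
```

For tests that can't be expressed declaratively, this is where a property tester plugs in, with the plug-in-loading caveat noted above.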

Commands in E4 and 3 are very similar.

Eclipse 4 meets CDO - Now you see it, and so do they!


This demonstration took the E4 modeled UI and stored it in CDO.  Multiple clients were started and connected to CDO.  When the UI of one instance was updated, all of the other ones followed.

CDO stores the EMF objects.  When an object changed, an event was sent to update the UI as well as the CDO data store.  CDO then forwards the event to other clients of the store, each of which updates its UI with the changes.

By using CDO's temporal functionality, the changes could be reversed and re-applied, and all of the listening UIs would react to the changes.

CDO (Connected Data Objects) can be thought of as a run-time persistence platform for models, and provides many features:

  • Multi-user access
  • Transactions
  • Transparent temporality
  • Parallel evolution
  • Scalability
  • Thread safety
  • Collaboration
  • Data integrity
  • Fault tolerance
  • Offline work

The demo was an impressive display of how flexible E4, EMF, and CDO are, and it opens up a lot of potential use cases for the technology, such as:

  • Workspace in the cloud
  • Preconfigured perspectives
  • Central preference store
  • Kiosk Applications
  • Instant provisioning
  • Pair, or more, programming UIs
  • Code reviewing application
  • Multi-user graphs
  • etc.

Debugging in 2012

This presentation was about Chronon, a proprietary Java debugger.  There is a free trial on their website.

While programs have advanced, debuggers haven't changed very much in the last 10-20 years.  Current debuggers:
  • Assume flow is sequential
  • Have a focus on single threads
  • Designed for short running programs
  • Not ideal for modern, multi-threaded, long running, or cloud applications
Logging is a broken technique for debugging:
  • Tries to predict where the bug is in advance, and the prediction is usually wrong
  • Messes up code
The future of debuggers? Chronon:
  • Records entire execution of program for playback
  • Recording is system, hardware, and platform independent
    • Can record on 64-bit machine and replay on 32-bit or vice versa
    • Can record on 12 core processor then replay on 2 cores
  • Recordings can be shared
    • Tester can record their session and result can be debugged by a developer
  • How does it perform?
    • Latest version, 3, is very optimised
    • Intended for production environment
    • Select what libraries need to be recorded; don't need to record Java API calls and state
  • What can you do?
    • Step forward and backwards in code
    • See full state of the application at a given time
    • Execution path of current method is highlighted, so you don't need to step through it to see what was called
    • Bookmarks for jumping to parts of a recording
    • Post-execution logging
      • Go to line in run log and add logging statement separately from code
      • Log messages can include state
  • Timeline view
    • See history of changes to a variable and method calls
    • Filter based on values or line numbers
    • Replay and see the logs
    • Keeps log messages out of the code
  • Exception log view
    • Jump to where exception was thrown, and debug from there
  • Limitations:
    • External environment isn't available, so you can't make changes and see the results
    • Need to ensure that code in workspace is the same as the recording being replayed
If you've ever had a bug that a tester found and you can't reproduce, Chronon might be the answer.  I've come across a few bugs like this, and I have one in particular which I might try to apply Chronon to.  I'm not sure what the licensing model is, but I think that the debugger itself might be free.

Wednesday, March 28, 2012

Continuous Feedback

  • The Agile Consensus
    • Trustworthy Transparency
    • Reduction of Waste
    • Flow of Value
    • (together, these amount to a focus on flow)
  • Build/Measure/Learn (The Lean Startup)

  • unit tests are good, but we can't rely solely on those
  • exploratory testing is just as important
  • the first person that knows that there's a problem isn't us, it's the user(s)

Harnessing Peer Code Reviews

This presentation was essentially a demo of Gerrit Code Review. Looks like it's great at what it does. But I think it represents too much overhead for anything but large, distributed teams.
  • quality mentoring via peer code reviews
  • gerrit code review
  • scales well
    • 15,000+ users and 17,000+ groups
  • ships with its own servers, so it's ready to go out of the box
What role does Gerrit play:
  • git doesn't provide workflow enforcement
  • git doesn't provide code review facilities

OSGi in the cloud - quo vadis?

  • cloud computing is a form of outsourcing
  • problem: dependability
    • <example of cloud down time>
  • distribution adds complexity and its own failure models
  • modularity in OSGi
    • package dependency = tight coupling
    • services = loose coupling
  • problem: networks are inherently unreliable
  • Remote Service Admin: Mechanism
    • expose this local service
    • import the remote service
  • Topology Manager: Magic
(Eucalyptus cloud)

Goal: when the service you are supplying goes down, be able to swap to an alternative service with as little interruption as possible

Best Practices for (Enterprise) OSGi applications

"Moving traditional Java EE applications to an OSGi stack is intentionally as easy as possible, however there are a number of common mistakes that can make it feel very hard. This session will describe some best practices for developing Enterprise OSGi applications and OSGi bundles, allowing developers to utilise the power of OSGi in a painless way."
  • Enterprise OSGi, first release 2010
  • OSGi for "Enterprise" applications
    • web applications
    • databases
    • managed transactions
    • remoting services
  • in terms of scope, OSGi bundles are like JARs with better isolation
    • good, but Enterprise Applications are rarely single JARs
  • for a long time OSGi had no scope beyond the bundle
How should I use OSGi:
  • well designed Object Oriented code -> modular properties
  • classes being cohesive and loosely coupled
  • avoid tight coupling
    • require-bundle is like casting to an implementation
  • do enough to be cohesive
  • don't do too much in your bundle
    • too much is as bad as too little
  • version everything!
    • package exports can be versioned too
    • imports can declare range of accepted versions
    • semantic versioning
      • specifying a single version means: that version to infinity
    • major/minor/micro versions
  • use services for looser coupling
    • Java lacks a satisfactory way to get implementation objects
    • using new introduces tight coupling
    • OSGi has a service registry
      • services are registered using their API, so clients don't need to _____ them
  • make services substitutable
    • sharing services needs same version of API
  • don't do it all yourself
    • OSGi is powerful, but hard to use
    • Enterprise specifications offer helpful tools
  • accessing services
    • it is difficult to use OSGi services properly
    • several OSGi dependency injection containers
      • Blueprint & DS
      • DS is light-weight, great for simple wiring
  • accessing data
    • trying to use traditional access patterns leads to unpleasant hacks
      • there are standard ways to get hold of things in OSGi
      • use these
OSGi specs @ osgi.org
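The versioning advice above translates into MANIFEST.MF headers like these (the package name is hypothetical); the import range says "any 1.2 or later, but not 2.0", which is the semantic-versioning convention for API compatibility:

```
Export-Package: com.example.parser;version="1.2.0"
Import-Package: com.example.parser;version="[1.2,2.0)"
```

Note the contrast with a bare `version="1.2.0"` on an import, which as the talk warned means "1.2.0 to infinity".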

Xcore: Ecore Meets Xtext

Xcore is a DSL for describing EMF models, and is an alternative to creating models in Ecore.

Why Xcore?  Ecore's graphical and tree-based editors can be difficult to use.  They require switching between the editors and code, and a separate code generation step.  Furthermore, the graph-based approach is cumbersome when a model is more complicated and contains many relationships.

What does Xcore do?

  • Text-based DSL for Ecore
  • Cool editor (Xtext)
    • Auto-completion
    • Templates
    • etc.
  • When saved, the code is generated, resulting in a more streamlined workflow
  • Custom behaviours can be specified in-place
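For a taste of the DSL, a tiny Xcore model might look like this (my example, not from the talk):

```
package org.example.library

class Book {
  String title
  refers Author author
}

class Author {
  String name
}
```

Saving this file regenerates the model code immediately, with no separate genmodel step.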

JavaFX Past, Present and Future - SWT and Swing


JavaFX is a reboot of the Java UI platform that is lightweight and hardware accelerated to meet tomorrow's needs.

JavaFX is a full featured and modern API, which includes:

  • Data binding
  • FXML declarative markup
  • CSS
  • Tasks
  • Graphics
  • Controls
  • Events
  • Layout
  • Media
  • Browser
  • Charts
  • Swing and SWT integration

JavaFX is scene based.  Graphics are stored in a tree structure, and unlike Swing, there is no painting.  Instead, to do custom drawing, one should use components in leaf nodes of the tree.  Graphics support transformations (e.g., scaling, rotation, etc.), effects (e.g., blur, drop shadow, etc.) and animations.

JavaFX has its own threading model, with one thread for native UI updates.  However, when integrated in SWT via the FXCanvas component, it shares the SWT thread for its events.  The result is more seamless integration.

Eclipse 4: The Path of Least Resistance


Or, why the E4 API is better than the old one.

The 3.x API is fragmented from years of development, with technical debt, too many mechanisms to do the same thing, call chaining to get at things, etc.  This is typical of any far-reaching API; it's very difficult to anticipate what developers will need in the future.

This is why E4 came into being.  The goal: make simple things simple, and hard things possible.


The E4 UI is controlled by a model:
  • Model is persistent and contains everything necessary to render the UI
  • Model is the API; everything it can do, you can do
    • Add listeners to any aspect of the model
  • Dependency injection (DI) replaces listeners
    • Registering the selection as a parameter to a method means that the method will be invoked every time that the selection changes.
The result is that E4 has little to no API, and as a result is extremely flexible.

A few other improvements:

  • No internal APIs are accessible anymore (i.e., internal packages)
  • Down casting is unnecessary

The evidence of how good Eclipse 4.2 is: they were able to implement an Eclipse 3.8 layer to support backwards compatibility with very few changes to E4.

I am looking forward to being able to start using E4 for our applications.  There will be a lot of concepts that are difficult to grasp and use effectively at first, but the flexibility that it offers, especially in terms of being able to customise the UI to support different work-flows, is amazing.

Eclipse code recommenders: Code completion on steroids


The results of a joint research project between several universities on improving the state of code completion were demonstrated.

We are constantly faced with larger and larger APIs, many of which have limited documentation, and we often lack the time and experience to gain insight into how to use them correctly.  Improved code completion can help by using existing patterns found in code to offer suggestions based on the current context.

Auto-method completion:

  • Analyse current method and calculate probability for the use of each method in the list of possibilities

Improving code templates:

  • Templates are very useful, especially with frameworks such as SWT.  However, we don't often take the time to identify when they can be used and write them.
  • Solution: dynamic templates
    • Figure out how a variable was used before and offer suggestions based on that 
    • If you create certain objects, you tend to initialise it the same way over and over again
  • Recommendations for annotations, which will be used extensively in E4
  • Multiple hops (call chains) to get at member for assignment
    • Search API graph and find matching paths
    • Collect stats to improve recommendations in complex APIs
Indirectly improving API  documentation:

  • Documentation gets outdated quickly and mistakes are made (i.e., parameters or conditions not updated)
  • Generated documentation is often obvious and not useful
  • Help users by extracting common usage patterns to help with the correct use of a class or method; leverage existing bodies of code, and provide extended documentation based on this information
  • When extending a class, recommend what methods should be overridden based on previous use

Code snippet search:

  • Search on-line snippet repository and insert directly into code
  • Generate snippet repositories based on existing bodies of code
  • How do I use this variable?
    • Analyse current code to see how it is being used and display suggestions
  • Helpful alternative to find type functionality

Future work:

  • Collect user usage data centrally
  • Users can rate suggestions; then take this into account
  • Identify API problems based on suggestions (e.g., if something always has to be set, but isn't)


Program, thou shalt behave!

In this presentation, a DSL for behaviour-driven development was demonstrated.  Behaviour-driven development is an evolution of test-driven development, where behaviours are features requested by customers.

The DSL provides a simple syntax for customers to specify stakeholder requests and acceptance criteria in natural language.  Developers can then take these requests and implement code to fulfill the specification.  For specific implementation details, developers can use a specification DSL to efficiently specify and implement the specs necessary for stakeholder requests.  Both DSLs generate JUnit test cases that can be evaluated to assess whether the specs or acceptance criteria have been achieved.  The DSLs also provide a means of supplying multiple examples of inputs and expected outputs.

Since we are often developing UI-centric applications, my experience has been that we tend to not use test-driven development, in particular because UI logic is so hard to test.  However, for systems that have been able to adopt a test-first methodology in an Agile environment, I think that DSLs like this will prove to be a useful way to simplify the writing of test cases and improve communication with customers.


I cheated on EMF with RDF. And I may do it again!


This presentation was about the Resource Description Framework (RDF).  The talk wasn't directly related to tooling in Eclipse, but instead about what RDF is and how it is different from EMF.

The world is a complicated place, and EMF helps to make sense of it by providing great tools for interacting with data including generating code and creating rich UIs from a minimal description.  A weakness of EMF is when the domain is more complicated or not fully known.

RDF is an alternative to modeling.  It is best to think of RDF as a semantic graph containing:

  • Subjects (resource)
  • Objects (resource, value, or another subject)
  • Predicates (an association between a subject and an object)
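A triple in this subject/predicate/object shape, written in N-Triples form (the resources are made-up examples; the predicate is borrowed from the FOAF vocabulary):

```
<http://example.org/alice> <http://xmlns.com/foaf/0.1/knows> <http://example.org/bob> .
<http://example.org/alice> <http://xmlns.com/foaf/0.1/name> "Alice" .
```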

A graph in RDF is always right or wrong, and its meaning (or model) is deduced based on what information is available.

RDF graphs can be merged, and sub-properties are used to describe equivalent properties so that a merged graph doesn't contain irrelevant duplicate information.  The result is that the migration between different versions of a graph or between different schemas containing the same types of information can be seamless.

To help tools with the visualisation of different data, RDF has vocabularies that are used as a baseline to specify the intent of the data, such as display labels.  Clients of the data have to be flexible and accept that the information needs to be interpreted before it becomes useful.

The approach that RDF takes is interesting in that it seems to accept that the world is complicated, and that there is no ideal strict representation.  However, a flexible representation facilitates the modeling of anything, and flexible viewers can be used to provide meaning.

Tuesday, March 27, 2012

M2Eclipse: The collaboration of the Maven & Eclipse Platforms

A simple demonstration of the features that exist in M2Eclipse.
  • Maven & Eclipse ~ Water & Oil
  • There should be no need for Alt + Tab-ing between Eclipse and shell
    • Run Configuration
      • Goal: clean install
  • Maven will pick workspace resolution first and the local repository second
  • Dependency Hierarchy -> most useful thing that is often overlooked
  • Archetypes - quickstart
  • M2e-WTP still at Sonatype for now
  • "Run as webby" uses jetty to quickly run a webapp
Follow up:
  • Maven + classpath visibility
    • maven-enforcer-plugin
  • Configurator?
  • Tesla?

3MF: EMF to infinity and beyond


This presentation pointed out a few limitations in the current implementation of EMF and proposed solutions to make it better.  Since I'm not an active user of EMF, the issues weren't relevant to me.  However, it was still interesting to learn more about EMF.

3MF is an experiment to find solutions to these limitations.  The issues and solutions presented include:
  • Support for multiple versions of a model at once
    • Same model from different vendors with different versions used at the same time
    • Eclipse extensions points require that only one version of a plug-in be present
    • EMF registry keys reference a particular model
    • Proposed solutions:
      • Change extension registry reader to support this
      • Or use OSGi services, which already have support for versions
      • Create a versioned registry for use in EMF
  • Generated EMF interfaces have dependencies on the EMF implementation
    • Static member in interface points to implementation
    • This is messy and prevents us from changing the implementation to something else, such as a derived class containing additional implementation
    • Proposed solution:
      • Add method to EObject to get the EPackage, from which the implementation can be accessed.
  • Using EMF with another OSGi run-time
    • EMF has a dependency on and references to Eclipse core.
    • There are checks in place for different environments, but only one test for OSGi
    • Proposed solutions:
      • Repackage for different run-times, but this is too much manual work and has to be done each release
      • Use import package for dependency, but this breaks backwards compatibility
      • Most compatible way is to add a hook to test which run-time is being used
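The proposed versioned registry might look something like this sketch (hypothetical on my part, not 3MF code): entries are keyed by model URI plus version, so two versions of the same model can be registered side by side instead of one overwriting the other.

```java
import java.util.*;

// Hypothetical sketch of a versioned registry: unlike a plain map keyed only
// by model URI, entries here are keyed by URI plus version, so multiple
// versions of a model can coexist in the same registry.
public class VersionedRegistry<T> {
    private final Map<String, T> entries = new HashMap<>();

    private static String key(String uri, String version) {
        return uri + "#" + version;
    }

    public void register(String uri, String version, T model) {
        entries.put(key(uri, version), model);
    }

    public T lookup(String uri, String version) {
        return entries.get(key(uri, version));
    }

    public static void main(String[] args) {
        VersionedRegistry<String> registry = new VersionedRegistry<>();
        registry.register("http://example/bowling", "1.0", "model-v1");
        registry.register("http://example/bowling", "2.0", "model-v2");
        System.out.println(registry.lookup("http://example/bowling", "1.0"));
    }
}
```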

Eclipse SDK's Greatest Hits: The First Ten Years

ElementTree
  • efficient for keeping track of deltas
  • store one full tree + delta
JavaBuilder
  • revisit starting assumptions
  • sometimes optimal solutions are not the best solutions
WorkbenchAdvisor
  • "policy" vs "mechanism"
  • might another application want to do something differently
Keybindings (Command Framework)
  • incrementally adding functionality until a fundamental rethink is required
  • pure model
p2
  • satisfying install dependencies is an NP-complete problem
  • understanding algorithm complexity is important
  • recognize an NP-hard problem
  • if it's a hard problem, someone else has probably already found a solution

A Fresh Look at Graphical Editing


For users, graphs are great!  Diagrams are easy to digest, nice to look at, and they can emphasize important information using connections, layout, and styling.

However, there are disadvantages to the way that graphs are currently done:

  • Implementing a graph using the current frameworks can be a cumbersome task.  As a result, functionality that is essential to the usability (e.g., undo/redo, copy and paste) of a graph will be left out by developers.
  • Certain operations on a graph don't intuitively map to a graphical tool (e.g., when deleting a parent in a tree, should the children be removed, or converted into new trees?).
  • Editing graphs that contain details with text requires switching between the keyboard and mouse which is annoying.
  • As a result of these points, graphs end up displaying the entire model, including details that don't make sense in a graph (e.g., too many links and nodes)
The proposed solution:
  • Have a model and a view.
  • Use an efficient DSL (or other model editor) to edit the model
    • The editor can display all the necessary details in an easy to edit format
    • The editor is a more direct representation of the content, and therefore the operations it supports have a strong mapping between the user and implementation
  • Display one or more graphs generated from the model
    • Support only the manipulations that make sense in the graph (e.g., layout, expanding and collapsing, etc.)

The solution also used a DSL (created using XText) that facilitated the conversion of an instance of a model into a graph and another DSL for manipulating the styling of a graph.

This approach was interesting, especially for cases where a graph is user content driven.  In cases where a graph's content is generated from multiple data sources, this model doesn't seem intuitive to me, nor is it clear how it would handle cases where the user wishes to persist the rendering of a graph for later use.

Application Lifecycle Management: Imperatives to succeed, agility to scale (presented by IBM)

  • great user experience is the sum of great design decisions
  • application lifecycle management (management of flow)
    • across the disciplines of requirements, design, development, build, test...
5 Imperatives for successful ALM:
  • Collaboration
    • comments are in-context of artifact
  • Planning
    • one plan with multiple views
    • bug fixes as part of plan
  •  Lifecycle Traceability
    • linking tasks to requirements to test cases to defects
  • Measuring and steering
  • Continuous improvement
    • tracking retrospectives
    • tweak process "in-flight"

Tycho - still good, bad or ugly?

A presentation by Max Rydahl Andersen of Red Hat describing the experience of migrating JBoss Tools and Developer Studio to Tycho.
  • Migrated from PDE ant build
  • Already had a Tycho-compatible project structure, so they did very little other than modify POMs.
  • They have many, many modules.
  • Can use maven-versions-plugin config per module but found that using one at parent causes updates to all manifests, removing the need to do it by hand.
  • They used target definitions to lock down a repeatable build and not get broken by public p2 repos updates.
  • tycho-source-feature-plugin can generate source feature. A similar one exists for plugins.
  • The migration forced them to better maintain their OSGi artifacts, things that PDE build ignored now produce warnings or errors.
  • Speed is an issue for them which they plan to address by more granular modularization.  Rumor has it there will be some speed increase with 0.15.

R4E Code Reviews for Eclipse


R4E, or Reviews 4 Eclipse, is an Eclipse incubation project that provides tooling for doing code reviews within the Eclipse IDE.  While it is an incubation project, it is already stable enough for everyday use.

R4E supports Mylyn integration and task creation (not sure if this allows for JIRA interaction).  Reviews of EMF models, using an EMF comparison tool were recently added.

Summary of how it works:

  • Configure source repository (currently SVN or Git) connection
  • Setup review tags for common review items
  • Developer:
    • Creates a review and specifies what needs to be reviewed
    • Commits code and links the revision to the review
  • Reviewer:
    • Opens the review and views the source code changes in a difference viewer
    • Annotates differences with comments, if necessary
    • Sends review back, or marks it as complete

All this is viewed from a tree viewer that has support for various filters and selectors.  The property view can be used to inspect and edit the details of the selected review item.

Review data is stored in an XML file (an EMF-persisted model).  To share this within a team, this file must be placed in a shared location, such as a network share or source repository; it wasn't clear how conflicts are managed when multiple users are performing reviews at the same time.

The tool looks promising, and would be worth a try on a small scale before introducing it to a team.  It would be good to see how well this can be used in conjunction with JIRA, which has its own life-cycle management that includes code reviews.


A Gentle Introduction to p2

A presentation by Ian Bull, committer on the p2 project, on why it was needed and some of its artifacts in a product.
  • Old style unzipping to plugins folder
    • No dependency resolution
    • Not provisioning
    • Some black magic
  • p2 manages install, has transactional capabilities and resolver
  • SimpleConfigurator 
    • reads bundles.info
    • installs and sets start level of bundles
    • do not edit!
  • Config.ini 
    • used by OSGi
    • it can be edited but is shared resource
  • p2 folder
    • cache
    • profile registry, aka agent
  • Multiple profiles allows one bundle-pool to manage multiple eclipse installs - bundle-pooling
  • Each profile has one state plus historical states, one of which it can revert to.
  • Current install is most recent profile folder in p2 folder
  • Should use director application to affect a profile install
  • Eclipse installer application exists but rarely used
  • Can't yet read OBR repos.
  • It isn't p2 that provides the poor message for package-uses conflicts but rather the OSGi resolver.
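The profile/state mechanics can be sketched as follows (a toy model of the behaviour described, not p2 code): each install records a new state, older states are kept as history, and a revert re-adds a historical state as the newest one.

```java
import java.util.*;

// Hypothetical sketch of a p2-style profile: a list of states, where the
// last state is current and earlier ones form the revertable history.
public class Profile {
    private final List<Set<String>> states = new ArrayList<>();

    public Profile() {
        states.add(new TreeSet<>()); // initial empty state
    }

    public Set<String> current() {
        return states.get(states.size() - 1);
    }

    // Installing records a new state; the previous state is kept as history.
    public void install(String unit) {
        Set<String> next = new TreeSet<>(current());
        next.add(unit);
        states.add(next);
    }

    // Reverting copies a historical state to the front, preserving history.
    public void revertTo(int stateIndex) {
        states.add(new TreeSet<>(states.get(stateIndex)));
    }

    public int historySize() { return states.size(); }

    public static void main(String[] args) {
        Profile p = new Profile();
        p.install("org.eclipse.rcp");
        p.install("org.eclipse.gef");
        p.revertTo(1); // back to just org.eclipse.rcp
        System.out.println(p.current());
    }
}
```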

GEF: Past, present, and future


The initial implementation of GEF was done under IBM.  Since the 3.0 release, all updates have retained API compatibility.  Since GEF is relied upon by many developers, the API compatibility has been a strong point for GEF.

Several features have been added since this release:

  • Zest, GEF, and Draw2D were separated
  • Bug fixes
  • Support for display of non-visible sections of a container
  • Zest 2 started in 2010
    • Dot 4 Zest
    • Cloudio (word-cloud-like visualization)

However, of the many bugs and feature requests, a lot of them can't be fixed without breaking the API.  To facilitate a platform for resolving these issues, GEF 4 was started in 2011.  GEF 4 will have:

  • API changes and new features
  • A completely re-written precision geometry API
    • Double precision 
    • Geometry spaces for special purposes
    • Intersection and containment tests on all geometries
  • A unification of GEF 4 and Zest 2
  • org.eclipse.gef4 namespace to facilitate applications using the old and new (package name isn't final)

Other topics that will be addressed/looked at in GEF 4:

  • Componentisation
  • Support for the E4 application model
  • Rotation and other transformations
  • B-spline connectors (multiple curves)
  • SWT widgets in graph
  • Multi-touch gestures
  • Revision of command framework to make use of the platform standards
  • Better connection handling
  • API restructuring and clean-up
Having used GEF extensively, I have enjoyed the flexibility of the API and am impressed that it still remains fairly powerful eight years after the 3.0 release.  However, there are aspects of the API that were difficult to grasp and extend, or that required workarounds for practical use (e.g., adapting commands to the platform operation history).  Also, since most graphs don't require extensive editing support, it will be interesting to see how Zest and GEF are integrated into one framework.

The Eclipse 4 Application Platform explained

"The Eclipse 4 Application Platform is the core runtime framework the next generation of the Eclipse SDK is built upon."
How does Eclipse 4.x differ from 3.x architecturally? 
  • The 3.x ui.workbench has been replaced by the 4.x ui.workbench and the Eclipse 4.x App Platform
(Diagram: both the 3.x and 4.x stacks layer PDE and JDT over a ui.workbench and Equinox; the 4.x stack adds the Eclipse 4.x App Platform between its workbench and Equinox.)
  • 90% similar code base between Eclipse 3.x and 4.x SDK
The Eclipse 4.x Application Platform is composed of the following:
  • Application Services
  • Dependency Injection
  • Workbench model
Programming is now done with POJOs and annotations
  • This means that there is no such thing as a View or an Editor anymore, just Parts.
What can be injected:
  • IEclipseContext-hierarchy
  • preferences
  • OSGI stuff
 Application model comparable to DOM
  • allows runtime modification
Resources should come from an IResourcePool
  • manages disposal
Locale support
  • @Translation replaces resource bundles / NLS. Can be externalized, he used Google translation service.
Dependency Injection
  • DI of OSGi service does not yet handle dynamics.
  • DI does make debugging more difficult...trade-off for the advantages of DI
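The injection style can be illustrated with a toy injector (my own stand-in @Inject annotation and context map, not the real e4 injector): a context maps types to values, and any annotated field on a plain object gets filled in.

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;
import java.util.*;

// Hypothetical sketch of context-based injection, not the e4 implementation.
public class MiniInjector {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Inject {}

    private final Map<Class<?>, Object> context = new HashMap<>();

    public <T> void set(Class<T> type, T value) {
        context.put(type, value);
    }

    // Fills every @Inject field of a POJO from the context by field type.
    public void inject(Object part) {
        for (Field field : part.getClass().getDeclaredFields()) {
            if (field.isAnnotationPresent(Inject.class)) {
                field.setAccessible(true);
                try {
                    field.set(part, context.get(field.getType()));
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }

    // A POJO "part": no View or Editor base class, just annotated fields.
    public static class GreetingPart {
        @Inject String greeting;
    }

    public static void main(String[] args) {
        MiniInjector injector = new MiniInjector();
        injector.set(String.class, "hello e4");
        GreetingPart part = new GreetingPart();
        injector.inject(part);
        System.out.println(part.greeting);
    }
}
```

In e4 the context is the hierarchical IEclipseContext, but the part stays a POJO either way.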

Get Ready to Fight your Technical Debt, with Tycho, Sonar, and Jacoco

A discussion of the benefits of using Sonar and Jacoco in your build process.
  • Hudson - continuous integration
  • Tycho - builds Eclipse plugins using Maven
  • Sonar - code quality and metrics reporting. Suitable for devs and manager. Install as a Hudson plugin referencing a sonar server.
  • Jacoco - code coverage
    • jacoco-maven-plugin 
    • can use by setting JVM arg 
    • adds Code Coverage section to Sonar report 
  • A bug exists to allow use of Jacoco in Hudson without Sonar
  • Much easier to set up than emma or eclemma.
  • We really should try this.

CSS and E4 Application Platform

One of the main goals of E4 when it was started was to make writing plug-ins easy and to provide better control over the look and feel of applications.  The approach to achieve the latter is to use CSS to provide the look and feel of an application, and to separate UI design from implementation details.

SWT is a good candidate for CSS styling since there is a strong correlation between SWT and HTML:
  • SWT interfaces are hierarchical, with components nested within other components, just like HTML
  • Components have classes which can be used as selectors, just like mark-up tags in HTML
SWT components can be selected in CSS using the following:
  • Based on ID and class, which can be defaults or specified programmatically
  • Using compound selectors
  • Child selectors (e.g., "all text widgets within a particular composite")
  • Descendant selector
  • Attributes (exact and partial value)
  • Pseudo classes, such as dialogs or windows
A few utilities are available to help with the development of CSS for an application:
  • CSS Spy, similar to Plug-in spy, can be used to test selectors, navigate all UI elements in a tree, see the effective CSS for a widget, and modify the styling of a widget on the fly
  • CSS Scratch Pad is used for writing new CSS rules on the fly and seeing the result with a single click.
There is support for custom properties used in CSS, and extension points are available for further customisations.  IDs can be set on widgets, then easily referred to within CSS without having to specify a complicated selector.
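The descendant-selector idea can be sketched in plain Java (a toy matcher of mine, not the e4 CSS engine): walk up the widget tree, consuming selector tokens as matching ancestors are found.

```java
import java.util.*;

// Hypothetical sketch of widget styling: widgets form a tree (like SWT or
// HTML), and a descendant selector such as "Shell Text" matches a widget
// whose class is the last token and whose ancestors match the earlier ones.
public class WidgetTree {
    public static class Widget {
        final String cssClass;
        final Widget parent;
        final Map<String, String> style = new HashMap<>();

        public Widget(String cssClass, Widget parent) {
            this.cssClass = cssClass;
            this.parent = parent;
        }
    }

    // Matches a whitespace-separated descendant selector against a widget.
    public static boolean matches(Widget widget, String selector) {
        String[] parts = selector.trim().split("\\s+");
        int i = parts.length - 1;
        if (!widget.cssClass.equals(parts[i])) return false;
        i--;
        Widget ancestor = widget.parent;
        while (i >= 0 && ancestor != null) {
            if (ancestor.cssClass.equals(parts[i])) i--;
            ancestor = ancestor.parent;
        }
        return i < 0;
    }

    public static void main(String[] args) {
        Widget shell = new Widget("Shell", null);
        Widget composite = new Widget("Composite", shell);
        Widget text = new Widget("Text", composite);
        if (matches(text, "Shell Text")) {
            text.style.put("background-color", "#eeeeee");
        }
        System.out.println(text.style);
    }
}
```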

A few things to be careful with in the current implementation:
  • Certain widgets do things differently, so different approaches to styling are necessary (e.g., background modes)
  • Have to be careful when specifying selectors so that other parts of an application are not styled by accident
  • No border style in CSS
  • No variables or inheritance in CSS (though there are other utilities that can facilitate this)
  • Label providers will need custom properties to be set for styling
  • Only pixels right now
Making good UIs is hard, and I think this is a good approach that will make it easier to experiment with different layouts and styles without having to continuously restart an application to see the effect of UI changes.  Hopefully more improvements are made on the label provider front, as well as support for adding styling to custom components (i.e., custom drawing, such as Draw2d/GEF graphs).

Liberate Your Components with OSGi Services

A presentation by Alasdair Nottingham, an IBM WAS developer describing their migration to OSGi. He described a Modularity Maturity Model covering the spectrum from no modularization through fully modularized. He described their upcoming Liberty profile as highly modular, after approximately 5-7 years work. It uses Declarative Services, Configuration Admin service, and Blueprint (Spring-based DI). He had a couple pointers for anyone making the same transition:
  • Do not use a static class to provide OSGi services. We've heard about this as an unattractive alternative to ServiceTracker use. He didn't describe what he thought should be used.
  • An OSGi bundle is active when its activator#start returns - it may not yet have ready services.
  • No service versioning yet.


The Web Platform Is the Past, Present, and Future

Today's keynote was from Alex Russell of Google, author of the Dojo JavaScript framework and more recently tasked with improving the web as a platform for Google products. He had a good perspective on the topic and presented his observations on the questions that need to be worked on, including:
  • JavaScript is slow
  • No component system
  • No data model
  • Testing web apps difficult
  • Its reach - availability
  • Its language - how rich an API
  • Its contract - how long will it run
He went on to describe the compelling aspects of the web that are in its favor:
  • Web platform common amongst OS's
  • View source - steal code
  • Fault tolerance - error recovery and feedback
  • High level by default - rendering under hood, no paint()
  • Declarative forms
  • Competition & standards
  • Magic web elements, e.g. <a> or anchor tag. Says little about what it will do, is flexible and just works without exposing internals.

What's new in the OSGi Enterprise Release 5.0


The theme of this release is the application and the management of an application.

  • OSGi R5 available at the end of June 2012
  • New application support services:
    • Subsystem service
      • Aggregation of resources: 
        • feature (no scoping, all shared)
        • composite (imports and exports)
        • application (only imports)
      • Declarative scoping and sharing policies
      • Dependencies can be on external resources
      • Subsystem archive: *.esa
    • Repository service
      • Local or remote repositories and resources
      • Can be used during resolve operations (resolver service)
      • Accesses OSGi Bundle Repository (OBR)
      • Implementations can be based on other repository types (e.g., maven, p2, etc.)
    • Resolver service
      • Find dependencies for provisioning
      • Perform 'what if' scenarios to test for things like conflicts between different versions of the same bundle/package
  • Other new features:
    • Service loader mediator to mediate between Java service loader and OSGi service loader.  Supports plug-ins using Java service loader without having to change code.
    • Common namespaces to define capability categories
      • Services (provide capability)
      • Extenders (require capability)
      • Contracts (states the need of a capability)
    • Improved JMX for remote management
      • Bundle wiring API
      • Several API updates
    • Configuration administration
      • Transactional, coordinated updates of bundles
      • Multiple bundles can share the same configuration
    • Build-time annotations that can be used to generate component declarations.  Useful for better build tools.
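The "what if" idea behind the resolver service can be sketched like this (a toy conflict check of mine, not the OSGi resolver): given the packages each candidate bundle would provide, flag any package offered at two different versions before actually provisioning anything.

```java
import java.util.*;

// Hypothetical sketch of a resolver "what if" check: report packages that
// two candidate bundles would provide at different versions.
public class WhatIfResolver {
    // bundles: bundle name -> (provided package -> package version)
    public static List<String> conflicts(Map<String, Map<String, String>> bundles) {
        Map<String, String> seen = new HashMap<>();   // package -> version
        List<String> conflicts = new ArrayList<>();
        for (Map.Entry<String, Map<String, String>> bundle : bundles.entrySet()) {
            for (Map.Entry<String, String> pkg : bundle.getValue().entrySet()) {
                String previous = seen.putIfAbsent(pkg.getKey(), pkg.getValue());
                if (previous != null && !previous.equals(pkg.getValue())) {
                    conflicts.add(pkg.getKey());
                }
            }
        }
        return conflicts;
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> candidates = new LinkedHashMap<>();
        candidates.put("bundleA", Map.of("org.example.util", "1.0"));
        candidates.put("bundleB", Map.of("org.example.util", "2.0"));
        System.out.println(conflicts(candidates));
    }
}
```

The real resolver handles ranges, uses-constraints, and transitive dependencies, but the dry-run shape is the same: compute the outcome without mutating the installation.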
Many of these improvements won't be visible to Eclipse RCP developers, since several of these things have already been developed within Eclipse (e.g., the concept of an application or product, remote repositories, etc.).  However, my hope is that future versions of RCP (i.e., E4) will adopt these features so that both client and server OSGi applications can benefit from a larger pool of tooling.

Continuous delivery: From 248 to 98 to 1

This presentation was about Atlassian's experience with adopting an Agile development life cycle.

When they started the process they wondered about how they compared to other companies out there.  Were they behind the pack?  Could they improve their pace and process to be better?

Their goal was to be able to release more often to reap all of the well known benefits of the Agile process.  To get there they took incremental steps, with the ultimate goal of being able to have a deployed product every day.  To chart their progress, they created a graph to be visually motivated by the improvements.

Initial approach:
  • Identify the hard parts of their deployment and find ways to do them more often via automation
  • Measure each aspect of their process and remove waste
  • Evaluate the entire system when looking for optimisations
  • Implement tooling to monitor processes involved with deployment
  • To lower costs, build their own rack for deployments instead of relying on relatively expensive third party services
How they changed their process and culture:
  • Couple the technical details with the products
  • Decouple the releases of components.  Since most of their applications were web-based, this was easier to do, since it just meant a new deployment on a server somewhere
  • Focus on building things in smaller chunks so that they can be released faster and incrementally
  • Reduce branching of products for new or major features; instead they used hidden features that could be enabled as experimental components for testing or beta testing
What they learned about their architectural decisions when adopting Agile:

  • It's better to stay away from the mega-application that does everything, and instead focus on smaller components that can work together
  • Make new features opt-in and backwards compatible so that clients can adopt them when they're ready, and go back if things don't work the way they expected
  • Have the ability to host plug-ins remotely so that they can be upgraded separately without breaking other things
With Agile, they were able to start a program where people could pay for early beta versions.  This gave them the benefit of both early feedback and an early revenue stream.  They found that users were happy to be able to get access to the new features instead of having to wait months for a release.

Monday, March 26, 2012

Building Eclipse plugins and RCP applications with Tycho

An entry-level session but I was able to pick up some tidbits. I also now believe we're using it correctly.

Some highlights:
  • Tycho is a p2 artifact/metadata consumer and producer.
  • Maven build frontend with a p2 repository backend.
  • 0.14.1 latest.
  • We created an e4 application, modified the wizard-provided AboutHandler, converted to Maven project with package type "eclipse-plugin", added test fragment with package type "eclipse-test-plugin" and AboutHandlerTest.
  • Tycho runs all tests in OSGi runtime, same as "Run JUnit Plugin Test".
  • Test class names must have suffix "Test"! Otherwise ignored.
  • maven-release plugin not well supported - too many assumptions would have to be made about SCM and its structure so there are no plans to implement it. There's a tycho BOF on Wednesday night where I hope to hear how others are, if at all, releasing.
  • component.xml set with version 0.0.0 doesn't need updating! We currently use an ant script to keep it updated unnecessarily I believe.
  • We created a .product file and simply added it to our tychodemo.repository project, next to its category.xml.
  • Add start level for plugins on product's configuration tab. This is necessary when running through maven though it's automatic when running product through PDE (launch product).
    • add org.eclipse.e4.rcp and org.eclipse.rcp features.
    • product editor "add required" and "validate" button useful
  • Repository build then builds site and includes unevaluated .product file. Need to use tycho-p2-director-plugin with <goal>materialize-products</goal>
  • Features' build.properties - add "root=somethingToAddToRootOfApplication" to add, e.g. JRE or other resources at root of built product.
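For the director step, the POM configuration presumably looks something like the following sketch (the version matches the 0.14.1 mentioned above; the exact placement in the repository module's POM is an assumption on my part):

```xml
<!-- Sketch only: version and module placement are assumptions. -->
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-p2-director-plugin</artifactId>
  <version>0.14.1</version>
  <executions>
    <execution>
      <id>materialize-products</id>
      <goals>
        <goal>materialize-products</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```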

What every Eclipse Developer should know about EMF

A good, albeit at times a little rushed, introduction to EMF, the Eclipse Modeling Framework. Concentrated on ECore through generated Java. Other modeling tools can do all kinds of transformations.

Some highlights:
  • Some history - 10 years old, IBM MOF was thought to be too complicated so a subset was spawned as EMF.
  • Generates Java classes and adapters.
  • Can be used when data is to be stored or model displayed/modified in a UI, i.e. data binding.
  • Has a small development cost with large return, i.e. a few lines of ECore model can generate many times that number in lines of Java code (assuming you know how to use it).
  • Tutorial example was a modeled bowling league, players, tournaments and games. Each was modeled and graphed to show relationships via UML.
  • ECore model: EPackage, EClass, EAttribute, EReference
  • Edapt - facilitates EMF data migration, i.e. migrate existing data to accommodate a changed ECore model.
  • Never modify generated factory classes! Always modify the ECore model and regenerate the Java classes.
  • Rich API to use model elements with or without instances, e.g.
    • BowlingFactory.eINSTANCE.createMatchup()
    • assertEquals(matchup, game.eContainer());
    • EMF utilities: EcoreUtil#getRoot, #copy, etc
  • All modeled objects must be contained in some container; otherwise they are considered removed. A modeled instance can be queried for its container even though it may not have a getter for it.
  • ECore Commands allow programmatic changing of the model instances but use an undo/redo stack. Useful for test cases.
  • http://eclipsesource.com/emftutorial will soon have an updated version of the tutorial.
  • In my opinion, this could work very well for the right problem but could also become another layer to maintain and train up on if overused.
  • Eclipse 4 will include an EMF-modeled Workbench which we probably should get to know.
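The generated-factory pattern behind calls like BowlingFactory.eINSTANCE.createMatchup() boils down to a singleton factory that hides the implementation classes from clients. Here is a toy plain-Java version of mine, not real EMF output:

```java
// Hypothetical sketch of the generated-factory pattern (not real EMF code):
// clients create model objects through a shared factory instance rather than
// calling constructors, so the implementation class can change freely.
public class BowlingFactorySketch {
    public interface Matchup {
        String describe();
    }

    // The shared singleton instance, mirroring EMF's eINSTANCE convention.
    public static final BowlingFactorySketch eINSTANCE = new BowlingFactorySketch();

    private BowlingFactorySketch() {}

    public Matchup createMatchup() {
        // The concrete class is an implementation detail hidden from clients.
        return () -> "matchup";
    }

    public static void main(String[] args) {
        Matchup matchup = BowlingFactorySketch.eINSTANCE.createMatchup();
        System.out.println(matchup.describe());
    }
}
```

This is also why the 3MF talk's complaint about interfaces referencing implementations matters: in generated EMF code, that eINSTANCE field on the interface pins the implementation.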

Creating tools to simplify your application development: The Chrome App example


This tutorial demonstrated using Acceleo to generate JavaScript from an EMF model instance.  A simple meta-model was created for a simple web application.  Acceleo modules (text generators) were created to convert an instance of the meta-model into JavaScript code for a web application.

Acceleo was also used to generate an Eclipse editor to be used by end-users to create instances of the meta-model and generate the web application.

While the example only generated JavaScript, Acceleo is a model to text (M2T) generator, and can output to any text-based format (e.g., Java, XML, etc.)  In addition to instances of meta-models, Acceleo can transform "regular" models too.

Acceleo has an impressive tool set for developing modules and editors, including:
  • a rich editor that includes excellent content assist, navigation, refactoring, etc.
  • a debugger for stepping through generation steps
  • a profiler for improving performance when generating large models or performing complicated transformations
  • a view for identifying which lines in a generator generated a particular section of output text
  • generation of customisable editors that can be deployed to users (or other developers) for editing an instance of your model and generating text or code using your modules
Acceleo also has the ability to specify regions of generated text that shouldn't be replaced when regenerated.  This is necessary for those special cases where a generic generator simply won't work.
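The protected-region mechanism can be sketched like this (a toy regenerator with made-up markers, not Acceleo's actual syntax): user-written text between the markers in the previous output is carried into the freshly generated output instead of being replaced.

```java
// Hypothetical sketch of "protected regions" during regeneration.
public class ProtectedRegions {
    public static final String BEGIN = "// BEGIN-USER-CODE";
    public static final String END = "// END-USER-CODE";

    // Regenerates from the template, preserving user code found between the
    // markers in the previous output.
    public static String regenerate(String template, String previous) {
        String kept = extract(previous);
        if (kept.isEmpty()) return template;
        return template.replace(BEGIN + "\n" + END,
                                BEGIN + "\n" + kept + "\n" + END);
    }

    private static String extract(String text) {
        int start = text.indexOf(BEGIN);
        int end = text.indexOf(END);
        if (start < 0 || end < 0) return "";
        return text.substring(start + BEGIN.length(), end).trim();
    }

    public static void main(String[] args) {
        String template = "class Generated {\n" + BEGIN + "\n" + END + "\n}";
        String previous = "class Generated {\n" + BEGIN
                + "\nint custom = 42;\n" + END + "\n}";
        System.out.println(regenerate(template, previous));
    }
}
```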

The meta-model is a very interesting approach for developing solutions to problems that tend to repeat themselves, such as models for common components such as table viewers, DTOs, or POJOs (with or without annotations).