
Lightbend Lagom Microservices with Java 8

There’s currently a lot of buzz around the notion of composing complex systems from reactive microservices. It’s certainly not an entirely new idea; I’ve been building solutions along these design principles for many years, but now quite mature tooling based on RESTful architectures has emerged in the shape of Spring Boot and Typesafe Play.

Lately I’ve been doing a lot of work with Spring Boot. I love it, but it’s not reactive out of the box. Play was great, but the Scala version was always the superior option, and while I personally love Scala, many of my clients don’t like developers using it on their projects as they worry about downstream support and the higher cost of Scala developers.

Lagom Microservice Architecture


Well, it seems that Typesafe, now re-branded as Lightbend, also worried about this management mistrust of Scala and decided that they needed to create a Java 8 based API to replace Play so that they could capture the Java market. It’s interesting to note that there is still Scala under the covers: it is composed of some of their mature projects, such as Akka, to do the donkey-work, but exposes reactive features through CompletableFuture and lambdas.
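I haven’t written any Lagom code yet, but the async composition style its Java 8 API builds on can be illustrated with nothing but plain JDK classes — note this is an illustrative sketch of the style, not actual Lagom API:

```java
import java.util.concurrent.CompletableFuture;

// Plain JDK 8 sketch of the composition style Lagom exposes: each stage
// runs when the previous one completes, without blocking a thread in between.
public class AsyncDemo {

    static CompletableFuture<String> fetchGreeting(String name) {
        // Stand-in for a non-blocking service call.
        return CompletableFuture.supplyAsync(() -> "Hello, " + name);
    }

    public static void main(String[] args) {
        String result = fetchGreeting("Lagom")
                .thenApply(String::toUpperCase)           // transform the first result
                .thenCombine(fetchGreeting("Java 8"),     // join a second async call
                        (a, b) -> a + " / " + b)
                .join();                                  // block only at the very edge
        System.out.println(result);                       // prints HELLO, LAGOM / Hello, Java 8
    }
}
```

The caller only ever deals in futures and lambdas; nothing blocks until the final `join()`.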

I’ve only watched the introductory videos and skimmed the documentation so far but it looks promising and I’m certainly going to devote some time to learning much more about this product. I’ll post more thoughts as I go.


Spring Boot with JSF/Primefaces

I’m a big fan of Spring Boot. If you need to bootstrap a Java Enterprise application in a short space of time, it’s an excellent project to get you moving: within a few lines you can expose or consume RESTful services, initiate an embedded Tomcat servlet container and auto-configure your way into the whole Spring ecosystem with barely a line of XML in sight (yes, there’s still the Maven POM).

This leads us to a new application design paradigm: that of the microservice. Each web app is a self-contained jar with its own embedded web server and its own configuration, running in its own JVM instance and sitting behind a web proxy, presenting a suite or cloud of services to the outside world seemingly as a single product.

You can of course use Spring Web with Thymeleaf or simple JSP with AngularJS but I wanted to demonstrate how to build a web interface to such a service using JSF and Primefaces and have posted a fully functional demo application to GitHub:

This is an Eclipse Maven Spring Boot project; get it built and running, navigate to http://localhost:9090 and have a look at its behaviour. The data is also exposed as a RESTful service using Spring Web’s @RestController at http://localhost:9090/service/books/ and individual items at http://localhost:9090/service/book/0/.
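The REST side takes very little code. A sketch of what a @RestController behind those URLs might look like — the class, entity and repository names here are assumptions, not the actual demo code:

```java
@RestController
public class BookService {

    @Autowired
    private BookRepository repository; // Spring Data JPA repository

    // Backs http://localhost:9090/service/books/
    @RequestMapping("/service/books/")
    public Iterable<Book> books() {
        return repository.findAll();
    }

    // Backs http://localhost:9090/service/book/{id}/
    @RequestMapping("/service/book/{id}/")
    public Book book(@PathVariable Long id) {
        return repository.findOne(id);
    }
}
```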

The embedded Tomcat container has very little functionality of its own; JSF capabilities are bestowed by adding the appropriate JSF and Primefaces dependencies to the Maven POM, followed by additional annotated bean definitions to augment the Spring Boot auto-configuration. Although Tomcat is using Java config, it still requires the presence of a web.xml file and a faces-config.xml file in the webapp/WEB-INF directory. Any configuration that would normally be done in the web.xml file is done in a class that implements ServletContextInitializer, such as our class Initializer in the example, which must in turn be announced to the Spring Boot SpringApplicationBuilder during bootstrap.
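As a sketch, an Initializer along these lines carries what web.xml would otherwise declare — the particular context parameters shown are assumptions; use whatever your JSF setup needs:

```java
// Carries the configuration web.xml would normally hold; announced to
// Spring Boot's SpringApplicationBuilder at bootstrap.
public class Initializer implements ServletContextInitializer {

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        // Context parameters that would normally live in web.xml
        servletContext.setInitParameter("javax.faces.PROJECT_STAGE", "Development");
        servletContext.setInitParameter("com.sun.faces.forceLoadConfiguration", "true");
    }
}
```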

We now have two different types of container active: a Spring container and a J2EE servlet container (Tomcat). It’s really better to keep them separate, both in your mind and technically, but if we are going to transfer values between them we need some sort of interop bridge. Spring provides this through expression language via a SpringBeanFacesELResolver, which is referenced in your faces-config.xml and injected by Spring for you. Now you can access Spring beans in your managed faces bean, passing values both ways. See class BookModel.
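The bridge is a one-line entry in faces-config.xml; Spring’s resolver delegates JSF EL name lookups to the Spring container:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<faces-config xmlns="http://xmlns.jcp.org/xml/ns/javaee"
              version="2.2">
    <application>
        <!-- Lets JSF EL expressions resolve Spring-managed beans -->
        <el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver>
    </application>
</faces-config>
```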

Everything else in the project is simply a demonstration of the power of Spring Boot and how easy it is to add features thanks to auto configuration. The project includes:

  • Spring Data JPA
  • Spring Rest
  • Spring Actuator
  • Jackson Repository Populators
  • URL Rewriting

Details of all of these can be found in the Spring Boot documentation, except the last item, URL rewriting. The one thing I hate about JSF is the ugly URL schema it leaves you with. This can be remedied by adding OCPsoft’s Rewrite project to your project. I’m bringing this up now because once you’ve added the dependency to your POM, the configuration requires a little special Spring Boot voodoo too.

If you recall, I mentioned that the web.xml file is not loaded by the embedded Tomcat server and that all configuration is done through Java config; well, that includes activating the RewriteFilter bean. This is performed in the class that extends SpringBootServletInitializer:
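A sketch of that registration — the bean method name and the set of dispatcher types here are assumptions:

```java
@Bean
public FilterRegistrationBean rewriteFilter() {
    // Register OCPsoft's RewriteFilter with the embedded Tomcat,
    // doing what its web-fragment would normally do in a standalone container.
    FilterRegistrationBean registration =
            new FilterRegistrationBean(new RewriteFilter());
    registration.addUrlPatterns("/*");
    registration.setDispatcherTypes(
            DispatcherType.REQUEST, DispatcherType.FORWARD,
            DispatcherType.ASYNC, DispatcherType.ERROR);
    return registration;
}
```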

From here on you just need to define an HttpConfigurationProvider and annotate it with the @RewriteConfiguration annotation as per the documentation.
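For example — the rule and view paths below are illustrative assumptions, not the demo project’s actual mappings:

```java
@RewriteConfiguration
public class RewriteConfig extends HttpConfigurationProvider {

    @Override
    public Configuration getConfiguration(ServletContext context) {
        // Map a clean URL onto the ugly JSF view name
        return ConfigurationBuilder.begin()
                .addRule(Join.path("/books").to("/books.xhtml"));
    }

    @Override
    public int priority() {
        return 10;
    }
}
```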

See also: The AngularFaces Project
See also: JSF on Spring Boot (Understanding Scopes)


Accumulo on Hortonworks Sandbox

Accumulo is not included in the Ambari installation, so it has to be installed manually. If you want to do some development with it, the best place to get an instance up and running quickly is the Hortonworks Sandbox; however, due to differences in installation procedures, getting this working isn’t quite as straightforward as it could be.

Here are some notes on the procedure to help you on your way.


Download the Hortonworks Sandbox and start it in your virtual machine manager; I’m using VirtualBox here. Networking settings are quite important too: I set this to NAT so that the VM runs on its own network and the management web pages are accessed on your host via the forwarded address. This keeps everything simple, and external repos can be accessed through the host internet connection.

Go to the Ambari management page (login is admin/admin) and verify that the processes we need are up and running: HDFS, MapReduce2, YARN and ZooKeeper. I also like to start the Ambari Metrics collector so that I can see the activity, but it’s not required.


  • Log in via ssh to the sandbox, login root/hadoop.
  • Accumulo is installed under /usr/hdp/ (version numbers may differ).
  • Copy an example configuration set to the root config directory; select a configuration according to your memory constraints, but it should always be a standalone set. e.g.
  • Edit the file and set the following variables accordingly.
    Uncomment the line which reads:
  • Edit the file accumulo-site.xml and modify the value tags as below to hadoop; this is very important so that Accumulo can interact with ZooKeeper.
  • Now we have to change the accumulo user properties: edit /etc/passwd and change:
    to the following. Note that group 501 in this case is the hadoop group.
  • Create the home directory (you need to su - hdfs to run the hadoop commands)
  • Change permissions and ownership
  • Now you are ready to initialize accumulo, this step writes the configuration information into zookeeper.
  • Enter an instance name, which can be anything you like, and the secret, which must be hadoop.
  • You are now ready to start accumulo
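Pulled together, the steps above look something like the following command sequence. All paths, the HDP version number and the HDFS home directory are assumptions; check them against your sandbox:

```shell
# Copy a standalone example configuration into the live conf directory
cp /usr/hdp/2.2.4.2-2/accumulo/conf/examples/512MB/standalone/* \
   /usr/hdp/2.2.4.2-2/accumulo/conf/

# Create accumulo's HDFS home directory as the hdfs superuser,
# then hand it over to the accumulo user (group hadoop)
su - hdfs -c "hadoop fs -mkdir /user/accumulo"
su - hdfs -c "hadoop fs -chmod 750 /user/accumulo"
su - hdfs -c "hadoop fs -chown accumulo:hadoop /user/accumulo"

# Initialise (writes the instance name and secret into ZooKeeper), then start
su - accumulo -c "accumulo init"
su - accumulo -c "/usr/hdp/2.2.4.2-2/accumulo/bin/start-all.sh"
```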

Congratulations, you have successfully installed and started accumulo. You can now monitor your instance at

Accumulo Overview Page



If you see this exception during start-up:

This indicates that Accumulo doesn’t have sufficient permissions to write into ZooKeeper. Check that you have configured all the file and user permissions correctly, but above all verify that the secret in the accumulo-site.xml config file matches the value you entered at the init stage. It is perfectly safe to set this secret value again using:

You will be prompted for the original value and the new value that will get inserted into zookeeper.


Git For Windows

Source control is at the heart of every developer’s workflow, and the chances are that in 2015 your SCM of choice will be git. Nevertheless, while most of us develop on the Windows platform, somehow we ended up with git releases lagging behind the Linux versions quite substantially; you can see this on the official site, where the download is marooned at 1.9.5 whilst the Linux version powers ahead on version 2.4.1.

Fortunately help is at hand in the shape of Git for Windows, which not only keeps up with the Linux version but comes with a very nice 64-bit bash emulator, and as I understand it is due to replace the original Windows git as the de facto git release for the Windows platform in a matter of months.

So far I’m loving it.

IntelliJ users: IntelliJ uses the git installed on your computer as opposed to its own, and will be expecting the classic location. Go to Settings/Version Control/Git and change the Path to C:\Program Files\Git\cmd\git.exe



Getting Started with OpenCL

Probably the most amazing thing about OpenCL is its heterogeneous nature. An OpenCL kernel can run on just about any compute device in your computer: the CPU, the GPU or even an FPGA, and it can all be orchestrated from the host with ease.

As you may be aware, 3rd generation Intel Core (and later) processors include an integrated graphics component, and in the HD 4000 and later chips this compute power is not to be sniffed at and is certainly worth exploiting; however, it’s not entirely clear how you access it. If, like me, you have a discrete graphics card, you may be wondering as I did why the Intel GPU is not accessible.

Here’s what to do.

Boot your computer into the BIOS settings and look for a section probably entitled something like “System Agent”; under this menu:

  • “Initiate Graphic Adapter” – set this to PCIe/PCI
  • “iGPU Multi-Monitor” – set this to Enabled

Save your settings and re-boot.

Now visit the Intel website and download the appropriate graphics driver for your CPU, install it and re-boot once more. When you open your device panel you can see the integrated Intel graphics device like this:

Graphics Device List

We’re ready to start programming.

Next you are going to need an OpenCL SDK so that you have the headers required to build an OpenCL program (the drivers already include a runtime). It doesn’t really matter whose you use; in my case I downloaded the Nvidia tools, which are part of the CUDA SDK. Currently the download is here but may move at a later date.

Once installed you will need to set up your project to access the SDK. In Visual Studio 2013 (12 is the same), select the property manager tab and select your build target; in my case I select “Debug | x64”, then double-click “Microsoft.Cpp.x64.user” so that you only modify properties for this project. Now that you have the property dialog open, select “VC++ Directories” and enter:

  • Include Directories – $(CUDA_PATH)\include;$(IncludePath)
  • Library Directories – $(CUDA_PATH)\lib\x64;$(LibraryPath)

The CUDA installer has conveniently created an environment variable called CUDA_PATH to make this nice and clean.

Now go to the “Linker” then “General” section and update:

  • Additional Library Directories – $(CUDA_LIB_PATH);%(AdditionalLibraryDirectories)

Then “Linker”, “Input” and update:

  • Additional Dependencies – OpenCL.lib;%(AdditionalDependencies)

Hit OK and we’re ready to go.

This is a little program to look for compute devices on your system and print out their capabilities:
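The original listing didn’t survive here, but a minimal device-enumeration program along the same lines looks like this — the fixed capacity of 8 platforms/devices is an arbitrary simplification:

```c
#include <stdio.h>
#include <CL/cl.h>

/* Enumerate every OpenCL platform, then print each device's name,
   type and compute-unit count. */
int main(void)
{
    cl_uint numPlatforms = 0;
    cl_platform_id platforms[8];

    clGetPlatformIDs(8, platforms, &numPlatforms);

    for (cl_uint p = 0; p < numPlatforms; p++) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(pname), pname, NULL);
        printf("Platform: %s\n", pname);

        cl_uint numDevices = 0;
        cl_device_id devices[8];
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                       8, devices, &numDevices);

        for (cl_uint d = 0; d < numDevices; d++) {
            char dname[256];
            cl_device_type type;
            cl_uint units;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(dname), dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                            sizeof(type), &type, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(units), &units, NULL);
            printf("  Device: %s (%s, %u compute units)\n", dname,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU", units);
        }
    }
    return 0;
}
```

With both the discrete card and the Intel iGPU enabled you should see each vendor’s platform listed with its devices underneath.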

This gives us output like this:



21 Years Of Software

Oakdale Software is twenty-one years old!

It seems like only yesterday I was filling in those articles of incorporation and greeting my first clients. How different the world of software was then.

I’ll raise a glass to the next 21 years – I hope they’re as exciting.
