Freelance Java Consultant

Join the Pact

At my last couple of clients we started to use API testing to decouple the individual components. I’ve looked at a number of frameworks that help you apply “Consumer-Driven Contract Testing”. My introduction to CDCs was a blog post by Martin Fowler and another one by Ian Robinson.

The obvious follow-up was looking at ThoughtWorks’ Pacto. Unfortunately, at that time I ran into a language conflict. Furthermore, ThoughtWorks has since decided to suspend work on Pacto and refers to projects like Pact instead.

As one of my clients had a strong Spring basis, we also took a look at Spring Cloud Contract. Although it is a great help for building contracts, it seems to focus the contract on the provider side instead of the consumer: you define stubs for the provider’s behaviour, which the consumers can then use to verify that they are still in line with the expected behaviour.

In the end, both clients decided to use Pact, and I’ll try to show you the steps needed to get this working. The first step is to let the consumer of the API define which pieces of the API are being used. The fluent interface makes it reasonably intuitive to define behaviour.

public class ContractTest {
    @Pact(consumer = "consumer")
    public RequestResponsePact createFragment(PactDslWithProvider builder) {
        return builder
                .given("Known pets")
                .uponReceiving("A request for an unknown pet")
                    .path("/pets/" + UNKNOWN_PET_ID)
                    .method("GET")
                .willRespondWith()
                    .status(404)
                    .body("Pet not found")
                .uponReceiving("A request for a known pet")
                    .path("/pets/" + KNOWN_PET_ID)
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    // the JSON shape below is a simplified sketch of the petstore response
                    .body(LambdaDsl.newJsonBody(a -> {
                        a.stringType("name");
                        a.object("category", c -> c.stringType("name"));
                        a.array("photoUrls", urls -> {});
                        a.array("tags", tags -> tags.object(tag -> tag.stringType("name")));
                    }).build())
                .toPact();
    }
}
To verify that the consumer conforms to this pact, we can write a unit test that calls the client code.

@Test
@PactVerification(value = "petstore_api", fragment = "createFragment")
public void it_should_find_pet_by_id() {
    PetService petService = new PetService(petstoreApi.getUrl());
    // findById is the client method under test (hypothetical name); the mock
    // server started by the rule answers according to the pact fragment above
    assertNotNull(petService.findById(KNOWN_PET_ID));
}

We’ll need to specify which provider we want to cover. We do so by providing a JUnit @Rule as a field.

@Rule
public PactProviderRuleMk2 petstoreApi = new PactProviderRuleMk2("petstore_api", PactSpecVersion.V3, this);

Of course we’ll need to add a dependency for it.
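The snippet below is a sketch of what that dependency could look like; the artifact name matches the pact-jvm consumer JUnit module, but the exact version is an assumption and may differ for your setup:

```xml
<!-- pact-jvm consumer support for JUnit; version is an assumption, check for the latest -->
<dependency>
    <groupId>au.com.dius</groupId>
    <artifactId>pact-jvm-consumer-junit_2.12</artifactId>
    <version>3.5.24</version>
    <scope>test</scope>
</dependency>
```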


And there you have it. This should give you green lights. You have tested your service client by defining the behaviour you expect of the remote API. The next step is to publish this contract to a central repository so all providers can verify that they are, and remain, compatible with their consumers. The creators of Pact provide a dedicated repository, the Pact Broker, that not only accepts and stores these contracts but also makes them browsable, so you can discover interesting dependencies between your services. For a first trial you can easily start one by running this docker-compose config.

version: '3'
services:
  pact-broker:
    image: dius/pact-broker
    ports:
      - "4568:80"
  db:
    image: mysql
    expose:
      - "3306"

There is a Maven plugin available to upload the contracts. Add this to your project to publish the pact as part of your build.
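A sketch of that plugin configuration is shown below; the broker URL matches the docker-compose port above, while the version is an assumption:

```xml
<!-- pact maven plugin; the version and broker URL are assumptions for this sketch -->
<plugin>
    <groupId>au.com.dius</groupId>
    <artifactId>pact-jvm-provider-maven_2.12</artifactId>
    <version>3.5.24</version>
    <configuration>
        <pactBrokerUrl>http://localhost:4568</pactBrokerUrl>
    </configuration>
</plugin>
```

Running `mvn pact:publish` then uploads the pact files generated by the consumer tests to the broker.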


So far, so good. This was all straightforward to figure out and get working. For me the problems started when I tried to verify this contract on the provider’s side. There are a number of Maven plugins available for the provider tests; it’s up to you to find the one most suitable for your situation. I started with the JUnit approach.
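A sketch of the provider-side test dependency, analogous to the consumer one; again the version is an assumption:

```xml
<!-- pact-jvm provider support for JUnit; version is an assumption -->
<dependency>
    <groupId>au.com.dius</groupId>
    <artifactId>pact-jvm-provider-junit_2.12</artifactId>
    <version>3.5.24</version>
    <scope>test</scope>
</dependency>
```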


Having the dependency in place, I could start on my test class. In fact you’re not really writing a test here; you’re defining the behaviour of your service. For the test to be able to verify this behaviour, you will have to make sure there is an API to test against. The first order of business, therefore, is to start your API and make it available at a known location.

private static ConfigurableApplicationContext context;

@BeforeClass
public static void startService() {
    // the pact-test profile is assumed to set server.port=8084
    context = new SpringApplicationBuilder()
            .profiles("pact-test")
            .sources(Application.class, PactTest.class)
            .run();
}

@TestTarget
public final Target target = new HttpTarget(8084);

@AfterClass
public static void kill() {
    context.close();
}

Obviously you’ll need some reference to the central contract repository. We can annotate the class to make it work.

@RunWith(PactRunner.class)
@Provider("petstore_api")
@PactBroker(host = "localhost", port = "4568")
public class PactTest {

All that’s left is to make sure your service acts as if it is in the state the consumer expects. For this example that means it recognises the pet IDs that should and should not be known. The specific “magic” IDs are defined inside the contract and are passed into your state definition as arguments.

@Bean
@Primary
public PetRepository petRepository() {
    return Mockito.mock(PetRepository.class);
}

@State("Known pets")
public void knownPets(Map<String, Object> data) {
    PetRepository petRepository = context.getBean("petRepository", PetRepository.class);
    String knownPetId = data.get("KNOWN_PET_ID").toString();
    // findById and the Pet constructor are hypothetical names;
    // stub the mock so the known pet exists for the provider test
    Mockito.when(petRepository.findById(knownPetId))
           .thenReturn(Optional.of(new Pet(knownPetId)));
}

As you can see, I chose to mock my repositories to make sure I capture as much behaviour as possible. This way I can assert the marshalling, the endpoint mapping and the potential merging of objects.

And there you have it. You can find the code on GitHub.

Dotting the i’s and crossing the t’s for your website

Recently I did an overhaul of my website, which was long overdue. After spending some time creating a new WordPress theme and adding the first content, I moved on to making sure it would show up in Google’s search results. Following the advice given by Google, I changed my theme for the better, with simple and small changes like a consistent but dynamic title on the different pages.

Content with the setup, I let it be and started focusing on content. However, a good friend of mine suggested the Observatory project by Mozilla. Being familiar with OWASP, I was happy to have a tool that would independently scan my website and give advice on points for improvement.

Obviously most of the points were to remove some shortcuts taken during development:

  1. No inline styling; use the stylesheet, that’s what it’s for!
  2. No inline scripts; move them to a separate file.

Others are simple additions to your site that help make the internet a little bit more secure:

  1. Strict protocol usage. If you support `https://`, which you should, make sure all your visitors know this and will move to use only `https://`. Besides configuring WordPress to use the right URL for your site, adding the `Strict-Transport-Security` header to your responses will make sure your visitors won’t forget it.
  2. Don’t allow your site to be wrapped inside a frame. Add the `X-Frame-Options` header to prevent this.
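As a sketch, assuming an Apache server with mod_headers enabled, both headers can be set like this (the max-age value is just a common choice, not a requirement):

```apache
# Enforce HTTPS for a year, including subdomains
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
# Refuse to be embedded in frames on other sites
Header always set X-Frame-Options "DENY"
```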

I moved on to work on the `Content-Security-Policy`, which is

an HTTP header that allows site operators fine-grained control over where resources on their site can be loaded from. The use of this header is the best method to prevent cross-site scripting (XSS) vulnerabilities.

In short, your website should provide a strict list of the external resources that may be loaded by your pages. Building this list of external resources was a simple but tedious exercise. You can even cover those small inline scripts that WordPress automatically adds to your pages by calculating a SHA hash value for each of them.
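A sketch of such a policy, again assuming an Apache setup; the CDN hostname and the hash are placeholders for whatever your own pages actually load:

```apache
# Allow resources from the site itself, scripts additionally from one CDN and
# one specific inline script identified by its SHA-256 hash (both placeholders)
Header always set Content-Security-Policy "default-src 'self'; script-src 'self' https://cdn.example.com 'sha256-<base64-digest-of-inline-script>'"
```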

All went fine until I wanted to enable Subresource Integrity (SRI). This recent addition to the W3C standards allows you to provide an integrity attribute on any resource you’re using. It is specifically useful when you’re using CDNs to provide common resources, but want to make sure your website won’t be compromised when the CDN is. A precondition for this all to work is that the CDN must allow Cross-Origin Resource Sharing (CORS) for that resource. You would think that all third-party script providers would have this enabled by default, as this is kind of their main business, and fortunately most have already enabled it. Hence it was no trouble enabling SRI for Bootstrap or jQuery. My problem started when I wanted to do the same for Google Analytics.
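For a CDN that does send the CORS header, such as the official jQuery CDN, the resulting tag looks roughly like this (the URL is one example version and the digest is a placeholder, not a real hash):

```html
<script src="https://code.jquery.com/jquery-3.3.1.min.js"
        integrity="sha384-<base64-digest>"
        crossorigin="anonymous"></script>
```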

Obviously I’d like to get some statistics on how often my site gets visited and Google Analytics seems like the simple choice.
I started my efforts by calculating a hash value for the resource using the following command:
> curl -s |\
openssl dgst -sha384 -binary |\
openssl base64 -A

And I added the result as an integrity attribute to the script tag. Together with the `crossorigin` attribute set to `anonymous`, this should protect the integrity of my website’s (sub)resource.

Checking my website, however, showed me something completely different. In the console a big error drew my attention:
Subresource Integrity: The resource '' has an integrity attribute, but the resource requires the request to be CORS enabled to check the integrity, and it is not. The resource has been blocked because the integrity cannot be enforced.

Somehow Google has forgotten to set the `Access-Control-Allow-Origin` header for both the new as well as the old script URL.
I’ve asked around on StackOverflow and on the Analytics help forum. I even filed a feedback form on the Analytics website, but so far no solution has been found for combining Google’s gtag.js with Subresource Integrity.

Possible fixes:

  • Hosting the file yourself; although possible, Google explicitly discourages this.
  • Getting Google to change the CORS header to allow cross-origin resource sharing.
  • Getting Google to host the file on a proper CDN.

What is Lorem Ipsum?

During development and design you often need some random text to show the effects of the design. This post is no exception, except that instead of just putting in Lorem Ipsum lines, I wanted to reference the origin. So here it goes. Shamelessly copied from


The largest blockchain hackathon in the world

From April 5 to April 8 I participated in a hackathon to change the world. 63 teams gathered in Groningen to build and show how blockchain technology can and will change the world as we know it.