MQTT in Clojure with the core.async library

Last week, David Nolen wrote a wonderful article on Clojure’s new core.async library and how it eliminates ‘callback hell’ in the browser. In a recent podcast, Rich Hickey stresses that callback hell isn’t just a problem for JavaScript developers – Java developers, who are increasingly using async APIs, have their share of pain. In this article I’ll try to explain how core.async can help them too.

To provide some background, I’m working on a client project which is sourcing MQTT (MQ Telemetry Transport) events from sensors and processing them with Storm. If you haven’t heard of MQTT, it’s the protocol driving the ‘Internet of Things’. For my test harness I’m publishing test messages using a Java-based MQTT API client-library, Eclipse Paho.

When publishing messages to a broker using this async API, the broker responds by acknowledging delivery. Clients can choose to resend messages that may have been lost in transit. This is important, because MQTT is designed to work in harsh conditions where the network may be unreliable.

How does this work?

A device using this Java API connects to a broker by creating an instance of MqttAsyncClient with the url of the broker and a client id.

MqttAsyncClient client = new MqttAsyncClient(
           "tcp://localhost:1883", // broker url (illustrative)
           "Malcolm's nose");      // client id

To publish a message, the device calls the publish method with a topic name and a message (some bytes). The method returns a token so that the client can track the delivery of the message.

IMqttDeliveryToken token = client.publish(topic, message);

The tracking is achieved by registering a callback on the client.

client.setCallback(callback);

The callback interface looks like this :-

interface MqttCallback {
    void deliveryComplete(IMqttDeliveryToken token);
    // ... other callback methods (connectionLost, messageArrived) elided
}

When a message has been delivered successfully, the token is used to signal this through the MqttCallback interface. This is pretty simple and not unlike many other async APIs.

Let’s imagine that messages usually get through quickly, but if we don’t receive an acknowledgement within 10 seconds we should send the message again. How do we write this?

Normally this would involve keeping track of each message in a data table, along with the time it was sent. Periodically we would check the table for any entries that were sent more than 10 seconds ago but where no acknowledgement had been received. This could get tricky, and we’d have to take care not to cause any race conditions between the threads publishing the messages and the thread doing the checks. If we ran the checks too infrequently, we may delay the resend of an important message. If we ran too often, we risk contention and impacting the overall performance of the system.

However, with Clojure’s core.async library, life has got a whole lot easier.

To show this, let’s switch to writing in Clojure. We publish messages via the Java API in the usual way

(.publish client "sneezing" (.getBytes "achoo! [80 m/s]"))

This returns our delivery token.

For each token we can create a new core.async channel (channels, unlike threads, are super light-weight). We’ll need to store each token with an associated channel. Let’s use an atom for this.

(def tok->chan (atom {}))

Here’s how we associate the token with a channel in the map.

(let [tok (mqtt-publish client "sneezing" "achoo! [80 m/s]")
      c (chan)]
  (swap! tok->chan assoc tok c))

Now, the point of creating a channel is so our callback interface can handle the delivery acknowledgements by simply placing something on the channel.

(reify MqttCallback
  (deliveryComplete [_ tok]
    (when-let [c (get @tok->chan tok)]
      (go (>! c :arrived))
      (swap! tok->chan dissoc tok))))

When an acknowledgement is received, the token is used as a key to look up the channel. The keyword :arrived is then ‘put’ on the channel, and the channel is removed from the map, as it is no longer needed. Note the put operator is >! (which must occur within a go block, don’t worry about that yet).

Now, here’s the neat trick. We can code the message retry logic in the same block of code that publishes the original message.

Let’s wrap the original message-publishing code in a loop, so each message is retried after 10 seconds.

(go (loop []
      (let [tok (mqtt-publish client
                              "sneezing" "achoo! [80 m/s]")
            c (chan)]
        (swap! tok->chan assoc tok c)
        (when-not (first (alts! [c (timeout 10000)]))
          (recur)))))

The last expression uses alts! to select over 2 channels. The first channel, c is the one we’re hoping to take the :arrived keyword from. Competing in this race is a second channel, created by the timeout function, which will close (returning nil) in 10 seconds. The winner of this race tells us whether the message arrived ok (if we ‘take’ :arrived from c) or nil (if the 10 second countdown expires). If we get a nil, then we go to the top of the loop and try again. Simple. Magic. Amazing.
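For completeness, here’s the whole round trip assembled as a minimal sketch. The Paho import and the 4-argument publish call (qos, retained) are assumptions on my part; mqtt-publish is just a thin wrapper that returns the delivery token, and the no-op callback methods are there only to satisfy the interface.

```clojure
(ns mqtt.retry
  (:require [clojure.core.async :refer [chan go >! alts! timeout]])
  (:import [org.eclipse.paho.client.mqttv3 MqttCallback]))

(def tok->chan (atom {}))          ; delivery token -> ack channel

(defn mqtt-publish
  "Assumed thin wrapper over the Java publish; returns the delivery token."
  [client topic msg]
  (.publish client topic (.getBytes msg) 1 false)) ; qos 1, not retained

(defn ack-callback []
  (reify MqttCallback
    (connectionLost [_ cause])      ; no-op for this sketch
    (messageArrived [_ topic msg])  ; no-op for this sketch
    (deliveryComplete [_ tok]
      (when-let [c (get @tok->chan tok)]
        (go (>! c :arrived))
        (swap! tok->chan dissoc tok)))))

(defn publish-with-retry
  "Publish msg, resending every 10 seconds until an ack arrives."
  [client topic msg]
  (go (loop []
        (let [tok (mqtt-publish client topic msg)
              c (chan)]
          (swap! tok->chan assoc tok c)
          (when-not (first (alts! [c (timeout 10000)]))
            (recur))))))
```

Note there is no polling thread and no shared timing table: each in-flight message carries its own retry logic in a cheap go block.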

Emacs Demystified

There’s a discussion on the Clojure mailing list about which development environment is most suitable for beginners who want to get started with Clojure.

If you’re already an expert IDE user for other work, such as Java development, it makes sense to try Clojure out in the environment you’re used to. However, what should a beginner do if they aren’t already familiar with an IDE? If this is you, my strong advice is that you learn Emacs.

Emacs optimises the tail

Most tools try to optimise the first few hours of use by giving access to all the features via menu-bars and icons. These certainly make learning the tool easier. In fact, the Windows operating system seems to me to be designed to optimise a person’s first experience with a computer.

However, once you are familiar with a tool, you will probably find that keyboard shortcuts make you more productive, just as many computer users discover that they get more done with a command line. Duplicating access to a feature via a menu and/or an icon only adds visual clutter, which detracts from the overall experience. And when you’re trying to focus on creating content, visual noise can be a distraction.

Emacs is different. In Emacs, the keyboard shortcut is the primary way of accessing a feature. Many features aren’t even available via the menu and toolbar. For the sake of reducing visual clutter to a minimum I recommend you toggle off even the basic menu and toolbar that Emacs displays when you first start it (see below for how).

So rather than optimise the first few hours of use, Emacs makes the subsequent years as productive as possible.

Once you’re familiar with a tool it’s easy to forget how you felt when you started out. I remember being a little overwhelmed the first time I tried IntelliJ; I couldn’t work out how to view 2 files side by side. I also remember finding Eclipse overwhelming; I was confused by its views and perspectives.


Learning any complex tool like an IDE or Emacs is not a trivial undertaking. But scrambling up the initial learning curve with Emacs is really not all that hard. Here’s how :-

Get Emacs.

Get a recent copy, version 24 or above. If you’re on a Mac, avoid Aquamacs for the time being.

Start Emacs.

Read the text in front of you. Start the tutorial. Run through the tutorial, have a break, come back and do it again. This is really important. Don’t try to do anything complex (like configure a Clojure environment) until you’re comfortable with the basics. Don’t try to learn everything in one go. Emacs has a long learning curve but it doesn’t have to be steep. Nobody knows all of Emacs and you don’t have to know much of it to start with. Gradually you will pick up new tips and tricks from other Emacs users and your own curiosity.

Minimise the visual clutter.

The tutorial will have taught you what M-x means.

 M-x customize-variable<RET>
 Customize variable: menu-bar-mode
 (click Toggle, then State - 'Save for Future Sessions')

 M-x customize-variable<RET>
 Customize variable: tool-bar-mode
 (click Toggle, then State - 'Save for Future Sessions')

 M-x customize-variable<RET>
 Customize variable: scroll-bar-mode
 (click Toggle, then State - 'Save for Future Sessions')

Kill the caps

Rebind your useless CAPS-LOCK key to a Control key. On a Mac, you can do that by going to System Preferences/Keyboard/Modifier Keys. GNOME and KDE users can also do this via settings. For raw X Windows, I find it’s best to add the following to your X conf :-

Section "InputClass"
    Identifier  "Keyboard Defaults"
    MatchIsKeyboard "yes"
    Option  "XkbOptions" "ctrl:nocaps"
EndSection

Customize your Clojure environment.

There are plenty of tutorials on this. Find a recent one and follow it. Most advice is to use nREPL. Personally I find nREPL still lacking in certain useful features, so I run the original swank instead. In my Leiningen user profile I add the swank plugin.


{:user {:plugins [[lein-swank "1.4.4"]]}}

Then I run up an Emacs shell

M-x eshell

Then I run the swank listener via lein to start my JVM with my project’s classpath

$ cd ~/src/myproject
$ lein swank

Then I connect to localhost and port 4005 (ie. the defaults).

M-x slime-connect

Then I locate a Clojure file and compile it with C-c C-k.

Install and learn paredit.

Many people recommend using an Emacs ‘starter kit’; these provide various bells and whistles intended for certain user groups (eg. Clojure developers). You’ll find many examples on github. My advice is to avoid these until you are more comfortable with the default Emacs environment. Personally I don’t use a starter kit, and I find that Emacs 24 has ironed out plenty of the wrinkles that these starter kits were created to address.

IDE strengths are not tuned to Clojure development

For Java developers there are a number of features that tip the balance
firmly in favour of using an IDE :-

  1. Java, being both statically typed and verbose, lends itself to
    code-completion facilities in the editor.

  2. Improving the design of object oriented code demands heavy
    refactoring, where a good IDE can greatly help.

  3. Java’s requirements on file organisation (one file per public class)
    mean that large projects can run into thousands of individual files,
    which are more easily accessed using the tree controls characteristic
    of IDEs.
All these benefits are significantly less useful for Clojure
development, so the balance tips back in favour of Emacs, which excels
at text editing. As an expert user of multiple IDEs and Emacs, I can
attest that text editing in IDEs feels awkward, even cumbersome,
by Emacs standards.

A sound investment

Committing the time to learn Emacs is a very sound investment. I’ve been using Emacs myself since 1991. The initial learning curve was a few days, which is a tiny fraction of those 20 years. Many tools have come and gone since 1991, including Windows 3.1, Lotus Notes and Visual Age for Java. Emacs predates them all and it’s very much the same tool now (at version 24) as it was then (at version 18). I fully expect to be using Emacs for another 20 years.

That would be a big payback if I only used Emacs for development. But I can use Emacs for lots of other tasks. Right now I’m writing this blog in Emacs using a mode specially designed for creating Markdown-formatted articles (of which this is one). The same keybindings apply and much of the feature set is just as relevant as when I’m developing :-

  • slick navigation
  • text manipulation
  • bookmarks
  • git integration
  • kill-rings
  • macros
  • narrowing to region
  • grep-find

I don’t have to work hard to remember all the keybindings associated with these features; they are burnt into my muscle memory. If you haven’t yet learned to properly touch-type, I also recommend you add that to your ‘to-do’ list. Emacs is kind to touch-typists. Like vi, Emacs allows you to navigate without moving your hands over to the arrow keys and back.

The sixty-million-dollar mistake

Consider the following Java code :-

public class MoneyAmount {

    private double value;
    private String currency;

    public double getValue() {
        return value;
    }

    public void setValue(double value) {
        this.value = value;
    }

    public String getCurrency() {
        return currency;
    }

    public void setCurrency(String currency) {
        String validCcy = CurrencyManager.lookupCurrency(currency);
        if (validCcy != null) {
            this.currency = validCcy;
        }
    }

    public static void main(String[] args) {
        MoneyAmount amt = new MoneyAmount();
        for (Row row : spreadsheet) {
            // loop body reconstructed for illustration:
            // re-use the single MoneyAmount instance for each row
            amt.setValue(row.getValue());
            amt.setCurrency(row.getCurrency());
            book(amt);
        }
    }
}

So far so good. All the unit tests pass, so let’s deploy it into production.

Everything goes well for many years. Then a small error occurs in the
input data :-

40000 USD
60000000 JPy

What happens?

Ouch! We’re going to book 60,000,000 USD instead of 60,000,000
Yen. That’s a significant mistake.

You may think that I’ve contrived this example (I couldn’t possibly
confirm or deny that on this blog) but it does show how subtle a
horrifying coding mistake can be.

Why did we make that $60M mistake?

  1. The developer shouldn’t have re-used the instance of MoneyAmount.
  2. The developer should have checked for the null case returning from CurrencyManager.lookupCurrency.
  3. The developer should have avoided using setter methods.

Let’s direct our questioning at a deeper level.

Why did the developer make these mistakes?

  1. The developer didn’t receive adequate education.
  2. Due to tight deadlines, the developer was not given enough time to think about the edge cases.
  3. The developer’s code wasn’t properly reviewed by a code-reviewer.
  4. There weren’t enough tests in the test suite.

These are all potential causes, but it’s not easy to see which one is
relevant and take corrective action.

  1. What constitutes ‘adequate’ education for developers?
  2. How do we manage projects so that estimates are accurate and there are no slippages?
  3. How do we enforce proper code-review and how much review is necessary?
  4. What constitutes adequate testing?

Many people are involved in discovering better answers to these
questions, working within, as well as supplementing, their existing
methodologies.
I think these approaches are mostly ineffectual, at some
level. Otherwise I wouldn’t have contemplated bringing a new
programming language into the bank. In the case of Clojure, it has meant
building a whole new toolset around it (including the IDE: most of
the developers on my team have moved to Emacs).

Mere tools?

Despite the disruption, I’ve been very encouraged by the effect
that a modern tool like Clojure has on software quality. While I
consider a programming language a tool, that may unfairly diminish
its importance. As an industry we are continually discovering better
ways of creating software, and new programming languages are still the
main vessels in which we distill that experience. Those who adopt a
modern programming language inherit the accumulated wealth of a
generation’s experience.
I began this article by presenting a real coding mistake. My experience
with Clojure means that nowadays I approach the issue differently. Why
did the developer make these mistakes?

  1. The tool encouraged it.

Ultimately, delivering reliable software to our users is more important
than our development tools, however much we’ve grown used to them. If
our tools are letting us make these expensive mistakes then we need to
consider their replacement.

Triple-witching day

Yesterday evening I was on a conference call with some colleagues. We are all trying to complete the UAT testing for a major project deployment next month.

Today is a triple-witching day (TL;DR: triple-witching day happens 4 times a year, when many bank staff go home late from work). Unfortunately the test-managers want to use today to test the operations on a triple-witching day – trouble is, I didn’t realise until now the implications for the system we are contributing to. We’ll need to write and apply a fix to support triple-witching day.

So please join me for this special-edition ‘live’ blogging (on the train down to London) where I think about what we can do.

We have a scheduler in the system that schedules a particular job for 1730 in New York. We’ll have to disable that job today, and write up a procedure guide for what our operations staff should do every triple-witching day. But this project is designed to deliver cost-savings to the bank through automation, so it seems counter to the aims of the project to expand the number of manual processes.

Fortunately our scheduler (which we call ‘chime’) is written in Clojure. It is built on clj-time which in turn brings in the excellent Joda-Time library.

Our scheduler is designed to exploit Clojure’s lazy sequences. Our functions return infinite sequences of recurring events. We can then use sequence operators such as map, filter and take, and build schedules by attaching these events to functions that get called at the right time.

We already have a function that returns all the days on which banking business happens in a given city (every day excluding weekends and holidays). That function calls out to a C++ library which embeds official market information about holidays, so we won’t cover it here.

For New York, we call it like this :-

(business-days "NY")

This returns a sequence of all the business days from today (which is implied).

We can override ‘today’ to be something else, with our as-of macro, which binds a dynamic var which usually defaults to the current time.

(def ^:dynamic ^DateTime *now* nil)

(defmacro as-of [^Object from & body]
  `(with-bindings {(var *now*) (DateTime. ~from)}
     ~@body))

(as-of "2010-10-20"
  (business-days "NY"))

What I want to do is build a function that returns a sequence of business days that are triple-witching-days, so I can build a special schedule for days like today.

Triple witching days are the third Friday of the months of March, June, September and December (with the exception of Good Friday, which is a holiday, so the triple-witching day occurs on the prior Thursday).

Joda-Time doesn’t have the notion of ‘third Friday in the month’. Let’s first build a predicate in clj-time for what that means.

In London, our Clojure Dojos happen on the last Tuesday of the month.

(defn dojo-day? [day]
  (and (= DateTimeConstants/TUESDAY (.getDayOfWeek day))
       (not= (.getMonthOfYear day) (.getMonthOfYear (.plusWeeks day 1)))))

The first constraint checks that it’s a Tuesday. The second term tests whether the following Tuesday is in the same month. If not, then it must be the last Tuesday in this month, so it’s a Dojo-day!

We can easily generate the sequence of Dojo days from today. We start by generating a list of all the days from today :-

(defn ^LocalDate today []
  (LocalDate. *now*))

(defn days-from-today []
  (iterate #(.plusDays ^LocalDate % 1) (today)))

Now we just add a filter.

(take 6 (filter dojo-day? (days-from-today)))

which returns :-

(#<LocalDate 2012-09-25> 
 #<LocalDate 2012-10-30> 
 #<LocalDate 2012-11-27> 
 #<LocalDate 2012-12-25> 
 #<LocalDate 2013-01-29> 
 #<LocalDate 2013-02-26>)

I’m not sure we’ll have a Clojure Dojo on Xmas day this year but you never know with those guys…

So back to work. With the predicate that tests for the Dojo day we can write a variation that tests for the third Friday in a month :-

(defn third-friday? [day]
  (and (= DateTimeConstants/FRIDAY (.getDayOfWeek day))
       (= (.getMonthOfYear day) (.getMonthOfYear (.minusWeeks day 2)))
       (not= (.getMonthOfYear day) (.getMonthOfYear (.minusWeeks day 3)))))

This is based on a similar principle. If it’s the third Friday of a month then the Friday 2 weeks before must be in the same month, but the Friday three weeks before must be in the previous month.
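As a quick sanity check, the same filtering pattern we used for Dojo days applies here too (using the as-of macro from earlier so the answer is reproducible):

```clojure
(as-of "2012-09-01"
  (take 3 (filter third-friday? (days-from-today))))
;; => (#<LocalDate 2012-09-21> #<LocalDate 2012-10-19> #<LocalDate 2012-11-16>)
```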

We can now build in the months in which triple-witching days occur.

(defn triple-witching-day? [day]
  (and (third-friday? day)
       (#{3 6 9 12} (.getMonthOfYear day))))

We’re not quite done yet, because we haven’t considered what happens on Good Friday. We mustn’t fall into the trap of taking the third Thursday of the month – it still has to be the day before the third Friday.

(defn triple-witching-day? [day]
  (or (and (third-friday? day)
           (#{3 6 9 12} (.getMonthOfYear day))
           (business-day? "NY" day))
      (let [fri (.plusDays day 1)]
        (and (not (business-day? "NY" fri))
             (third-friday? fri)
             (#{3 6 9 12} (.getMonthOfYear fri))))))

Now our two schedules can be built as follows :-

(filter triple-witching-day? (business-days "NY"))
(filter (comp not triple-witching-day?) (business-days "NY"))

Note that I prefer the (comp not f) style to writing #(not (f %)).

Now I’m ready to patch our UAT system when I get in. Still a few minutes left on the train for a few observations…

  1. In an ideal world you’d never have these awkward situations when you have to think how to patch a system with new functionality as soon as possible. But banks are not ideal worlds and these things do crop up. The Agile manifesto values ‘responding to change, over working to a plan’ for days like today.

  2. I’ve found that hacking many fixes into a system usually degrades the architecture quickly. In Java systems these fixes can create dependency cycles. A Clojure system’s architecture is protected from cycles between namespaces, because it won’t even start if you have them – and from cycles inside namespaces too, as long as you avoid using declare. But that’s a topic for a whole new blog article!

Well, my train is now slowing down so I have to post this.

(If you spot any flaws with my algorithms, please tell me soon!!!!)

Wish us luck!

The UK Government should take a look at Clojure

Today saw the launch of a UK Government White Paper on Open Data.

In his foreword, Francis Maude writes :-

“Data is the 21st century’s new raw material…”

At EuroClojure this year I spoke about Clojure’s numerous strengths as an
enabling technology for releasing trapped data onto the web. I think
this is one of Clojure’s best kept secrets.

Then there are examples of Clojure’s strengths in processing data in
order to find new insights.

And at last Tuesday’s London Clojure Dojo I witnessed a powerful example
of how Clojure can consume and leverage data on the web.

Perhaps to mark the centenary of Alan Turing’s birth, 4 fellow
Clojurians chose to create a chatbot that could engage humans in
conversation. In the space of an hour-and-a-half they had created a demo
from nothing. The code, written using LightTable, showed how they had
‘cheated’ by using the user’s input to drive a search against Twitter,
processing the JSON results and printing back a response. (I, for one,
was totally taken in at first, however.)

The impressive thing for me was not the ‘fake AI’ but the speed at which four
developers could craft something that sourced data from the web and
manipulated it in such an effortless, almost nonchalant, manner.

I commented after the demonstration that I strongly felt this is how
programming will be in 10 years’ time. Future programmers just won’t
understand why things were seemingly so difficult for us today – the
industry is awash with chatter about efficient serialization formats,
parsers, DSLs, middleware and integration technologies – and yet these guys
made it all look so easy. Once you’ve mastered Clojure, of course, it is.

Notes on ‘rethinking web services’

I recently read a good article by Matt Aimonetti on creating web services. He makes some particularly good points.

Web APIs are increasingly exposing raw data

“But web APIs are also more and more used to expose raw data to other software. The challenge though, is that we haven’t changed the way we write web applications to adapt to this new way of providing data.

Lately more and more applications provide their raw data to the outside world via web APIs. ”

Matt observes that developers are increasingly focussed on the data. This could be because Open Data is becoming more established, or perhaps because web APIs designed this way are simply better, and a selection mechanism is driving the surviving population of APIs on the web.

I agree, too, that our current tools and methods could be out of date. Perhaps even our language needs updating: how accurate is it to call these things ‘web APIs’ or ‘web services’?


Matt makes another excellent point about documentation. Users of this exposed data may not have access to or understanding of your source code. They may not even be developers. So explicit documentation is important, and documentation is invariably more accurate if generated.

“For that I use a Domain Specific Language (DSL) which is a fancy way to say that I have some specific code allowing me to explicitly define my web services. These services exist as objects that can then be used to process requests but also to validate inputs, and even more importantly to generate documentation.”

Here’s an example of his DSL, which I’ve combined with the code for this implementation.

describe_service "hello_world" do |service|
  service.formats   :json
  service.http_verb :get
  service.disable_auth # on by default

  service.param.string  :name, :default => 'World', :doc => "The name of the person to greet."

  service.response do |response|
    response.object do |obj|
      obj.string :message, :doc => "The greeting message sent back. Defaults to 'World'"
      obj.datetime :at, :doc => "The timestamp of when the message was dispatched"
    end
  end

  service.documentation do |doc|
    doc.overall "This service provides a simple hello world implementation example."
    doc.example "<code>curl -I 'http://localhost:9292/hello_world?name=Matt'</code>"
  end

  # the returned value is used as the response body, all the Sinatra helpers
  # are available.
  service.implementation do
    {:message => "Hello #{params[:name]}", :at =>}.to_json
  end
end

What struck me about this is how similar it is to Philipp Meier’s compojure-rest for Clojure. For example, I’ve translated his example into compojure-rest’s defresource syntax :-


(defresource
  ^{:doc "This service provides a simple hello world implementation example."
    :example "<code>curl -I 'http://localhost:9292/hello_world?name=Malcolm'</code>"
    :params ["name" "The name of the person to greet."]}
  hello-world

  :available-media-types ["application/json"]
  :method-allowed? (request-method-in :get)
  :authorized? true

  :handle-ok (fn [context]
               {:message (format "Hello %s"
                           (or (get-in context [:request :params "name"])
                               "World")) ; default
                :datetime (java.util.Date.)}))

In compojure-rest, we should be able to add test metadata to the defresource which, along with the map declaring the service itself, could be used to generate documentation, examples and tests.
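As a hedged sketch of that idea, the var’s metadata could be walked to emit a plain-text documentation entry. Here generate-docs, and the assumption that defresource propagates the metadata to the var (as def does), are mine rather than anything compojure-rest provides.

```clojure
(defn generate-docs
  "Render a crude plain-text documentation entry from a resource var's metadata."
  [v]
  (let [{:keys [doc example params]} (meta v)]
    (str doc "\n\n"
         "Example: " example "\n\n"
         "Parameters:\n"
         (apply str (for [[name desc] (partition 2 params)]
                      (str "  " name " - " desc "\n"))))))

;; (generate-docs #'hello-world)
```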

Coding a transactional database using Clojure’s ‘reader literals’.

I want to show how you can create a simple persistent database by utilising reader literals in Clojure 1.4. You can copy these lines into a Clojure program if you like – the only dependency you require is Clojure 1.4 (or above).

Let’s start by defining a transaction as an update to some state using a protocol :-

(defprotocol Transaction (update [_ state]))

Here are some example transactions we want to perform. For this article let’s imagine we’re creating a database of user accounts. We’ll just create two transaction types – creating users and deleting them. We’ll record the name and email address, plus the date they joined or left.

(defrecord CreateUser [name email start-date]
  Transaction
  (update [this db]
    (update-in db [:users] conj
      {:name name :email email :start-date start-date})))

(defrecord DeleteUser [email remove-date]
  Transaction
  (update [this db]
    (update-in db [:users]
      (fn [coll] (remove (fn [r] (= (:email r) email)) coll)))))
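Because update is a pure function from one database value to the next, we can exercise a transaction at the REPL without any I/O at all (the email address here is just illustrative):

```clojure
(update (CreateUser. "Bob" "" #inst "2012-04-19T16:00:35.901-00:00")
        {:users []})
;; => {:users [{:name "Bob", :email "", :start-date #inst "2012-04-19T16:00:35.901-00:00"}]}
```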

Data is important. We’ll want to keep records of these transactions.

(defprotocol TransactionLog (record [_ tx]))

Here’s an implementation that uses an agent to write the records to a print-stream.

(defrecord DefaultTransactionLog [ag]
  TransactionLog
  (record [this tx]
    (send-off ag
      (fn [out]
        (binding [*out* out *flush-on-newline* true]
          (io! (prn tx)) out)))))

The agent will hold the print-stream. Agents are good for I/O because messages sent to them from within a transaction only get delivered if the transaction completes its update successfully; if a retry is needed the message doesn’t get delivered. We let the Clojure prn function figure out how to serialize the transaction. It’s optional, but I’ve wrapped the actual print statement in an io! wrapper as a safety check.

Having written the records we’ll need some way of reading them back. We’ll just use the standard Clojure reader for this. This function returns a list of all the transactions in a file.

(defn read-transactions [f]
  (if-not (.exists f) []
    (let [rdr ( ( f))]
      (take-while identity
        (repeatedly #(read rdr false nil))))))

Now we move on to the database state. Let’s decorate the transaction log with something that will hold this state. As transactions are added to it they will update the in-memory state and be recorded on disk. This can support the TransactionLog protocol as well. The state will be a Clojure ref. The delegate will be a backing transaction log.

(defrecord Database [state delegate]
  TransactionLog
  (record [this tx]
    (dosync
      (alter state (partial update tx))
      (record delegate tx))))

Now we’re ready to create our database with the Clojure ref and backing transaction log. We’ll initialize the ref with the result of updating an empty map with the series of transactions read from our file.

(def db
  (Database.
    (ref (reduce (fn [db tx] (update tx db)) {}
                 (read-transactions ( "my.db.clj"))))
    (DefaultTransactionLog.
      (agent ( ( "my.db.clj")
                        :append true)))))

Remember those transactions we coded earlier? Let’s add some convenience functions that will create and apply these transactions :-

(defn create-user [name email]
    (record db (CreateUser. name email (java.util.Date.))))

(defn delete-user [email]
    (record db (DeleteUser. email (java.util.Date.))))

Notice how we’re creating java.util.Date instances.

Now, let’s test it.

(create-user "Bob" "")
(create-user "Alice" "")
(create-user "Carol" "")
(delete-user "")

What’s the state of the database?

(deref (:state db))    

You should see something like this :-

  {:users
   ({:name "Carol",
     :email "",
     :start-date #inst "2012-04-19T16:00:35.903-00:00"}
    {:name "Bob",
     :email "",
     :start-date #inst "2012-04-19T16:00:35.901-00:00"})}

What’s in our database file? Below is the raw content. Unlike some database files it’s quite easy to read and even possible to edit by hand if the need arises.

#blog.CreateUser{:name "Bob", :email "",
 :start-date #inst "2012-04-19T16:00:35.901-00:00"}
#blog.CreateUser{:name "Alice", :email "",
 :start-date #inst "2012-04-19T16:00:35.902-00:00"}
#blog.CreateUser{:name "Carol", :email "",
 :start-date #inst "2012-04-19T16:00:35.903-00:00"}
#blog.DeleteUser{:email "",
 :remove-date #inst "2012-04-19T16:00:36.101-00:00"}

Note, unless you are using Clojure 1.4 or above you won’t see the #inst tags!

Notice how Clojure is writing out and reading back the java.util.Date instances. You can also register your own Java types through the *data-readers* dynamic binding (more details in the Clojure 1.4 release notes). This makes it really easy to create some fairly complex atomic transactions.
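For example, a custom tag could be wired up along these lines. The blog/money tag and the Money record are purely illustrative, not something this database needs:

```clojure
(defrecord Money [value currency])

;; Print Money as a tagged literal...
(defmethod print-method Money [m ^ w]
  (.write w (str "#blog/money " (pr-str (into {} m)))))

;; ...and teach the reader how to read it back.
(binding [*data-readers* (assoc *data-readers*
                                'blog/money map->Money)]
  (read-string "#blog/money {:value 40000, :currency \"USD\"}"))
;; returns a Money record: (map->Money {:value 40000, :currency "USD"})
```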

Now try recompiling – you should find the database is restored. You can also try removing the user. The transaction log will store both the create-user event and the delete-user event, both with timings.

That’s around 30 lines of code to build a transactional database, and a few more lines to code some custom transactions.

You don’t have to limit yourself to a map as your state. At Deutsche Bank we’ve used a similar design but with RDF triple-stores that are queryable in a datalog syntax. A key benefit with using Clojure’s persistent data-structures for simple in-memory datastores is that you don’t have to worry about locks or concurrent updates.

Reflections on a real-world Clojure application (take 2)

Last night I gave a talk at the London Clojure Users Group (LCUG) about a ‘real-world’ (16K lines-of-code) application we built in less than a year with Clojure at Deutsche Bank. I really enjoyed the event; thanks to SkillsMatter, who were fantastic hosts.

There were a lot of questions during the Q&A at the end which I did my best to answer at the time. Now I’ve had some more thinking time I’d like to add a few extra comments.

If you couldn’t attend the talk you can catch it here.

Below is the original presentation in blog form (thank you Markdown!). My extra comments can be found in the epilogue – feel free to ask further questions in the comments area.

Reflections on a real-world Clojure application


  • Java background, especially early J2EE circa 1999-2002
  • Test Driven Development – ran 20 courses
  • Mastering TDD helped me to write Java using values rather than objects
  • Began to write Java in a more functional way – but much more verbose!!
  • Started using Clojure at work for user web interfaces in November 2009
  • Began to attend Clojure Dojos in London
  • February 2011 – Clojure used extensively on a new application, now 16K LOCs!

The ‘main’ function

Developer bootstrap

For developers

$ mvn dependency:copy-dependencies
$ ./run

which does this :-

echo "Starting Fandango run script..."

export PATH=$PATH:target/bin

# Set debug to nil to disable JVM debugging.

java -cp ${classpath} clojure.main ${main}

Then slime in with Emacs!

(Let’s look at configuration in more detail)


Requirements of a configuration system

  • Flexibility – we should be able to add configuration where we need it
  • Distributed ownership – we shouldn’t have to know the live passwords
  • Source agnostic – we’d like to be able to use local files and centralised storage.


  • Java properties files
  • XML – tree based; schemas enforce structure rather than values
  • Databases – records for configuration are too diverse
  • RDF – graph based, queryable

Clojure as configuration?

“Protocols and file formats that are Turing-complete input languages are the worst offenders, because for them, recognizing valid or expected inputs is UNDECIDABLE: no amount of programming or testing will get it right… A Turing-complete input language destroys security for generations of users. Avoid Turing-complete input languages!” — Cory Doctorow


Be careful if you choose Clojure as your configuration format!!

‘Open Data’

All our data (application & environment configuration, report definitions, user details & entitlements, etc.) are stored as RDF statements

  • The cat sat on the mat
    • Subject: the cat
    • Predicate (also known as property): sat on
    • Object: the mat
  • Relations are at an individual level rather than at a set (ie. table) level.

  • More intro to RDF here:

Our configuration system

  • RDF files (mostly Turtle format)
  • SPARQL queries
  • Uses a dynamic var: (with-config ...)
  • Delays to avoid unnecessary queries


create-associations :-

(defn create-associations [model]
     [:proc cmdb/host :host]
     [:proc cmdb/install-dir (as-uri (format "file://%s" (or (System/getenv "FANDANGO_INSTALL_DIR")
                                                             (System/getProperty "user.dir"))))]
     [:host a cmdb/Host]
     [:host cmdb/hostname (get-hostname)]
     [:proc cmdb/userid (System/getProperty "")]
     [:proc ["" "dataDirectory"] :data-dir]
     [:proc ["" "logDirectory"] :log-dir]
     [:proc ["" "workDirectory"] :work-dir]
     [:proc ["" "pidDirectory"] :pid-dir]
     [:optional [:proc cmdb/source-dir :source-dir]]



All users are given FOAF ‘profiles’, with added VCARD and other statements.

Given these prefixes

@prefix foaf: <> .
@prefix rdfs: <> .

This statement (in the configuration) gives all users a ‘Guest’ role.

foaf:Person rdfs:subClassOf <Guest> .


Statements are then added to create users, request roles, and approve or reject roles.

Creating a user

<events/5afcf604-16c0-4cab-a6d1-656ed3f3420c> <time> "2011-12-25T12:00Z"^^<> .
<events/5afcf604-16c0-4cab-a6d1-656ed3f3420c> rdfs:type <CreateUser> .
<events/5afcf604-16c0-4cab-a6d1-656ed3f3420c> <eventfor> <users/> .

Request a role

<events/b5bed531-a324-4aec-9ace-2785c65a19b7> <time> "2011-12-25T14:00Z"^^<> .
<events/b5bed531-a324-4aec-9ace-2785c65a19b7> rdfs:type <RequestRole> .
<events/b5bed531-a324-4aec-9ace-2785c65a19b7> <role> <Administrator> .
<events/b5bed531-a324-4aec-9ace-2785c65a19b7> <eventfor> <users/> .

Language integrated query

Data can be queried directly from Clojure

(defn get-approved-roles-for-user [user]
   [(get-combined-model) (config/get-config-model)]
   [:approval a events-ns/RoleApproved]
   [:approval events-ns/time :approval-time]
   [:approval roles/approver :approver]
   [:approver foaf/name :approver-name]
   [:optional [:approver foaf/homepage :approver-homepage]]
   [:approval roles/cause :request]
   [:request roles/requester user]
   [:request events-ns/time :request-time]
   [:request roles/role :role]
   [:role rdfs/label :role-name]))


Releasing to production

$ git clone
$ git verify-tag 4.5.0
$ git checkout tags/4.5.0
$ make release

Derive the version from git!

GNU Make incantation …

describe := $(subst -, ,$(shell git describe --tags --long HEAD))
version := $(word 1,$(subst -, ,$(describe)))
release := $(shell expr 1 + $(word 2,$(describe)))
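To see what that incantation computes, suppose (hypothetically) that `git describe --tags --long HEAD` printed `4.5.0-3-gabc1234`. The same split, written in Clojure:

```clojure
(require 'clojure.string)

;; what the Make variables would hold for a hypothetical describe output
(let [describe "4.5.0-3-gabc1234"
      [version commits] (clojure.string/split describe #"-")]
  {:version version                              ; first dash-separated word
   :release (inc (Integer/parseInt commits))})   ; commits-since-tag plus one
;; => {:version "4.5.0", :release 4}
```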

And generate the pom.xml – ie. in Make :-

pom.xml:    pom.template.xml
            cat $< | sed -e "s/@VERSION@/$(version)/g" >$@

mvn dependency:copy-dependencies
cp -r src/ dest/

We use RPM but the principle of copying the source and dependency jars over is the same.


Installation is easy

$ rpm --dbpath /opt/privatedb -Uvh fandango-4.5.0-1-x86_64.rpm

Production bootstrap

$ fandango start

A lot more complex than the developer bootstrap.

  • Init script (from Java Service Wrapper – enhanced with roqet to read environment variables from configuration)
  • Init script generates the wrapper.conf, then calls Java Service Wrapper native executable
  • Native binary spawns JVM with 2 args clojure.main boot.clj
  • boot.clj sets up a classloader which pulls in the dependency jars
  • boot.clj hands off to main.clj, rest is as the developer bootstrap.

But source code is still copied onto the system as is.


Getting started

Logging is important because it’s what everyone expects to find.

These will get you started :-


However, as your application grows you will eventually need a more sophisticated logging system. We use Log4J and configure it with clj-logging-config.

You’ll need the following packages to do this :-

(use '
(use 'clj-logging-config.log4j)


  [:root {:level :debug 
          :out (io/file workdir "job.log")}]


For using the NDC and MDC of Log4J.

  [:root {:pattern "%d [%p] (for Customer %X{customer}) %m%n"}]

   (with-logging-context {"customer" "John Smith"}
     ...)
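Putting the pattern and the context together, a hedged sketch (assuming clj-logging-config and clojure.tools.logging are on the classpath):

```clojure
(use 'clj-logging-config.log4j)
(require '[clojure.tools.logging :as log])

;; install the pattern on the root logger, then log inside an MDC context
(set-loggers! :root {:pattern "%d [%p] (for Customer %X{customer}) %m%n"})

(with-logging-context {"customer" "John Smith"}
  (log/info "account updated"))
;; the log line includes "(for Customer John Smith)"
```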


The Good

  • Retain the JVM
  • No class files, yippee!
  • Sliming in! EDD: Eval Driven Development!
  • Separation of value, identity, state: State is a timeline of changing values.
  • Learning time – even our DBA is now comfortable with Clojure.

The Bad

  • People are justifiably afraid of new things
  • Tooling (for those not comfortable with Emacs)
  • Java interop can bite you

The Ugly

  • Stack traces
  • Debugging

Quality versus value

“Value is what you are trying to produce, and quality is only one aspect of it, intermixed with cost, features, and other factors.” — John Carmack

cf. ‘Agile’ absolutes

  • Always write the tests first
  • Tests should always pass
  • Always fix the build before working on new features
  • Integrate continuously
  • Refactor prior to adding new features
  • Consistent code style

Our experience of Git + Clojure is prompting us to question certain assumptions.

More info

Q & A

Over to you…


Many of the questions related to the RDF portion of my presentation. There were a lot of others too, though I can’t remember them all now.

How big is your team and how did it grow?

We started with 2 developers and grew to 4. Forcing Clojure on developers is unwise. I know that was tried somewhere else and most developers only used the Java interop!

Why do you use RDF for configuration rather than XML or JSON or even Clojure itself?

JSON is certainly more conventional as a configuration format (or XML in the Java world). There isn’t a strong reason not to use Clojure itself (I had a slide warning of the dangers of Turing-complete input languages, but the point stands nevertheless). I don’t think my answer was very good last night, so here are some advantages of RDF :-

  • Meaning – RDF allows you to make set-based logical statements about classes of what are otherwise plain name/value pairs.
  • Metadata – RDF allows you to make statements about statements. You can use metadata to label configuration values, add annotations (in multiple languages if you like), or constrain the values to some valid range or set, or say something about the nature of the property. You can do this in a very limited way with XML (perhaps with attributes) but with JSON there’s nothing built-in or idiomatic.
  • Mergeability – RDF allows you to source statements from a wide variety of sources and merge the models together, whereas there’s nothing built-in or idiomatic in XML or JSON. In tree formats config statements have to group inside each other in a single hierarchy – designing this hierarchy is a job in itself. Graphs are more flexible since nodes can exist in multiple hierarchies if needs be.
  • Inference – in RDF, having some data allows you to infer other data which you would otherwise have to make explicitly. This has the potential to reduce data discrepancies. For example, given a database name, listener host and port you can ‘infer’ a database connection string.
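The database example in that last bullet can be sketched in plain Clojure (the jdbc URL format and map keys here are hypothetical, for illustration only):

```clojure
;; hypothetical sketch: 'inferring' a connection string from facts we
;; already hold, instead of storing it as yet another config value
(defn connection-string [{:keys [db-name host port]}]
  (format "jdbc:postgresql://%s:%d/%s" host port db-name))

(connection-string {:db-name "risk" :host "db1.example.com" :port 5432})
;; => "jdbc:postgresql://db1.example.com:5432/risk"
```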

That said, I’m not really pushing RDF as a config format. We took a gamble on it and it paid off in our case. Other projects are different. JSON is a great format that enables fast and simple data exchange (when you control both ends).

I also suggested that a domain model is more valuable for persistent data than for transient data structures. Object oriented languages encourage you to design the domain model internal to a program. But in my view there is more value in a domain model you can communicate between systems, and keep for longer periods, than in a domain model that you can only use privately (ie. in a single memory address space) and only while your application is running. This is the exact opposite of designing domain models in Java/C# classes and serializing out to a database or JSON/XML files, hence the need to illustrate with a real-world example (in this case, configuration).

What other Clojure frameworks do you use?

  • Compojure/Ring/Hiccup/Clout for web pages.
  • Plugboard for REST but the intention is to move towards something like compojure-rest
  • Swank – couldn’t manage without it!

It’s a surprise to me how much we manage to do with just the standard Clojure libraries.

Do you think functionality rises linearly or exponentially with lines of code?

I thought this was a great question because it points to the huge amount of algorithmic re-use that we enjoy in Clojure.

Did you have a specific business problem that led you to Clojure?

Honestly, no. In my case it was a growing frustration with large Java systems. But since we’ve been using Clojure in our team there have been a number of business problems that have cropped up that are ideally suited to Clojure. Certainly in my industry (banking) the business is built on mathematical functions and data transformations for which functional languages like Clojure are ideal.

Do you think Clojure will be around in 5 years’ time?

This final question was asked by someone sitting in the front row. I don’t think they would have asked this if they’d seen how many people were in the room! Clojure is building momentum, at least in London, and as I said in my talk I think it’s beyond the point of critical mass now.

But on reflection I think it’s an important question. Why should anyone invest a lot of time in learning something that isn’t going to be around in a few years? However, technology is always about betting on certain horses (VHS or Betamax?) and you can never be 100% certain. LISP is a good bet though, it’s survived over 50 years and people keep rediscovering it. So even if Clojure doesn’t survive, I’m confident the knowledge you get from learning it will remain relevant.

Transitive relations in core.logic

Inspired by some recent blogs I’ve been playing with core.logic.

In our use of RDF at work we have been developing on Jena, a Java library for processing and querying RDF triples. Overall, Jena is an excellent library with powerful inference capabilities, but while Clojure has great interop support for Java, you sometimes find mismatches between the two paradigms.

For example, Jena, being a Java library, supports concurrency via read/write locks rather than Clojure’s Software Transactional Memory. In practice this mismatch disrupts Clojure’s preference for lazy sequences – results have to be read in their entirety to avoid concurrent modification errors. Jena’s concurrency model is also confusing to use, and we’ve had to fix a number of hard-to-diagnose deadlocks in our code. Since we only use a small subset of OWL logical predicates in our dataset, it may be possible to replace Jena with something that works better with Clojure.

Both the RDFS and OWL inference engines in Jena work by extending the graph with additional links called ‘entailments’, which reflect the possible inferences that can be made using both forward and backward chaining strategies. Creating an inference graph can be expensive and is not something you want to do on every query. However, if your data model isn’t static you have to re-create the inference graph every time it changes. While there are inference engines that support such incremental updates efficiently, I was intrigued to see whether core.logic made it possible to avoid extending the data model and instead put the inferencing in the relations themselves.

Here’s an example. RDFS declares a subclass predicate which declares one class to be a subclass of another. Any instances of the subclass are, by inference, also instances of the superclass.

We can declare this relation in core.logic like so.

(use 'clojure.core.logic)

(defrel subclass-of p1 p2)

Here are some facts to illustrate the example.

(fact subclass-of "Royal Bengal Tiger" "Tiger")
(fact subclass-of "Tiger" "Cat")
(fact subclass-of "Cat" "Mammal")
(fact subclass-of "Mammal" "Animal")

Now we can get all the graph links with this query.

(run* [q]
      (fresh [a b]
             (subclass-of a b)
             (== q [a b])))

But rdfs:subClassOf is transitive (in OWL terms, an owl:TransitiveProperty). Never mind, we can transform the relation so that it acts transitively. Here’s the transformer function :-

(defn transitive [r]
  (letfn [(t [p1 p2]
            (fresh [between]
              (conde
                ((r p1 p2))
                ((r p1 between)
                 (t between p2)))))]
    t))

Now we can use it to get all the entailments as well.

;; This transforms the relation into a transitive relation
(run* [q]
      (fresh [a b]
             ((transitive subclass-of) a b)
             (== q [a b])))
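For comparison, here is the same closure over the four facts computed in plain Clojure (a sketch, not how core.logic works internally):

```clojure
;; the subclass facts as a map from class to direct superclass
(def direct-superclass {"Royal Bengal Tiger" "Tiger"
                        "Tiger" "Cat"
                        "Cat" "Mammal"
                        "Mammal" "Animal"})

;; walk the chain until it runs out - maps are functions of their keys
(defn superclasses-of [c]
  (take-while some? (rest (iterate direct-superclass c))))

(superclasses-of "Tiger")
;; => ("Cat" "Mammal" "Animal")
```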

Scalable sorting

Sorting data is an important step in gathering data together for aggregation and reporting purposes.

Most Clojure sort algorithms you’ll find on the web assume you can fit all the items to sort into the memory of a single JVM instance. For example, many merge sorts split the data set into two halves held entirely in memory.

Where I work we frequently have to deal with volumes of data that are far too large for these algorithms, so we have to find alternatives.

Here’s our approach to sorting large data sets. Although we did spend some time researching the problem, there are bound to be better algorithms out there. Please let me know if you find one!

Divide and conquer

To sort a large dataset we break it into smaller ‘bite-sized’ chunks which we can sort in memory, and write these sorted chunks to disk. Then we can run a lazy merge sort against these chunks. Although we currently run this on one node the nature of merge sort is that it can scale-out recursively and is compatible with map-reduce. In production use we’ve found that even on a single node huge datasets need a small degree of recursion just to avoid having too many chunks and running out of file handles. However, we’ll ignore recursion for the purposes of this discussion (we can always add it later).

A simple merge-sort function

First, let me print our merge-sort function minus the requisite doc-strings and comments.

(defn merge-sort
  ([^java.util.Comparator comp colls]
     (letfn [(next-item [[_ colls]]
               (if (nil? colls)
                 [:end nil]
                 (let [[[yield & p] & q] 
                       (sort-by first comp colls)]
                   [yield (if p (cons p q) q)])))]
           (->> colls
                (vector :begin) 
                (iterate next-item)
                (drop 1)
                (map first)
                (take-while (partial not= :end)))))
  ([colls]
     (merge-sort compare colls)))

Firstly, note the main function overload takes 2 arguments. The first is the Java comparator we’ll use to sort the data, the second is a collection of sorted sequences. These sequences don’t have to be in memory, they are lazy and their elements can be read in as we need them (from disk, network, etc.). However, we do assume the sequences have already been sorted with the same Java comparator given in the first argument. At the lowest level of recursion these sequences will be small enough to be sortable in memory and written out to disk – this is fairly trivial so we won’t cover it here.
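For completeness, the ‘sort in memory and write out’ step we skip over might look like this minimal sketch (the file naming and chunk size are hypothetical):

```clojure
(require '[clojure.java.io :as io])

;; hedged sketch of producing the pre-sorted chunk files
(defn write-sorted-chunks! [comp dir items chunk-size]
  (doall
    (map-indexed
      (fn [i chunk]
        (let [f (io/file dir (str "chunk-" i ".edn"))]
          (spit f (pr-str (sort comp chunk)))   ; each chunk fits in memory
          f))                                   ; return the chunk's file
      (partition-all chunk-size items))))
```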

We define a function called next-item which takes the collection sequences and returns a pair:

  • the first value is the next result in the merge-sorted sequence. This is called the yield.
  • the second value is the same collection of sequences we started with minus the yielded value. This is called the remainder and is used to calculate the next result.

Here’s the next-item function (first draft) :-

(fn [colls]
    (let [[[yield & p] & q] 
          (sort-by first comp colls)]
        [yield (cons p q)]))

The first job of the function is to sort the collections of sequences by their first values.

(sort-by first comp colls)

Here, comp is just the Java comparator that dictates the order of the sort.

Now we destructure. Since the item we want is the first item of the first sequence, we destructure like this :-

[[yield & p] & q]

  • yield is the item we are saying is the next item in the merged sorted sequence.
  • p is the rest of the sequence which contained yield.
  • q is the collection of all the other sequences.

(A nice ‘by-product’ of the sort-by is that q is already sorted. Even if the contents of p demand that we sort again, it won’t be an expensive operation. In fact, my own tests have shown that little or no performance improvement can be made by avoiding the use of Java’s sort by moving the head collection down the list – Java’s sort is very efficient for sets that are either ‘already sorted’ or ‘almost sorted’.)

Now we need to form our pair…

The first item is yield. That’s easy.

The second item is the collection of sequences with yield removed. That’s p joined onto the head of q, so we use cons.

(cons p q)

Remember that q is a collection of sequences sorted by ‘first item’. We don’t know that (cons p q) is sorted according to first item. It’s likely that the next result is in p or in (first q). However, we defer this sort operation to the next iteration, because re-sorting is the first thing we do in each iteration.

By the way, if p is nil then p is exhausted and we’ll just take q, hence this tweak :-

(if p (cons p q) q)

Putting this all together, our final next-item function is as follows :-

(fn [colls]
    (let [[[yield & p] & q] 
          (sort-by first comp colls)]
        [yield (if p (cons p q) q)]))

Repeat as necessary

Each time we run the next-item function we’ll get the next result in our merged sorted sequence. Sounds like another job for Clojure’s iterate function!
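The `:begin` seed and the drop-the-first-item step can be seen in miniature with a toy state (a hypothetical counter rather than our sort):

```clojure
;; each step ignores the previous yield and computes the next pair
(take 5 (iterate (fn [[_ n]] [n (inc n)]) [:begin 0]))
;; => ([:begin 0] [0 1] [1 2] [2 3] [3 4])
```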

Our next-item function needs to be amended slightly so that it can work iteratively – it needs to accept the same kind of structure that it produces. We amend the function signature so that it takes a pair as its argument, and we destructure to ignore the first item and label the second.

(fn [[_ colls]]
    (let [[[yield & p] & q] 
          (sort-by first comp colls)]
        [yield (if p (cons p q) q)]))

We’ll just have to initiate the iteration with a fake pair.

[:begin colls]

And we’ll have to ensure the iteration can tell the caller when to stop.

[:end nil]

Let’s build that ‘stop’ into our function :-

(next-item [[_ colls]]
  (if (nil? colls) [:end nil]
    (let [[[yield & p] & q] 
          (sort-by first comp colls)]
      [yield (if p (cons p q) q)])))

Now the inner function is finished let’s move onto the main body of the merge-sort function.

 (->> colls
     (vector :begin) 
     (iterate next-item)
     (drop 1)
     (map first)
     (take-while (partial not= :end)))

We start by taking the colls argument and applying the ->> threading macro to it. This means that the result of each step will be passed as the last argument of the next step.

(->> colls
    ... )

As we discussed earlier, we have to compose the first ‘fake’ pair to get the first iteration to work.

(vector :begin) 

This creates [:begin colls]. We should remember this is also the first item that iterate will yield so we’ll have to drop it later.

Now let’s run our next-item function (infinitely)

(iterate next-item)

We don’t want the first ‘fake’ result – it was just to help the iterate function, let’s drop it :-

(drop 1)

We’re only interested in the yield values (the first in each pair) :-

(map first)

And we need to continue until the stop marker :-

(take-while (partial not= :end))

Finally we overload the merge-sort function to use a default comparator where it isn’t specified.


Let’s define a function that can create n collections of size m, populated with random numbers.

user> (defn make-colls [n m]
        (for [i (range n)]
          (sort
            (for [j (range m)]
              (rand-int 1000000)))))

Let’s evaluate now so this structure is in memory and its creation doesn’t affect our merge-sort performance measurements.

user> (def colls (make-colls 100 20000))

Let’s test it to ensure that it’s working. We expect the first collection to be sorted.

user> (take 10 (first colls))

(15 33 88 94 114 119 148 164 189 190)

However, when we merge sort, the numbers from all the collections are merged so we get a less rapidly incrementing sequence of numbers.

user> (take 10 (merge-sort colls))    

(1 1 1 2 2 5 6 7 7 8)

Actually merge-sorting is quite useful

Sort algorithms are often used as a learning guide, especially for functional languages. But how useful is sorting for tackling business problems?

It turns out we use merge-sort in multiple places in our Clojure system. The first use was for grouping of risk data so it can be collated on a trade-by-trade basis. Then we decided to rewrite an old Java-based scheduler that was expensive to maintain. We now have a lazy scheduler (we call it ‘Chime’) that uses this lazy merge-sort to merge together multiple schedules of repeating tasks into a single schedule. I’ll discuss more about Chime in a future post.