Tuesday, October 14, 2014

Distilling the Command Pattern with micro-services

One of my favorite design patterns to reach for while developing software is the command pattern. I’ve found that varying the pattern’s implementation to focus on a few of its underlying design principles has been a very beneficial exercise. I’ll start by discussing the classic version of the command design pattern, then we'll see if we can get some of its benefits without all of the work.

The formal past

In the past, I've used the command pattern in a more formal, heavyweight way. Living in a CQRS world, a request comes across from a client or another piece of the domain carrying the details needed to construct the command object. The command object is often a class that is just a data bag, loaded with everything required for the execution of the command to take place. When used this way, you can sling your commands at something that can manage your application load: drop a command onto a queue, give it a priority, and walk away. Async magic happens, Bob’s your uncle, and your work eventually gets done.
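
To make that concrete, here's a rough sketch of that heavyweight flavor in Python. The command name, its fields, and the in-process priority queue are all illustrative stand-ins, not something the pattern prescribes:

    import json
    import queue

    class CreateInvoiceCommand:
        def __init__(self, customer_id, line_items, priority=5):
            self.customer_id = customer_id
            self.line_items = line_items
            self.priority = priority

        def serialize(self):
            # Everything a worker needs travels with the command itself.
            return json.dumps(self.__dict__)

    command_queue = queue.PriorityQueue()

    def handle_request(request):
        # Ready the command from the incoming request, sling it, and walk away.
        cmd = CreateInvoiceCommand(request["customer_id"], request["line_items"])
        command_queue.put((cmd.priority, cmd.serialize()))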

There are some great benefits to that level of formalization of the pattern (a quick sketch follows the list), such as:
  • the ability to “undo” the command
  • the ability to record the commands, for future replaying and/or simulation of system load
  • the ability to distribute the load of the commands across multiple workers
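
A hypothetical sketch of the kind of interface that makes those benefits possible; the class names are mine, and a real system would likely persist the history somewhere rather than hold it in memory:

    class Command:
        def execute(self):
            raise NotImplementedError

        def undo(self):
            raise NotImplementedError

    class CommandLog:
        def __init__(self):
            self.history = []

        def run(self, command):
            command.execute()
            self.history.append(command)  # kept around for undo, replay, or load simulation

        def undo_last(self):
            if self.history:
                self.history.pop().undo()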


Is that formalization needed for micro-services?

Sometimes all of that formalization isn't needed, especially when it comes to micro-services. One way to leverage this pattern in a more lightweight manner is to separate the orchestration and "readying" of the command from the actual execution of the command. That separation is the secret sauce, the holy grail, and all that’s really needed to start. Micro-services don’t always need the level of ceremony that the classic implementation of the command pattern calls for; the key is decoupling the actual execution of a command from the work of getting it ready to execute.
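
Here's roughly what that lighter-weight separation can look like in Python. The DeactivateUser command and its user_repository collaborator are made-up names; the point is only that construction readies the command and execute() runs it:

    class DeactivateUser:
        def __init__(self, user_id, user_repository):
            # "Readying" the command: everything it needs is gathered here.
            self.user_id = user_id
            self.users = user_repository

        def execute(self):
            # Executing the command: the orchestration layer only ever calls this.
            user = self.users.find(self.user_id)
            user.active = False
            self.users.save(user)

    # The caller readies the command now and decides when (and where) to run it:
    #   command = DeactivateUser(user_id=42, user_repository=users)
    #   command.execute()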

When this line in the sand is drawn, it creates a separation that I've found to be extremely valuable. Here’s why:

Forces design of dependencies up front

When a command executes, it needs to be self-contained. At a bare minimum, it has to carry everything it needs to spin up its dependencies. I've worked on a few commands that require a database connection. Assuming you aren't serializing the command and passing it along to something else to do that work, I've liked the pattern of handing it a reference to a database connection or a handle to the service layer. With single responsibility in mind, the command shouldn't have to know how to go about getting a connection; that keeps your commands focused and small. The same approach applies to most of a command’s dependencies.
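
For example, the command can accept an already-built connection instead of knowing how to create one. This sketch assumes a psycopg2-style connection object (cursor() usable as a context manager, plus commit()); the command and table names are just illustrations:

    class ArchiveOldOrders:
        def __init__(self, cutoff_date, db_connection):
            self.cutoff_date = cutoff_date
            self.db = db_connection  # injected, never created inside the command

        def execute(self):
            with self.db.cursor() as cur:
                cur.execute(
                    "UPDATE orders SET archived = TRUE WHERE placed_at < %s",
                    (self.cutoff_date,),
                )
            self.db.commit()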

Tell, don’t ask.

By leveraging the command pattern, responsibility for the action that must happen is delegated to the command itself. The caller, or invoker, of the command shouldn't worry about the state of the system, what the data may look like, or whether the command is even a valid action. That logic belongs to the object being called, not the caller. Pushing it into the command eliminates the back and forth between the caller and the command.
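
A small, made-up contrast shows the difference; the account and command names are illustrative:

    # "Asking": the caller inspects state and makes the decision itself.
    #   if account.balance >= amount and not account.frozen:
    #       account.balance -= amount

    # "Telling": the caller hands the whole decision to the command.
    class WithdrawFunds:
        def __init__(self, account, amount):
            self.account = account
            self.amount = amount

        def execute(self):
            # The validity check lives with the command, not the caller.
            if self.account.frozen or self.account.balance < self.amount:
                raise ValueError("withdrawal not allowed")
            self.account.balance -= self.amount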

Dependency injection

When the dependencies are defined up front, dependency injection can be leveraged. A different service, database, or endpoint can be injected into the command without any (or much) rework.
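
For instance, a command that only depends on a transport it's handed can be pointed at a completely different backend at the call site. The PublishEvent command and the transport classes here are hypothetical:

    class PublishEvent:
        def __init__(self, event, transport):
            self.event = event
            self.transport = transport  # any object with a send() method will do

        def execute(self):
            self.transport.send(self.event)

    # Swapping the backend is a change at the call site, not in the command:
    #   PublishEvent(event, transport=KafkaTransport(brokers)).execute()
    #   PublishEvent(event, transport=InMemoryTransport()).execute()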

Great entry point for tests

Since our command consumes its dependencies, it also creates a fantastic point of entry for tests. Commands can now be easily tested in isolation, and passing in test or mock databases is easy. This point alone is enough to sell me on continuing the approach into the future.
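
Reusing the hypothetical DeactivateUser sketch from earlier, a test can inject an in-memory fake in place of the real repository:

    from types import SimpleNamespace

    class FakeUserRepository:
        def __init__(self, users):
            self.users = users

        def find(self, user_id):
            return self.users[user_id]

        def save(self, user):
            self.users[user.id] = user

    def test_deactivate_user_marks_user_inactive():
        user = SimpleNamespace(id=42, active=True)
        repo = FakeUserRepository({42: user})

        DeactivateUser(user_id=42, user_repository=repo).execute()

        assert repo.find(42).active is False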

It’s ready to scale

If needed, you are set up to mature the command into something bigger. Because the commands are isolated, how they are invoked can still change. You could morph your system into the larger, more formal version of the command pattern if you’d like to take advantage of some of the things listed above.

As Donald Knuth has famously said:

     “Premature optimization is the root of all evil (or at least most of it) in programming.”

I've typically pushed for creating the command without worrying about performance at the same time. Because the command is decoupled from the orchestration of how it gets executed, it’s now easy(ish) to add concurrency or load distribution afterward. When that step is saved until the end, you can see whether scaling is even needed. You may be pleasantly surprised to find that your application keeps up with the data load just fine, without the added complexity of concurrency.
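
As a sketch of how little the commands themselves have to change, here's one way concurrency could be bolted on afterward with a thread pool; run_all is a made-up bit of orchestration, not part of the pattern:

    from concurrent.futures import ThreadPoolExecutor

    def run_all(commands, max_workers=4):
        # The commands don't change; only the orchestration around them does.
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            futures = [pool.submit(command.execute) for command in commands]
        for future in futures:
            future.result()  # re-raise anything that failed in a worker

    # Yesterday:  for command in commands: command.execute()
    # Today:      run_all(commands)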