Brandon’s talk was mostly a (great!) re-hash of Martin Fowler’s excellent article on this topic here so I won’t attempt to re-write all of what he and Martin covered, but I will instead focus on the points that seemed salient to me.
A few words of context before I start, however. I work in a corporate/enterprise IT environment – i.e. my team provides IT services to (predominantly) internal customers. Our architecture is largely monolithic and inflexible, and we struggle to meet the demands of the business in terms of responsiveness or cost. We are considering moving – incrementally, I expect – to a more SOA-like model of operating to try to address these concerns, and it’s in this light that I’ve picked out items from Brandon’s talk.
Brandon started out with a couple of comments and principles, the first being the assertion that “most REST mistakes happen with: versioning, deployment, testing and service granularity”. This is an interesting and valuable insight: most early discussion of REST seems to centre on the HATEOAS constraint and its implications for system design (certainly that’s where I’ve spent most of my time discussing RESTful topics in the past), and not enough attention has been given to these other, more detailed issues.
He then moved on to make an interesting observation:
You should value choreography over orchestration
I love this comment. It says that the actual business processes emerge from interactions between services (choreography between those services), whereas orchestration is top-down design imposed upon those services, and implemented via an ESB. Although orchestration simplifies an architecture diagram (it says there’s only one way of using the services!) it doesn’t simplify the architecture itself: you have to enforce that one way of using it, and you are reliant on state and contract fulfillment across the entire architecture, at all times – including across release boundaries.
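To make the distinction concrete, here is a minimal sketch of choreography – all event names, services and payloads are invented for illustration. Each “service” subscribes only to the events it cares about, and the end-to-end order flow emerges from their interactions rather than being enforced by a central orchestrator or ESB:

```python
# Minimal choreography sketch (all names invented): services react to
# events; no single component knows or enforces the end-to-end flow.

subscribers = {}
audit_log = []  # records the order in which steps actually happened


def subscribe(event, handler):
    subscribers.setdefault(event, []).append(handler)


def publish(event, payload):
    audit_log.append(event)
    for handler in subscribers.get(event, []):
        handler(payload)


# Each service registers interest only in the events it consumes:
subscribe("order.placed",    lambda o: publish("payment.charged", o))
subscribe("payment.charged", lambda o: publish("shipment.booked", o))

publish("order.placed", {"id": 42})
print(audit_log)  # → ['order.placed', 'payment.charged', 'shipment.booked']
```

An orchestrated version of the same flow would instead have one component call each service in a fixed, top-down sequence – simpler to draw, but that single enforced pathway is exactly the coupling the talk warns about.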
Next up were some key lessons to apply when implementing a RESTful SOA, though I think many of these lessons are more generally applicable in any case:
1. Define logical environments for isolation – one for each need. Developers should work in isolation from one another unless it makes sense to share, or where the cost/complexity of the isolation doesn’t actually give real benefit. This isolation need not mean different physical or virtual machines: services can run on different ports, for example, for different uses. A fixed hierarchy of one dev, one test, one UAT and one prod environment does not meet the needs of all users, and should not be slavishly followed. I think we suffer from this poor understanding of environmental isolation, enforcement of “rules”, and rationing of server resources in my own corporate environment. [Aside: a useful tool for building dev environments in a repeatable way is Vagrant]
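The “different ports, not different machines” point can be sketched as a simple environment map – the environment names, hosts and ports below are all invented for illustration:

```python
# Sketch: logical environments need not be separate machines; several
# can share one box, distinguished only by port. All values invented.
ENVIRONMENTS = {
    "dev-alice":   {"host": "shared-box", "port": 8001},
    "dev-bob":     {"host": "shared-box", "port": 8002},  # isolated peer
    "integration": {"host": "shared-box", "port": 9001},
    "uat":         {"host": "uat-box",    "port": 8080},
}


def service_url(env: str) -> str:
    """Resolve a logical environment name to a concrete base URL."""
    cfg = ENVIRONMENTS[env]
    return f"http://{cfg['host']}:{cfg['port']}"


print(service_url("dev-alice"))  # → http://shared-box:8001
```

The useful property is that adding a new logical environment for a new need is a one-line config change, not a server-provisioning request.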
2. Use versioning only as a last resort. Versioning of RESTful services is self-evidently easier and more flexible than static or semi-static binding of those same services in code, say, but as a result people tend to reach for it as the first answer, and it brings its own complexities – for example coordinating the configuration of a given environment, and the differently versioned dependencies of different services. Postel’s Law suggests an alternative approach to try, at least in the first instance. When you do have to version, using semantic versioning to raise awareness of compatibility issues is helpful.
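The Postel’s Law alternative is often called a “tolerant reader”: the consumer picks out only the fields it needs and ignores anything extra, so the provider can add fields without forcing a new service version. A small sketch – the payload shape and field names are invented:

```python
# Tolerant reader sketch (payload shape invented): be liberal in what
# you accept, so additive provider changes don't require a new version.
import json


def read_customer(payload: str) -> dict:
    data = json.loads(payload)
    return {
        "id": data["id"],                      # required field
        "name": data.get("name", "<unknown>"), # optional, with default
        # any unknown fields are simply ignored
    }


# The original payload:
v1 = '{"id": 1, "name": "Ada"}'
# A later payload adds a field; the tolerant reader is unaffected:
v2 = '{"id": 1, "name": "Ada", "loyalty_tier": "gold"}'

print(read_customer(v1) == read_customer(v2))  # → True
```

Only a genuinely breaking change – removing or redefining a field the consumer depends on – would then force a version bump, which under semantic versioning is exactly when the major version should change.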
3. Separate functional testing from integration testing, and make full use of stubs, including over-the-wire stubs and stub servers. Some modern stub servers can record real traffic and replay it. Some useful tools include:
- Moco: https://github.com/dreamhead/moco
- vcr: https://github.com/vcr/vcr
- Betamax: https://github.com/robfletcher/betamax
- stubby4j: https://github.com/azagniotov/stubby4j
- mountebank: http://www.mbtest.org/
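To show what an over-the-wire stub actually does, here is a hand-rolled sketch in the spirit of the tools above (it uses none of them, just the standard library, and the paths and payloads are invented): canned responses are served per path, so a consumer’s functional tests can run without the real service being up at all.

```python
# Hand-rolled over-the-wire stub server (illustrative only; real
# projects should prefer the tools listed above). Paths/payloads invented.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

CANNED = {
    "/customers/1": b'{"id": 1, "name": "Ada"}',
    "/orders/7":    b'{"id": 7, "status": "shipped"}',
}


class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass


server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

print(urlopen(base + "/customers/1").read())
```

Record-and-replay tools like vcr or Betamax go one step further: the canned map is captured automatically from real traffic rather than written by hand.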
4. Treating service end-points as user stories forces you to test appropriately, and to consider service granularity at the right level. Test Driven Design (as opposed to Development) forces you to have multiple consumers (since your test client is a consumer), which really forces better decoupling of the services in the system as a whole. Using the “last known good versions of other services” and running those tests is also a powerful tool, since you’re not only testing the service you are changing, but how it is used. Martin Fowler’s post has some good diagrams to illustrate this, as well as the use of development pipelines.
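The “last known good” idea can be sketched as pinning a dependency’s recorded response as a test fixture – all service names, fields and values below are invented:

```python
# Sketch (all names invented): test a changed service against the
# recorded "last known good" response of a dependency, so the test
# exercises how the dependency is consumed, not just the change itself.

LAST_KNOWN_GOOD = {
    # captured from the pricing service's last good release
    "pricing-service": {"net_pence": 10000, "tax_pence": 2000},
}


def quote_total_pence(pricing_response: dict) -> int:
    """The service under change: consumes the pricing service's response."""
    return pricing_response["net_pence"] + pricing_response["tax_pence"]


def test_quote_total_against_last_known_good():
    # The test client is itself a second consumer of the service,
    # which is what pushes the design towards decoupling.
    response = LAST_KNOWN_GOOD["pricing-service"]
    assert quote_total_pence(response) == 12000


test_quote_total_against_last_known_good()
```

If the pricing service ships a new good version, the fixture is re-recorded, and any consumer whose assumptions have silently drifted fails here rather than in production.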
5. Use bounded contexts to control complexity – well, this is a general statement of good design IMHO, though Brandon gave some very good examples of how this can go wrong and some advice about how to mitigate it. He suggested using the “value context” to simplify structure and design: just because something is called the same thing in different contexts doesn’t mean it IS the same thing, nor that other contexts/consumers all need to see the same attributes/aspects of that entity. Other groups and contexts can augment the entity/data with additional attributes which are context-specific and which can be constrained only to those contexts that need them. I think this is a little like having a “normalised” service design, in the same way you might have a normalised data structure in an RDBMS.
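A small sketch of that “value context” idea – the entity and all its fields are invented for illustration. “Customer” names the same underlying thing in both contexts, but each context augments it with only the attributes it needs, and neither depends on the other’s additions:

```python
# Sketch (invented fields): one core entity, per-context augmentation.
from dataclasses import dataclass


@dataclass
class Customer:
    """Core attributes shared by every context."""
    id: int
    name: str


@dataclass
class BillingCustomer:
    """The billing context's view: core entity plus billing-only data."""
    core: Customer
    payment_terms_days: int


@dataclass
class ShippingCustomer:
    """The shipping context's view, with its own additions."""
    core: Customer
    default_address: str


core = Customer(id=1, name="Ada")
billing = BillingCustomer(core=core, payment_terms_days=30)
shipping = ShippingCustomer(core=core, default_address="1 Example St")

# Same name, same core entity, but neither context sees the other's
# attributes – shipping has no payment_terms_days at all:
print(hasattr(shipping, "payment_terms_days"))  # → False
```

This is where the RDBMS normalisation analogy holds up: the shared core is defined once, and context-specific attributes live with the context that owns them rather than bloating a single universal record.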