it doesn't matter if your service is "micro" or "oriented" -- if it's tightly coupled, especially if your service is on the Web, you're going to be stuck nursing your service (and all its consumers) through lots of pain every time some little change happens (addresses, operations, arguments, process-flow). and that's just needless pain -- needless for you and for anyone attempting to consume your service.
tight coupling is trouble
tight coupling to any external component or service -- what i call a fatal dependency -- is big trouble. you don't want it. run away. how do you know if you have a fatal dependency? if some service or component you use changes and your code breaks -- that's fatal. it doesn't matter what code framework, software pattern, or architectural style you are using -- breakage is fatal -- stop it.
you can stave off fatalities by wrapping calls to dependencies in what Michael Nygard, in his book Release It!, calls a Circuit Breaker. but that requires you also have either 1) an alternate service provider (or a set of them) or 2) code written such that the unavailable dependency doesn't leave your app essentially unusable ("Sorry, our bank is unable to perform deposits today."). and the Circuit Breaker pattern isn't meant for services that introduce breaking changes anyway -- it's for cases when a dependent service is temporarily unavailable.
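to make the idea concrete, here's a minimal sketch of the Circuit Breaker pattern -- the names and threshold are mine, not from Release It!, and a production breaker would also track timeouts and a "half-open" retry state:

```python
class CircuitBreaker:
    """after too many consecutive failures the breaker 'opens' and calls
    fail fast with a fallback instead of hammering the broken dependency."""

    def __init__(self, call, fallback, max_failures=3):
        self.call = call              # the risky dependency call
        self.fallback = fallback      # what to do while the circuit is open
        self.max_failures = max_failures
        self.failures = 0

    def invoke(self, *args):
        if self.failures >= self.max_failures:
            return self.fallback(*args)   # open: fail fast, skip the dependency
        try:
            result = self.call(*args)
            self.failures = 0             # a success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            return self.fallback(*args)
```

note that the fallback is doing the real work here: if all it can say is "service unavailable", the breaker only limits the damage -- it doesn't make your app usable.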
you're much better off using services that make a promise to their consumers that any changes to the service will be non-breaking. IOW, changes to the interface will be additive only -- no existing operations, arguments, or process-flows will be taken away. this is not really hard to do -- except that existing tooling (code editors, build tools, and testing platforms) makes it really easy to break that promise!
there are lots of refactoring tools that make it hard to break existing code, but not many focus on making it hard to break existing public interfaces. and it's rare to see testing tools that go 'red' when a public interface changes even though they are great at catching changes in private function signatures. bummer.
so you want to use services that keep the "no breaking changes" pledge, right? that means you also want to deploy services that make that pledge, too.
honoring the pledge
but how do you honor this "no breaking changes" pledge and still update your service with new features and bug fixes? it turns out that isn't very difficult -- it just takes some discipline.
here's a quick checklist for implementing the pledge:
- promise operations, not addresses
service providers SHOULD promise to support a named operation (findCustomer) instead of promising an exact address for that operation (http://myservice.example.org/findCustomer). on the Web you can do that using properties like id that have predetermined values that are well-documented. when this happens, clients can "memorize" the name instead of the address.
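here's a hypothetical sketch of that idea: the client memorizes the operation name findCustomer and resolves the address at runtime from the service's own response. the link structure below is invented for illustration, not a real format:

```python
# a service response that advertises operations by well-known id;
# the href values can change freely without breaking any client.
service_response = {
    "links": [
        {"id": "findCustomer", "href": "http://myservice.example.org/customers/search"},
        {"id": "createCustomer", "href": "http://myservice.example.org/customers"},
    ]
}

def resolve(response, operation_id):
    # look up the promised operation name instead of hard-coding the URL
    for link in response["links"]:
        if link["id"] == operation_id:
            return link["href"]
    raise LookupError(f"operation {operation_id!r} not offered")

url = resolve(service_response, "findCustomer")
```

the client's only "bind" is to the string "findCustomer" -- move the service to a new host or path and nothing on the consumer side changes.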
- promise message formats, not object serializations
object models are bound to change -- and change often for new services. trying to get all your service consumers to learn and track all your object-model changes is just plain wrong. and, even if you wanted all consumers to keep up with your team's model changes, that would tie your feature velocity to the slowest consumer in your ecosystem -- blech! instead, promise generic message formats that don't require an understanding of object models. formats like VoiceXML and Collection+JSON are specifically designed to support this kind of promise. HTML, Atom, and other formats can be used in a way that maintains this promise, too. clients can now "bind" to the message format, not the object model -- changes to the model on the service don't leak out to the consumer. when this happens, adding new data elements to a response will not break clients.
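here's a sketch of format-level binding using a Collection+JSON-shaped message (the real media type defines more than is shown here). the client walks the format's generic name/value pairs instead of deserializing into a service-defined class:

```python
# a Collection+JSON-style response; "loyalty-tier" is a newly added
# element that older clients simply never ask for -- and never break on.
message = {
    "collection": {
        "items": [
            {"data": [
                {"name": "full-name", "value": "Ann Example"},
                {"name": "email", "value": "ann@example.org"},
                {"name": "loyalty-tier", "value": "gold"},
            ]}
        ]
    }
}

def datum(item, name):
    # bind to the message format's name/value convention,
    # not to a class definition shared with the service
    for pair in item["data"]:
        if pair["name"] == name:
            return pair["value"]
    return None

first = message["collection"]["items"][0]
```

a client coded this way only knows the format's shape ("items contain data, data pairs have name and value") -- the service's internal object model can churn all it wants.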
- promise transitions, not functions
service providers SHOULD treat all public interface operations as message-based transitions, not fixed functions with arguments. that means giving up on the classic RPC-style implementation patterns so many tools lead you into. instead, publish operations that pass messages (using registered formats like application/x-www-form-urlencoded) containing the arguments currently needed for that operation. when this happens, clients only need to "memorize" the argument names (all pre-defined in well-written documentation) and then pay attention to the transition details supplied in service responses. some "old skool" peeps call these transition details FORMs, but it doesn't matter what you call them as long as you promise to use them.
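here's a hypothetical sketch of a FORM-style transition. the structure below is invented for illustration: the response describes the argument names currently required, and the client fills them in by name rather than calling a fixed-signature function:

```python
# a transition description carried in a service response;
# the service can add arguments to "args" later without breaking clients.
transition = {
    "name": "findCustomer",
    "method": "GET",
    "href": "http://myservice.example.org/customers/search",
    "args": ["family-name", "region"],
}

def fill(transition, values):
    # build the request body from the transition's OWN argument list;
    # arguments the service no longer asks for are simply not sent
    return {arg: values.get(arg, "") for arg in transition["args"]}

body = fill(transition, {"family-name": "Example", "region": "us-east"})
```

the client memorizes argument names from the documentation, but the set of arguments actually sent is driven by what the response asks for today.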
- promise dynamic process-flows, not static execution chains
services SHOULD NOT promise fixed-path workflows ("I promise you will always execute step X, then A, then Q, then F, then be done."). that just leads consumers to hard-code that nonsense into their apps and break when you want to modify the workflow due to new business processes within the service. instead, services SHOULD promise operation identifiers (see above) along with a limited set of process-flow identifiers (done) that work with any process-flow you need to support. when this happens, clients only need to "memorize" the generic process-flow keywords and can be coded to act accordingly.
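here's a sketch of that flow-keyword binding -- the keywords and response shape are invented examples. the client memorizes a tiny generic vocabulary ("next", "done") and follows whatever sequence of steps the service emits, in whatever order the service chooses:

```python
# responses from the service: each names a step and a generic flow keyword.
# reorder the steps, add steps, remove steps -- the client below still works.
workflow = [
    {"flow": "next", "step": "X"},
    {"flow": "next", "step": "A"},
    {"flow": "next", "step": "Q"},
    {"flow": "done", "step": "F"},
]

def run(responses):
    executed = []
    for response in responses:
        executed.append(response["step"])
        if response["flow"] == "done":
            break   # the generic keyword ends the flow, not a hard-coded step count
    return executed

steps = run(workflow)
```

contrast this with a client that hard-codes "call X, then A, then Q, then F": the moment the service inserts a step, that client breaks.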
not complicated, just hard work
you'll note that all four of the above promises are not complicated -- and certainly not complex. but they do represent some hard work. it's a bummer that tooling doesn't make these kinds of promises easy. in fact, most tools do the opposite: they make address-based, object-serialized, fixed-argument functions and static execution chains easy -- in some tools these are the defaults and you just need to press "build and deploy" to get it all working. BAM!
so, yeah. this job is not so easy. that's why you need to be diligent and disciplined for this kind of work.
and -- back to the original point here -- decoupling addresses, operations, arguments, and process-flow means you eliminate lots of fatal dependencies in your system. it becomes safer to make changes in components without so much worry about unexpected side-effects. and this will be a big deal for all you microservice fans out there, because deploying dozens of independent services explodes your interface-to-operation ratio, and it's just brutal to do that with tightly-coupled interfaces that fail to support these promises inherent in a loosely-coupled implementation.
for the win
so, do not fear. whether you are a "microservice" lover or a "service-oriented" fan, you'll do fine as long as you make and keep these four promises. and, if you're a consumer of services, you now have some clear measures of whether the service you are about to "bind" to will result in fatalities in your system.
Web For All
one of the key principles of the W3C is "Web for All":
The social value of the Web is that it enables human communication, commerce, and opportunities to share knowledge. One of W3C's primary goals is to make these benefits available to all people, whatever their hardware, software, network infrastructure, native language, culture, geographical location, or physical or mental ability.
when a single vendor has the power to block others' access to competitive products, the Web is a place that doesn't live up to this principle. I know that Tim Berners-Lee is not responsible for the way Amazon operates. Neither is W3C CEO Jeffrey Jaffe. however, this unfolding battle to pre-empt Web users' ability to access any content anywhere is just another iteration of the same battle that timbl (Berners-Lee's handle) has called out numerous times at the network level -- what he calls the Battle For The Net.
users over all others.
i know the W3C has attempted to deal with this in the past -- to mixed reviews. the very fact that Tim penned that piece on DRM and the Web shows that the W3C is aware of the issue. and a key part of that (now two-year-old) position paper was the notion of "users over all others":
In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.
but events in this space seem to continue to run ahead of the W3C's ability to deal with them. and that's a big problem.
i know it's hard, but it is important
i understand that the job of reining in corporate greed on the open Web is a tough one. and someone who has the best interests of users above all others needs to be leading the discussion -- not following it, and certainly not silent while others establish the "operating rules" for yet another walled garden of profit built on the backbone of the free and Open Web.
enabling commerce is a good thing.
permitting exploitation and profiteering is not.
Tussle in Cyberspace
there is a great paper that deals with some of these issues: Tussle in Cyberspace: Defining Tomorrow's Internet. essentially:
This paper explores one important reality that surrounds the Internet today: different stakeholders that are part of the Internet milieu have interests that may be adverse to each other, and these parties each vie to favor their particular interests. We call this process “the tussle”.
the "tussle" is important. not because it happens, but because one of the key values of an open internet (not just Web) is that the Internet can be crafted (through standards) to "level the playing field" for all. this is why "Web For All" is important. this is why the Battle for the Net is important.
and it is the same for video content, too.
why standards exist
we don't need to "regulate" vendors, we need to continue to make sure the tussle is fair to all. that's why standards exist -- to level the playing field.
and, IMO, that's why the W3C exists.
and that's why the current news is disturbing to me.
Layer7's brain child
the idea for the API Academy came out of the core leadership of what was then known as Layer7. basically, it was the chance to pull together some of the great people involved in APIs for the Web and focus on promoting and supporting API-related technologies and practices. from the very beginning it was decided that the API Academy would focus on "big picture stuff" -- not a particular product or technology segment. and we dove in with both feet.
my first talk as part of the Academy was June 2012 on hypermedia and node.js at QCon NYC. Ronnie and i finally met face-to-face later that summer at the Layer7 office in Vancouver. we both talked at RESTFest 2012 where Layer7 became a key sponsor of that event. in December of 2012, Ronnie and I were proud to join Mehdi Medjaoui at the very first API Days in Paris. and we've been going strong ever since.
the API Academy is an amazing group of people. over the last three years, along w/ Matt and Ronnie we've had the chance to host Alex Gaber, Holger Reinhardt -- both of whom have moved on to other things -- and Irakli Nadareishvili who came to us from his work at National Public Radio here in the US. the oppty to work side-by-side with people of this caliber is a gift and i am happy to be a part of it.
since i joined API Academy, Layer7 has merged w/ CA Technologies and this has given us the chance to expand our reach and increase our contact w/ experienced people from all over the world. since joining w/ CA we've had the chance to travel to Australia, Hong Kong, Tokyo, Seoul, Beijing, and several cities in Eastern Europe. later this year there will be visits to Shanghai and Rio de Janeiro along w/ dozens of cities in the US and Europe. along the way the API Academy continues to meet w/ CA product teams and leadership, offering to assist in any way we can to continue the same mission we had at our founding.
Help People Build Great APIs for the Web
i've worked lots of places, with diverse teams, in all sorts of cultures. i have to say that my experience w/ the API Academy and w/ CA continues to be one of the most rewarding and supportive i've ever enjoyed. i am very lucky that i get to do the kind of work i love w/ people that challenge my thinking, support my experimenting, and offer excellent opportunities to learn and grow along the way.
the last three years have been full of surprises and i can't wait to see what the next three years brings. i am truly a #luckyMan
i'm very proud to announce that InfoQ has just released a new series that I helped edit. the series is called Description, Discovery, and Profiles: The Next Level in Web APIs and it features a set of excellent contributing authors including Ronnie Mitra of API Academy/CA, Mike Stowe of MuleSoft, Kin Lane of API Evangelist fame, and Mark Foster from Apiary. it also includes interviews with Swagger creator Tony Tam and Profile RFC editor Erik Wilde. and it's packed with material covering what i think are three key patterns/technologies in the Web API space:
- "The ability to easily describe APIs including implementation details such as resources and URLs, representation formats (HTML, XML, JSON, etc.), status codes, and input arguments in both a human- and machine-readable form. There are a few key players setting the pace here."
- "Searching for, and selecting Web APIs that provide the desired service (e.g. shopping, user management, etc.) within specified criteria (e.g. uptime, licensing, pricing, and performance limits). Right now this is primarily a human-driven process but there are new players attempting to automate selected parts of the process."
- "Long a focus of librarians and information scientists, 'Profiles' that define the meaning and use of vocabulary terms carried within API requests and responses are getting renewed interest for Web APIs. Still an experimental idea, there is some evidence vendors and designers are starting to implement support for Web API Profiles."
lots of vendors and technologies here
this is a pretty wide-ranging set of topics and lots of vendors and technologies are highlighted over the next seven articles. some of them include:
- API Blueprint
- Visual Studio
- I/O Docs
- API Commons
- Programmable Web's API Directory
- Mashery's API Network
- Apache Zookeeper
- HashiCorp Consul
- CoreOS etcd
- Rapido API Designer
- Spring Data
and that's just in the first article in the series!
here's a quick rundown of all the articles that will be released between late May and early July:
Description, Discovery, and Profiles: A Primer
this article takes a look at several formats and key vendors, and identifies the opportunities and challenges in this fast-moving portion of the Web API field.
From Doodles to Delivery : An API Design Process
Ronnie Mitra investigates what good design is and how using Profiles along with an iterative process can help us achieve it.
The Power of RAML
Mike Stowe introduces us to the RAML format, reviews available uses and tools, and explains why Uri Sarid, the creator of RAML, wanted to push beyond our current understandings and create a way to model our APIs before writing even one line of code.
APIs with Swagger : An Interview with Reverb's Tony Tam
I talk to founder and inventor Tony Tam about the history, and the future, of one of the most widely-used API Description formats today: Swagger.
The APIs.json Discovery Format: Potential Engine in the API Economy
In this piece, Kin Lane describes his APIs.json API discovery format, which can provide pointers to available documentation, licensing, and pricing for existing Web APIs.
Profiles on the Web: An Interview with Erik Wilde
In early April, 2015 Erik agreed to sit down with InfoQ to talk about Profiles, Description, Documentation, Discovery, his Sedola project and the future of Web-level metadata for APIs.
Programming with Semantic Profiles : In the land of magic strings, the profile-aware is king.
Mark Foster -- one of the editors of the ALPS specification -- explains what semantic profiles are and how they can transform the way Web APIs are designed and implemented.
A Resource Guide to API Description, Discovery, and Profiles
To wrap up the series, we offer a listing of the key formats, specifications, tools, and articles on API Description, Discovery, and Profiles for the Web.
looking forward to the weekly releases
it was a pleasure working with such a distinguished group of authors and practitioners in this very important space and i am looking forward to the continued releases between late May and early July. i'm also looking forward to feedback and discussion from readers of the series.
the Web is a dynamic and fast-moving space and it should be interesting to keep an eye on this "meta-level" of the API eco-system for some time to come.
it's that time of year again! RESTFest, one of my favorite geek events of the year, will be happening (once again) in beautiful Greenville, SC. The dates of the event this year are Sep 17-19 and there are still tickets available. And this year's event is shaping up to be another great combo of hacking, demos, lightning talks and socializing. You can see what last year was like by checking out the 2015 promo video.
Keynote: IBM's James Snell
we're proud to announce that this year's keynote speaker is James Snell. i've known James for several years. he's a prolific man and has been involved in editing/authoring several standards including the IETF's Atom Syndication Format and HTTP PATCH method, as well as the W3C's Activity Streams spec. his keynote, Practical Semantics, is bound to be excellent.
the RESTFest way...
at RESTFest, we have a core set of principles that we think helps make for a unique and valuable experience for everyone involved. they boil down to...
- everybody talks
- if you show up, you're delivering a talk! our first principle is "everyone talks and everybody listens." for the past five years, we've stuck to a single track event and that allows everyone to hear *all* the talks, too. we all get to interact and experience the event in the same real-time space.
- less theory, more practice
- theories, formal papers, etc. etc. are all good, but we don't need them at RESTFest. we just want to hear what's on your mind, what you're working on, and what you are interested in talking about. show us your code!
- hacking is good
- day one is "hack day" and each year has a unique theme. anyone can propose a hackday theme and we encourage attendees to submit ideas and come prepared to code in whatever format, style, and framework they love. you can track and contribute to the hackday theme on the wiki.
- don't be a jerk
- our Code of Conduct is very simple and very clear. essentially -- "Don't be a Jerk." it is critical that RESTFest be a safe, inviting, and positive experience for everyone. you're free to speak your mind as long as you're respectful.
Sign Up and Start Interacting NOW
actually, RESTFest has already started! right now you can sign up at the wiki, add your people page, and introduce yourself to the group. you can keep up on breaking news on our email list and secure your place at the event by purchasing one of the limited tickets for RESTFest 2015. if you're really itching to get connected, drop into our IRC channel on freenode or link to our Twitter account and start chatting.
RESTFest is what you want it to be
it's the people who show up each year that make RESTFest such a great event. each year is different and each year is amazing. check out our video channel to see all the talks from the last few years. spots are limited and we'd love to see you there!