mca blog progress -- Sat, 26 May 2018 13:09:33 GMT -- en-us

Preserving the Egg on the Web, 31 Mar 2018 12:32:00 GMT

<p> <a href=""><img src="" width="300" class="inline" align="right" /></a> When I plan out an implementation for the Web, one of the things I think about is the problem of "breaking eggs." One great example of this is the old adage, "You can't make an omelette without breaking some eggs." That's cute. It reminds us that there are times in our lives when we need to commit. When we need to forge ahead, even if some people might disagree, even if there seems to be "no turning back." </p> <p>However, this "omelette" adage is not what I mean when I think about Web implementations and eggs.</p> <p> Instead, I think about <a href="">entropy</a> and how you cannot 'unscramble' an egg. I won't go into the physics or philosophical nuances of this POV except to say, when I am working on a web implementation I work very hard to avoid 'breaking any eggs' since it will be quite unlikely that I'll ever be able to put those eggs back together again.</p> <p>I don't want my Web solution to end up like <a href="">Humpty Dumpty!</a></p> <h3>Web Interactions as Eggs</h3> <p> The web is a virtual world. It is highly-distributed and non-deterministic -- much like our physical world. We can't know all the influences and their effects on us. We can only know our immediate surroundings and surmise the influences based on what we observe locally. The world is a random place. </p> <p>So each time we fill out a form and press "send", each time we click on a link, we're taking a risk and stepping into the unknown. 
For example:</p> <ul> <li>Is there really a page at the other end of this link or is there a dreaded 404 waiting for me at the other end?</li> <li>Have I filled out the form correctly or will I get a 400 error instead?</li> <li>Or, have I filled out the form correctly, only to encounter a 500-level server error?</li> <li>Finally, what if I've filled out the form, pressed "send" and never get a response back at all? What do I do now?</li> </ul> <h3>But What Can Be Done?</h3> <p>When I set out to implement a solution on the Web, I want to make sure to take these types of outcomes into account. I say "take them into account" because the truth is that I cannot <i>prevent</i> them. Most of the time these kinds of failures are outside my control. However, using the notion of <a href="">Safety-I and Safety-II</a> from Erik Hollnagel, I can adopt a different strategy: while I can't prevent system failures, I can work to survive them.</p> <p>So how can I survive unanticipated and un-preventable errors in a system? I can do this by making sure each interaction is not an "egg-breaking" event. An "egg-breaker" is an action that cannot be un-done, cannot be reversed. In the web world, this is an interaction that has only two outcomes: "success or a mess."</p> <p>A great example of the sad end of the "success-or-a-mess" moment is an action like "Delete All Data." We've probably all experienced a moment like this. Most likely we've answered "yes" or "OK" to a confirmation dialog and the moment we did, we realized (too late) that we "<a href="">chose poorly</a>." There was no easy way to fix our mistake. We had a mess on our hands.</p> <p>The obvious answer to this kind of mess is to support an "undo" action to reverse the "do." This turns an "egg-breaking" event into an "egg-preserving" event. 
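To make the idea concrete, here is a tiny "do/undo" sketch in Python (the store and its method names are invented for illustration, not taken from any real system): every write remembers enough to reverse itself, so no single "send" has to break the egg.

```python
# A minimal "do/undo" sketch (illustrative names only): each
# state-changing action records the previous value so the write
# can always be reversed later.

class ReversibleStore:
    def __init__(self):
        self.data = {}
        self.history = []  # stack of (key, previous_value) pairs

    def set(self, key, value):
        # remember the previous value so this write can be undone
        self.history.append((key, self.data.get(key)))
        self.data[key] = value

    def undo(self):
        # reverse the most recent write
        key, previous = self.history.pop()
        if previous is None:
            self.data.pop(key, None)
        else:
            self.data[key] = previous

store = ReversibleStore()
store.set("status", "active")
store.set("status", "deleted")   # the "oops" moment
store.undo()                     # ...and the egg is preserved
print(store.data["status"])      # -> active
```

The point of the sketch is the shape, not the storage: any interaction that carries its own reversal with it stops being an "egg-breaker."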
And that's what I try to do with as many of my Web implementations as possible -- preserve the egg.</p> <p>Let's look at some other ways to prevent breaking eggs when implementing solutions in a non-deterministic world...</p> <h3>Network-Level Idempotency</h3> <p> One of the ways you can avoid "a mess" is to make sure your actions are idempotent at the network level. That means they are <b>repeatable</b> and you get the same results every time. Think of an SQL UPDATE statement. You can update the <code>firstName</code> field with the value <code>"Mike"</code> over and over and the <code>firstName</code> field will always have the same value: <code>"Mike"</code>.</p> <p>In the HTTP world, both the <code>PUT</code> and <code>DELETE</code> methods are designed as idempotent actions. This means, in cases where you commit a <code>PUT</code> and never receive a response, you can repeat that action without worry of "breaking the egg."</p> <p>Relying on network-level idempotency is very important when you are creating autonomous services that interact with each other without direct human intervention. Robots have a hard time dealing with non-idempotent failures.</p> <h3>Service-Level Event Sourcing</h3> <p> At the individual service level, a good way to "preserve the egg" is to make all writes (actions that change the state of things) <b>reversible</b>. Martin Fowler shows how this can be done using <a href="">Event Sourcing</a>. Event Sourcing was explained to me by Capital One's <a href="">Irakli Nadareishvili</a> as a kind of "debit-and-credit" approach to data updates. You arrange writes as actions that can be reversed by another write. Essentially, you're not "un-doing" something, you're "re-doing" it. 
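A minimal sketch of this "debit-and-credit" idea might look like the following (Python, with invented names -- see Fowler's article for real-world designs): state changes are appended to a log as events, a reversal is just another event in the opposite direction, and the current state is computed by replaying the log.

```python
# A hedged event-sourcing sketch (names are illustrative):
# writes are appended as events; an "undo" is a compensating
# event that re-does the change in the opposite direction.

events = []  # the append-only log

def record(amount, reason):
    events.append({"amount": amount, "reason": reason})

def balance(upto=None):
    # replaying a slice of the log gives a "temporal query":
    # the state as of any point in the event history
    return sum(e["amount"] for e in events[:upto])

record(+100, "deposit")
record(-30, "purchase")
record(+30, "purchase reversed")  # not an un-do, a re-do

print(balance())        # -> 100
print(balance(upto=2))  # -> 70, the balance before the reversal
```

Because the log is append-only, nothing is ever lost -- reversing a write is just another write.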
</p> <p>Fowler shows that, by implementing state changes using Event-Sourcing, you get several benefits including:</p> <ul> <li>detailed change logs</li> <li>the ability to run a complete rebuild</li> <li>the ability to run a "temporal query" (based on a set date/time)</li> <li>the power to replay past transactions (to fix or analyze system state)</li> </ul> <p>I like to say that, with Event-Sourcing, you can't reverse the arrow of time, but you can move the <i>cursor</i>.</p> <h3>Solution-Level Sagas</h3> <p> In 1987, Garcia-Molina and Salem published a paper simply titled "<a href="">Sagas</a>." This paper describes how to handle long-lived transactions in a large-scale system where the typical Two-Phase Commit pattern results in a high degree of latency. Sagas are another great way to keep from "breaking the egg."</p> <p> <a href="">Chris Richardson</a> has done some excellent work on <a href="">how to implement Sagas</a>. I like to think of Sagas as a way to bring the service-level event-sourcing pattern to a solution-level of multiple interoperable services. Richardson points out that there is more than one way to implement Sagas for distributed systems including: </p> <ul> <li>Choreography-based (each service publishes their own saga events)</li> <li>Orchestration-based (each saga is managed by a central saga orchestrator)</li> </ul> <p>Sagas are a great way to "preserve the egg" when working with multiple services to solve a single problem.</p> <h3>And so...</h3> <p> When putting together your Web implementations, it is important to think about "preserving the egg" -- making sure that you can reverse any action in case of an unexpected system failure. Working to avoid "breaking the egg" adds a valuable level of resilience to your implementations. 
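As a recap, all three patterns above lean on the same reversible-action idea. Here is one hedged sketch of an orchestration-based saga (steps and names are invented for illustration -- Richardson's writing covers real implementations): each step pairs an action with a compensating action, and a failure replays the compensations newest-first.

```python
# An orchestration-based saga sketch (illustrative only): each step
# is an (action, compensate) pair; if any action fails, the
# orchestrator runs the compensations for completed steps in reverse.

def run_saga(steps):
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            # roll back: compensate finished steps, newest first
            for undo in reversed(completed):
                undo()
            return False
    return True

def fail():
    raise RuntimeError("out of stock")

log = []
steps = [
    (lambda: log.append("reserve credit"), lambda: log.append("release credit")),
    (fail, lambda: log.append("never runs")),
]
ok = run_saga(steps)
print(ok)   # -> False
print(log)  # -> ['reserve credit', 'release credit']
```

Even though the second step failed, the first step's work was cleanly reversed -- no mess, no broken egg.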
This can protect your services, your data, and your users from possibly catastrophic events that lead to "a mess" that is difficult and costly to fix.</p> <p>In this post, I shared three possible ways to do this at the network, service, and solution level. There are probably more. The most important thing to remember is that the Web is a highly-distributed, non-deterministic world. You can't <i>prevent</i> bad things from happening, but with enough planning and attention to detail, you can <i>survive</i> them.</p> <h4>now that this post is done, anyone hungry for an omelette?</h4>

Model Actions, Not Data, 13 Feb 2018 22:40:00 GMT

<blockquote> I've not blogged in quite a while here -- lots of reasons, none of them sufficient. So, today, i break the drought with a simple rant. Enjoy -- <b>mca</b> </blockquote> <p> <a href=""> <img src="" width="150" align="right" class="inline"/> </a> It is really encouraging to see so many examples of companies and even industries spending valuable resources (time, money, people) on efforts to define, implement, and advocate for "open" APIs. in the last few years i've seen a steady rise in the number of requests to me for guidance on how to go about this work of creating and supporting APIs, too. And i hear similar stories from other API evangelists and practitioners up and down the spectrum from startups to multi-nationals. And, like i said, it is encouraging to see. </p> <p> But… </p> <p> Even though i've been quite explicit in my general guidance through lots of media and modes (interviews, presentations, even full-length books), it is frustrating to see many people continue to make the same simple mistakes when going about the work of designing and deploying their APIs. so much so, that i've decided to "dis-hibernate" this blog in order to squawk about it openly. </p> <p> And chief among these frustrations are the repeated attempts to design APIs based on data models. Your database is not your API! Stop it. Just stop. 
</p> <p> Unless you are offering API consumers a SaaS (storage-as-a-service) you SHOULD NOT be using your data model as any guide for your API design. Not. At. All. </p> <blockquote> <p> Arthur Jensen (In a loud, angry voice): "You are messing with the primal forces of nature, Mr. Beale. And YOU! WILL! ATONE!" </p> <p> (Arthur pauses and leans in to whisper in Beale's ear) </p> <p> Arthur: "Am I getting through to you, Mr. Beale?" </p> <p align="right"> <a href="">Network (1976)</a> </p> </blockquote> <p> When you cast about for a handle on how to go about designing your API, the answer is straightforward and simple: Model your Actions. </p> <p> It can't be stated any more directly. Model your Actions. </p> <p> Don't get caught up in your current data model. Don't fall down the rabbit hole of your existing internal object model. Just. don't. </p> <p> Need more? Here's a handy checklist: </p> <ol> <li> Start with a list of actions to be completed. (<a href="">Jobs To Be Done</a>) -- if that sparks your brain. </li> <li> Determine all the data elements that must be passed when performing that action. </li> <li> Identify all the data elements to be returned when the action is performed. Be sure to account for partial or total failure of the attempted action. </li> <li> Rinse and repeat. </li> </ol> <blockquote> <a href="" title="@leonardr">Leonard Richardson</a> and I offer up a long-form description of this four-step plan in our book <a href="">"RESTful Web APIs"</a>. </blockquote> <p> Once you feel good about your list of actions and data points (input and output), collect related actions together. Each collection identifies a context. A boundary for a component. That might sound <a href="" title="Domain-Driven Design">familiar</a> to some folks. </p> <p> Now you have an API to implement and deploy. When implementing this API, you can sort out any object models or data models you want to use (pre-existing or not). But that's all hum-drum implementation detail. 
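to make the checklist concrete, here's a small sketch (every name below is hypothetical) of an action catalog built by following those four steps:

```python
# A sketch of an action catalog built with the four-step checklist
# above. Every name here is hypothetical -- the point is that the
# API is described by actions and their data elements, not by
# tables or object graphs.

actions = {
    "findCustomer": {
        "inputs": ["customerQuery"],
        "outputs": ["customerId", "customerName"],
        "errors": ["customerNotFound"],   # account for failure up front
    },
    "computeTax": {
        "inputs": ["customerId", "orderTotal"],
        "outputs": ["taxAmount"],
        "errors": ["unknownRegion"],
    },
}

# collecting related actions together identifies a context --
# a boundary for a component
contexts = {"ordering": ["findCustomer", "computeTax"]}

for name in contexts["ordering"]:
    spec = actions[name]
    print(f'{name}: {spec["inputs"]} -> {spec["outputs"]}')
```

notice there's not a table or object graph in sight -- just actions, inputs, outputs, and failure cases, grouped into a context.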
Internal stuff no one consuming the API should know or care about. </p> <p> All they care about are the possible actions to perform. And the data points to be supplied and returned. </p> <p> That's it. Simple. </p> <p> Of course, simple is not <i>easy</i>. </p> <blockquote> <p> "I would have written a shorter letter but I just didn't have the time." </p> <p align="right">-- various attributions.</p> </blockquote> <p> now, if a simple rant is not enough, i offer up an epigram i first shared publicly back in 2016: </p> <blockquote> <p> remember, when designing your #WebAPI, your data model is not your object model is not your resource model is not your message model #API360 </p> <p align="right"><a href="" title="@mamund">@mamund</a></p> </blockquote> <p> And if a single epigram is not motivation enough, how about <a href="" title="WADM">a whole slide deck</a> from APIStrat 2016 devoted to explaining that simple phrase? </p> <p> Now, armed (again) with this clear, simple advice, you're ready to avoid the debacle of data-centric industry-wide APIs. </p> <p> <b>just go out there and take action (not data)!</b> </p>

an online tutorial -- with friends, 02 Feb 2016 21:22:00 GMT

<p> <a href=""> <img src="" height="175" align="right" class="inline" /> </a> there are just a few days left before my live O'Reilly <a href="">Implementing Hypermedia</a> online tutorial on february 9th (11AM to 5PM EST). and i'm spending the day tweaking the slides and working up the six hands-on lessons. as i do this, i'm really looking forward to the interactive six-hour session. we'll be covering quite a bit in a single day, too. </p> <h4>the agenda</h4> <p>most of the material comes from my 2011 O'Reilly book <a href="">Building Hypermedia APIs with HTML5 and Node</a>. however, i've added a few things from <a href="">RESTful Web APIs</a> by <a href="">Leonard Richardson</a> and even brought in a few items from my upcoming book <a href="">RESTful Web Clients</a>. 
</p> <p>the high-level topics are:</p> <ul> <li>Designing a Hypermedia API</li> <li>Using the DORR Pattern for Coding Web APIs</li> <li>Understanding the Collection+JSON Media Type</li> <li>Building Hypermedia SPA Clients</li> </ul> <p> by the time the day is done, everyone will have a fully-functional Hypermedia API service up and running <b>and</b> a Cj-compliant general-purpose hypermedia client that works w/ <i>ANY</i> Web API that supports the Collection+JSON media type. </p> <h4>Greenville Hypermedia Day</h4> <p> the tutorial is geared toward both individual and team participation. i know some companies are arranging a full-day session with their own dev teams for this, too. and i just heard about a cool event in Greenville, SC for people who want to get the "team" spirit... </p> <p> i found out that <a href="" title="@bigbluehat">Benjamin Young</a> is hosting a <a href="">Hypermedia Day</a> down in Greenville, SC on feb 9th. If you're in the area, you can sign up, show up, join the tutorial in progress, and chat it up w/ colleagues. I know Benjamin from our work together for <a href="">RESTFest</a> and he's a good egg w/ lots of skills. He'll be doing a Q&amp;A during the breaks in the tutorial modules and i <i>think</i> he might have something planned as an "after-party" thing at the end of the day. </p> <p> if you're anywhere <i>near</i> Greenville, SC on feb-09, you should join Benjamin's <a href="">Hypermedia Day</a> festivities! </p> <h4>cut me some slack</h4> <p> i know most of the attendees are going "solo" -- just you, me, and the code -- that's cool. O'Reilly is hosting a live private Slack channel for everyone who signs up for the tutorial. I'll be around all day (and probably some time after that, too) so we can explore the exercises, work out any bugs, and just generally chat. 
</p> <h4>it's all ready!</h4> <p> so, as i wrap up the slides, the hands-on lessons, the github repo, and the heroku-hosted examples, i encourage you to <a href="">sign up</a> and join us for a full day of hypermedia, NodeJS, and HTML5. </p> <h4>see you there!</h4>

Dallas is my first stop in 2016, 07 Jan 2016 17:56:00 GMT

<p> <a title="By Herman Brosius (active 1870s). [Public domain], via Wikimedia Commons" href=""><img width="150" class="inline" align="right" alt="Old map-Dallas-1872" src=""/></a> the week of january 11th i'll be in Dallas for two events. this is my first trip of 2016 and i'm looking forward to catching up w/ my Dallas peeps. I'll be visiting with the great folks at <a href="" title="meetup page">DFW API Professionals</a> on Jan-13 and addressing a gathering of Dallas-area IT dignitaries at <a href="">AT&amp;T Stadium</a> during the day on the 14th. </p> <h4>DFW API Professionals</h4> <p> i've known <a href="" title="@traxo">Traxo</a>'s <a href="" title="@stevenscg">Chris Stevens</a> for several years and, when i learned i would be in Dallas in January, we were able to arrange an oppty for me to address his meetup group: <a href="" title="meetup page">DFW API Professionals</a>. I'll be talking about and demoing hypermedia API client coding patterns and taking questions, too. check out the event and, if you can, join me and the whole <a href="" title="@DFW_API_Pros">DFW API Pro</a> membership. </p> <h4>API Management Best Practices Discussion</h4> <p> on thursday, i'll be at the AT&amp;T Stadium with my fellow <a href="">CA</a> colleagues and folks from <a href="">Perficient</a> to join in the discussion on API mgmt and a look into the near future. i get to share the podium w/ CA SVP and Distinguished Engineer, <a href="" title="@kscottmorrison">Scott Morrison</a>. 
in a lively open discussion (no slideware), we'll be covering API design, deployment, DevOps, Microservices, and IoT with Perficient's Director of Emerging Platform Solutions, <a href="" title="@anneladzem">Annel Adzem</a>. stellar conversation, stunning view of the field -- what's not to like? </p> <h4>just the beginning</h4> <p> of course, this is just the start of my travels for 2016. i've already got the cities of Vancouver, Washington DC, E. Brunswick, Seoul, Tokyo, Melbourne, Sydney, San Francisco, Sao Paulo, Rio, Buenos Aires, and New York on my agenda. and that's just the first few months of 2016! </p> <p> gonna be another great year with the API Academy! if you're anywhere near those cities, keep in touch and i hope we meet up sometime soon. </p>

Microservice Style, 25 Nov 2015 00:08:00 GMT

<blockquote> With apologies to <a href="">McIlroy, Pinson, and Tague</a>. </blockquote> <p> <a href=""> <img src="" width="150" align="right" class="inline" /> </a> </p> <p> A number of maxims have gained currency among the builders and users of microservices to explain and promote their characteristic style: </p> <p> <b>(i)</b> Make each microservice do one thing well. To do a new job, build afresh rather than complicate old microservices by adding new features. </p> <p> <b>(ii)</b> Expect the output of every microservice to become the input to another, as yet unknown, microservice. Don't clutter output with extraneous information. Avoid strongly-typed or binary input formats. Don't insist on object trees as input. </p> <p> <b>(iii)</b> Design and build microservices to be created and deployed early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them. </p> <p> <b>(iv)</b> Use testing and deployment tooling (in preference to manual efforts) to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them. 
</p>

three keys to design-time governance: protocol, format, and vocabulary, 07 Nov 2015 17:11:00 GMT

<p> <a href="" title="Three Keys"> <img src="" width="150" align="right" class="inline" /> </a> this is my ring of keys -- just three of them: work, home, car. i've been focusing over the last couple years on <i>reducing</i>. cutting back. lightening my load, etc. and the keys are one of my more obvious examples of success. </p> <p> i've also been trying to lighten my load <i>cognitively</i> -- to reduce the amount of things i carry in my head and pare things down to essentials. i think it helps me focus on the things that matter when i carry fewer things around in my head. that's me. </p> <p> staring at my keys today led me to something that's been on my mind lately. something i am seeing quite often when i visit customers. the approach these companies use for governing their IT development lacks the clarity and focus of my "three keys." in fact, most of the time as i am reading these companies' governance documents, they make me <i>wince</i>. why? because they're bloated, over-bearing, and -- almost all of them -- making things worse, not better. </p> <h4>over-constraining makes everyone non-compliant</h4> <p> i am frequently asked to provide advice on design and implementation of API-related development programs -- most often APIs that run over HTTP. and, in that process, i am usually handed some form of "<a href="">Design-Time Governance</a>" (DTG) document that has been written in-house. sometimes it is just a rough draft. sometimes it is a detailed document running over 100 pages. but, while the details vary, there are general themes i see all too often. </p> <dl> <dt>Constraining HTTP</dt> <dd> almost every DTG approach i see lists things like HTTP methods (and how to use them), HTTP response codes (and what they mean), and HTTP Headers (including which new REQUIRED headers were invented for this organization). all carefully written. 
<b>and all terribly wrong</b>. putting limits on the use of standard protocols within your organization means every existing framework, library, and tool is essentially <i>non-compliant</i> for your shop. that's crazy. stop that! if your shop uses HTTP to get things done, just say so. don't try to re-invent, "improve", or otherwise muddle with the standard -- just use it. </dd> <dt>Designing URLs</dt> <dd> another thing i see in DTG documents is a section outlining the much-belabored and elaborate URL design rules for the organization. Yikes! this is almost always an unnecessary level of "<a href="">bike-shedding</a>" that can only hold you back. designing URLs for your org (esp. large orgs) is a <a href="">fool's errand</a> -- you'll never get it right and you'll never be done with it. just stop. there are more than enough <a href="">agreed standards</a> on what makes up a valid URL and that's all you need to worry about. you should <a href="">resist the urge</a> to tell people how many slashes or dashes or dots MUST appear in a URL. it doesn't improve anything. <blockquote> look, i know that some orgs want to use URL design as a way to manage routing rules -- that's understandable. but, again, resist the urge to tell everyone in your org which URLs they can use for now and all eternity. some teams may not rely on the same route tooling and will use different methods. some may not use routing tools at all. and, if you change tooling after five years, your whole URL design scheme may become worthless. stop using URLs as your primary routing source. </blockquote> </dd> <dt>Canonical Models</dt> <dd> i really get depressed when i see all the work people put into negotiating and defining "<a href="">canonical models</a>" for the organization. like URL designs, <a href="">this always goes badly</a> sooner or later. stop trying to get everyone/every-team to use the same models! instead, use the same message formats. 
i know this is hard for people to grasp (i've seen your faces, srsly) but i can't emphasize this enough. there are several message formats <i>specifically designed</i> for data transfer between parties. use them! the only shared agreement that you need is the message format (along with the data elements carried <i>in</i> the message). </dd> <dt>Versioning Schemes</dt> <dd> here's one that just never seems to go away -- rules and processes for creating "new versions" of APIs. these things are a waste of time. <b>the phrase "new version" is a euphemism for "breaking changes" and this should never happen</b>. when you build sub-systems that are used by other teams/customers you are making a promise to them that you won't break things or invalidate their work (at least you SHOULD be making that promise!). it is not rocket-science to make backward-compatible changes -- just do it. once you finally accept your responsibility for not breaking anyone using your API, you can stop trying to come up w/ schemes to tell people you broke your promise to them and just get on with the work of building great software that works for a long time. </dd> </dl> <p> so, stop constraining HTTP, stop designing URLs, stop trying to dictate shared models, and forget about creating an endless series of breaking changes. "What then," you might ask, "IS the proper focus of design-time governance?" "How can I actually <i>govern</i> IT systems unless I control all these things?" </p> <h4>three keys form the base of design-time governance</h4> <p> ok, let me introduce you to my "three keys of DTG". these are not the ONLY things that need the focus of IT governance, but they are the <i>bare minimum</i> -- the essential building blocks. the starting point from which all other DTG springs. </p> <dl> <dt>Protocol Governance</dt> <dd> <p> first, all IT shops MUST provide protocol-level governance. 
you need to provide clear guidance and control over which application-level protocols are to be used when interacting with other parts of the org, other sub-systems, etc. and it is as simple as saying which protocols are REQUIRED, RECOMMENDED, and OPTIONAL. for example... </p> <p> "Here at BigCo, Inc., all installed components that provide an API MUST support <a href="">HTTP</a>. These components SHOULD also support <a href="">XMPP</a> and MAY also support <a href="">CoAP</a>. Any components that fail to pass this audit will be deemed non-compliant and will not be promoted to production." </p> <blockquote> you'll notice the CAPITALIZED words here. these are all special words taken from the IETF's <a href="">RFC2119</a>. they carry particular meaning here and your DTGs SHOULD use them. </blockquote> </dd> <dt>Format Governance</dt> <dd> <p> another <i>essential</i> governance element is the message formats used when passing data between sub-systems. again, nothing short of clear guidance will do here. and there is no reason to invent your own message-passing formats when there are so many good ones available. for example... </p> <p> "All API data responses passed between sub-systems MUST support <a href="">HTML</a>. They SHOULD also support one of the following: <a href="">Collection+JSON</a>, <a href="">HAL</a>, <a href="">Siren</a>, or <a href="">UBER</a>. sub-systems MAY also support responses in <a href="">Atom</a>, <a href="">CSV</a>, or <a href="">YAML</a> where appropriate. When accepting data bodies on requests, all components MUST support <a href="">FORM-URLENCODED</a> and SHOULD support request bodies appropriate for related response formats (e.g. Collection+JSON, Siren, etc.). Any components that fail to pass this audit will be deemed non-compliant and will not be promoted to production." </p> <blockquote> you'll notice that my sample statement does not include TXT, JSON or XML as compliant API formats. why? 
because all of them suffer the same problem -- they are insufficiently structured formats. </blockquote> </dd> <dt>Vocabulary Governance</dt> <dd> <p> the first two keys are easy. have a meeting, argue with each other about which existing standards are acceptable and report the results. done. but, this last key (<a href="">Vocabulary Governance</a>) is the hard one -- the kind of work for which enterprise-level governance exists. the one that will likely result in lots of angry meetings and may hurt some feelings. </p> <p> there MUST be an org-level committee that governs all the data names and action names for IT data transfers. this means there needs to be a shared dictionary (or set of them) that is the <i>final arbiter</i> of what a data field is <i>named</i> when it passes from one sub-system to the other. <a href="">managing the company domain vocabulary</a> is <b>the most important job of enterprise-level governance</b>. </p> <blockquote> the careful reader will see that i am not talking about governing <i>storage models</i> or <i>object models</i> here -- just the names of data fields passed within messages between sub-systems. understanding this is most <i>critical</i> to the success of your IT operations. models are the responsibility of local sub-systems. passing data <i>between</i> those sub-systems is the responsibility of IT governance. </blockquote> </dd> </dl> <h4>what about all those "ilities"?</h4> <p> as i mentioned at the opening, these three keys form the <i>base</i> of a solid DTG. there are still many other <a href="">desirable properties</a> of a safe and healthy IT program including availability, reliability, security, and many more. this is not about an "either/or" decision ("Well, I guess we have to choose between Mike's three keys and everything else, right?" -- ROFL!). we can discuss the many possible/desirable properties of your IT systems at some point in the near future -- <i>after</i> you implement your baseline. 
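to make the vocabulary key a bit more concrete, here's a tiny sketch (all field names are invented) of the kind of check an org-level governance gate might run against messages passed between sub-systems:

```python
# A sketch of vocabulary governance in code (field names invented):
# cross-system messages are checked against the org-level shared
# dictionary, while each sub-system keeps its own internal models.

SHARED_VOCABULARY = {"customerName", "shippingAddress", "orderTotal"}

def unapproved_fields(message):
    # the governance check: every field name in a cross-system
    # message must come from the shared dictionary
    return sorted(set(message) - SHARED_VOCABULARY)

msg = {"customerName": "Mike", "custAddr": "123 Main St"}
print(unapproved_fields(msg))  # -> ['custAddr'], flagged by governance
```

note that the check says nothing about how any sub-system stores `custAddr` internally -- it only polices the names that cross the wire.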
</p> <p> so, there you have it. protocol, format, vocabulary. get those three right and you will be laying the important foundation for an IT shop that can retain stability without rigidity; that can adapt over time by adding new protocols, formats, and vocabularies without breaking existing sub-systems or ending up in a deep hole of <a href="">technical-debt</a>. </p> <h4>those are the keys to a successful design-time governance plan.</h4>

dftw - decoupled for the win, 11 Oct 2015 23:51:00 GMT

<p> <a title="By Daniel Schwen (Own work) [CC BY-SA 4.0 (], via Wikimedia Commons" href=""> <img width="150" alt="Train coupling" src="" align="right" class="inline"/> </a> it doesn't matter if your service is <a href="">"micro"</a> or <a href="">"oriented"</a>, if it's <a href="">tightly coupled</a> -- especially if your service is on the Web -- you're going to be stuck nursing your service (and all its consumers) through <a href="">lots of pain</a> every time a little change happens (<a href="">addresses</a>, <a href="">operations</a>, <a href="">arguments</a>, <a href="">process-flow</a>). and that's just <a title="By from Tiverton, UK [CC BY-SA 2.0 (], via Wikimedia Commons" href="!_(3225490111).jpg">needless pain</a>. needless for you and for anyone attempting to consume it. </p> <h4>tight coupling is trouble</h4> <p> tight coupling to any external component or service -- what i call a <i>fatal dependency</i> -- is big trouble. you don't want it. run away. how do you know if you have a fatal dependency? if some service or component you use changes and your code breaks -- that's <b>fatal</b>. it doesn't matter what code framework, software pattern, or architectural style you are using -- breakage is fatal -- stop it. 
</p> <h4>the circuit</h4> <p> you can stave off fatalities by wrapping calls to dependents in what <a href="">Nygard</a> calls in his book <a href="">Release It!</a> a <a href="">Circuit Breaker</a> but that requires you <i>also</i> have either 1) an alternate service provider (or set of them) or, 2) you write your code such that the unavailable dependency doesn't mean your code is essentially unusable (<i>"Sorry, our bank is unable to perform deposits today."</i>). and the Circuit Breaker pattern is not meant for use when services introduce <b>breaking changes</b> anyway -- it's for cases when the dependent service is <i>temporarily</i> unavailable. </p> <h4>a promise</h4> <p> you're much better off using services that make a promise to their consumers that any changes to that service will be non-breaking. IOW, changes to the interface will be only <i>additive</i>. no existing operations, arguments or process-flows will be taken away. this is not really hard to do -- except that existing tooling (code editors, build-tools, and testing platforms) make it really <i>easy</i> to break that promise! </p> <blockquote> there are lots of refactoring tools that make it hard to break existing <i>code</i>, but not many focus on making it hard to break existing public <i>interfaces</i>. and it's rare to see testing tools that go 'red' when a public interface changes even though they are great at catching changes in private function signatures. bummer. </blockquote> <p>so you want to use services that keep the "no breaking changes" pledge, right? that means you also want to deploy services that make that pledge, too. </p> <h4>honoring the pledge</h4> <p> but how do you honor this "no breaking changes" pledge and still update your service with new features and bug fixes? it turns out that isn't very difficult -- it just takes some discipline. 
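as an aside, the Circuit Breaker idea from the section above can be sketched in a few lines (thresholds and messages invented here -- Nygard's book covers the real pattern, including half-open states and timeouts):

```python
# A simplified Circuit Breaker sketch (thresholds invented): after
# enough consecutive failures the breaker "opens" and calls fail
# fast instead of hammering a temporarily unavailable dependency.

class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success closes the circuit again
        return result

breaker = CircuitBreaker()

def flaky():
    raise IOError("service unavailable")

for _ in range(3):
    try:
        breaker.call(flaky)
    except IOError:
        pass

try:
    breaker.call(flaky)  # the breaker is now open...
except RuntimeError as e:
    print(e)             # -> circuit open: failing fast
```

again: this only buys you survival through <i>temporary</i> outages -- it does nothing for breaking interface changes, which is why the pledge below matters more.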
</p> <p>here's a quick checklist for implementing the pledge:</p> <dl> <dt>promise operations, not addresses</dt> <dd> service providers SHOULD promise to support a named operation (<code>shoppingCartCheckOut</code>, <code>computeTax</code>, <code>findCustomer</code>) instead of promising exact addresses for those operations (<code></code>). on the Web you can do that using properties like <code>rel</code> or <code>name</code> or <code>id</code> that have predetermined values that are well-documented. when this happens, clients can "memorize" the <i>name</i> instead of the address. </dd> <dt>promise message formats, not object serializations</dt> <dd> object models are bound to change -- and change often for new services. trying to get all your service consumers to learn and track all your object model changes is just plain wrong. and, even if you <i>wanted</i> all consumers to keep up with your team's model changes, that means your feature velocity is tied to the slowest consumer in your ecosystem - blech! instead, promise generic message formats that don't require an understanding of object models. formats like <a href="">VoiceXML</a> and <a href="">Collection+JSON</a> are specifically designed to support this kind of promise. <a href="">HTML</a>, <a href="">Atom</a>, and other formats can be used in a way that maintains this promise, too. clients can now "bind" to the message format, not the object model -- changes to the model on the service don't leak out to the consumer. when this happens, adding new data elements in the response will not break clients. </dd> <dt>promise transitions, not functions</dt> <dd> service providers SHOULD treat all public interface operations as message-based <i>transitions</i>, not fixed functions with arguments. that means you need to give up on the classic <a href="">RPC</a>-style implementation patterns so many tools lead you into. 
instead, publish operations that <i>pass messages</i> (using registered formats like <code>application/x-www-form-urlencoded</code>) that contain the arguments currently needed for that operation. when this happens, clients only need to "memorize" the argument names (all pre-defined in well-written documentation) and then pay attention to the transition details that are supplied in service responses. some "old skool" peeps call these transition details FORMs, but it doesn't matter what you call them as long as you promise to <i>use</i> them. </dd> <dt>promise dynamic process-flows, not static execution chains</dt> <dd> services SHOULD NOT promise fixed-path workflows ("I promise you will always execute steps X, then A, then Q, then F, then be done."). this just leads consumers to hard-code that nonsense into their app and break when you want to modify the workflow due to new business processes within the service. instead, services SHOULD promise operation identifiers (see above) along with a limited set of process-flow identifiers (<code>start</code>, <code>next</code>, <code>previous</code>, <code>restart</code>, <code>cancel</code>, <code>done</code>) that work with <i>any</i> process-flow you need to support. when this happens, clients only need to "memorize" the generic process-flow keywords and can be coded to act accordingly. </dd> </dl> <h4>not complicated, just hard work</h4> <p> you'll note that all four of the above promises are not <a href="">complicated</a> -- and certainly not <a href=""><i>complex</i></a>. but they do represent some <a title="By J. Howard Miller, artist employed by Westinghouse, poster used by the War Production Co-ordinating Committee [Public domain], via Wikimedia Commons" href="!.jpg">hard work</a>. it's a bummer that tooling doesn't make these kinds of promises easy. in fact, most tools do the opposite.
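</p> <p> to make the checklist concrete, here's a minimal client-side sketch in Python. the response shape here is hypothetical (loosely Collection+JSON-flavored) and the helper names are mine -- the point is that the client "memorizes" only the operation name and the argument names, never the address or the object model: </p>

```python
# a hypothetical hypermedia-style response; the service promises the
# operation *names* and the argument names, never the exact addresses
response = {
    "links": [
        {"name": "shoppingCartCheckOut", "href": "/carts/checkout"},
    ],
    "queries": [
        {"name": "findCustomer", "href": "/search/customers",
         "data": [{"name": "familyName"}, {"name": "givenName"}]},
    ],
}

def resolve(doc, operation):
    """Find a promised operation by name; the service is free to move
    the href between releases without breaking this client."""
    for item in doc.get("queries", []) + doc.get("links", []):
        if item.get("name") == operation:
            return item
    raise LookupError("operation not offered: " + operation)

def build_request(doc, operation, **args):
    """Fill in only the argument names the transition advertises."""
    op = resolve(doc, operation)
    fields = {d["name"] for d in op.get("data", [])}
    body = {k: v for k, v in args.items() if k in fields}
    return op["href"], body

href, body = build_request(response, "findCustomer", familyName="Jones")
# href == "/search/customers", body == {"familyName": "Jones"}
```

<p> a service that adds new operations, adds new data elements, or moves an address keeps a client like this working -- that's the pledge in action. </p> <p> meanwhile, most tools push the other way: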
they make address-based operations, object serializations, fixed-argument functions, and static execution chains easy -- in some tools these are the defaults and you just need to press "build and deploy" to get it all working. BAM! </p> <p> so, yeah. this job is not so easy. that's why you need to be diligent and disciplined for this kind of work. </p> <h4>eliminating dependencies</h4> <p> and -- back to the original point here -- decoupling addresses, operations, arguments, and process-flows means you eliminate lots of fatal dependencies in your system. it is now safer to make changes in components without so much worry about unexpected side-effects. and this will be a <b>big deal for all you microservice fans out there</b> because deploying dozens of independent services explodes your <i>interface-to-operation ratio</i> and it's just brutal to do that with tightly-coupled interfaces that <b>fail to support these promises</b> inherent in a loosely-coupled implementation. </p> <h4>for the win</h4> <p> so, do not fear. whether you are a "microservice" lover or a "service-oriented" fan, you'll do fine as long as you make and keep these four promises. and, if you're a consumer of services, you now have some clear measures on whether the service you are about to "bind" to will result in fatalities in your system. </p>amazon, W3C and Tomorrow's Internet, 02 Oct 2015 21:36:00 GMT<p> <a href=""> <img src=";w=840&amp;h=485&amp;crop=1" align="right" class="inline" width="150"/> </a> i'll make this brief and to the point: </p> <p> Amazon's <a href="">decision</a> to bar Google- and Apple-TV products from its store is both disturbing and, IMO, an indication that the <a href="">W3C</a> is failing to live up to one of its <a href="">key principles</a>. </p> <h4>Web For All</h4> <p> one of the key principles for the W3C is "Web for All": </p> <blockquote> The social value of the Web is that it enables human communication, commerce, and opportunities to share knowledge.
One of W3C's primary goals is to make these benefits available to all people, whatever their hardware, software, network infrastructure, native language, culture, geographical location, or physical or mental ability. </blockquote> <p> when a single vendor has the power to block others' access to competitive products, the Web is a place that doesn't live up to this principle. I know that <a href="">Tim Berners-Lee</a> is not <i>responsible</i> for the way Amazon operates. Neither is W3C CEO <a href="">Jeffrey Jaffe</a>. however, this unfolding battle to pre-empt Web users' ability to access any content anywhere is just another iteration of the same battle that <a href="">timbl</a> (Berners-Lee's handle) has called out numerous times at the network level: what he calls the <a href="">Battle For The Net</a>. </p> <h4>users over all others.</h4> <p> i know the W3C has attempted to <a href="">deal with this</a> in the past -- to mixed reviews. the very fact that Tim penned that piece on DRM and the Web shows that the W3C is aware of the issue. and a key part of that (now two-year-old) position paper was the notion of <a href="">"users over all others"</a>: </p> <blockquote> In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. </blockquote> <p> but events in this space seem to continue to run ahead of the W3C's ability to deal with them. and that's a big problem. </p> <h4>i know it's hard, but it is important</h4> <p> i understand that reining in corporate greed on the open web is a tough job. and someone who has the best interests of users above all others needs to be <i>leading</i> the discussion. not following it. and certainly not silent while others establish the "operating rules" for yet another walled garden of profit built on the backbone of the free and <a href="">Open Web</a>. </p> <p> enabling commerce is a good thing. </p> <p> permitting exploitation and profiteering is not.
</p> <h4>Tussle in Cyberspace</h4> <p> there is a great paper that deals with some of these issues: <a href="">Tussle in Cyberspace: Defining Tomorrow's Internet</a>. essentially: </p> <blockquote> This paper explores one important reality that surrounds the Internet today: different stakeholders that are part of the Internet milieu have interests that may be adverse to each other, and these parties each vie to favor their particular interests. We call this process “the tussle”. </blockquote> <p> the "tussle" is important. not because it <i>happens</i>, but because one of the key values of an open internet (not just Web) is that the Internet can be crafted (through standards) to "level the playing field" for all. this is why "Web For All" is important. this is why the Battle for the Net is important. </p> <p> and it is the same for video content, too. </p> <h4>why standards exist</h4> <p> we don't need to "regulate" vendors, we need to continue to make sure the tussle is fair to all. that's why standards exist -- <i>to level the playing field</i>. </p> <p> and, IMO, that's why the W3C exists. </p> <p> and that's why the current news is disturbing to me. </p>three years and counting, 30 Jun 2015 14:17:00 GMT<p> <a href="" title="API Academy"> <img src="" align="right" class="inline" /> </a> it was three years ago <a href="" title="Mike Amundsen, Principal API Architect">this month</a> that i joined <a href="" title="@MattMcLartyBC">Matt McLarty</a> and <a href="" title="@mitraman">Ronnie Mitra</a> to form the <a href="" title="API Academy">API Academy</a>. and i've never regretted a minute of it. </p> <h4>Layer7's brain child</h4> <p> the idea for the API Academy came out of the core leadership of what was then known as <a href="" title="Layer7">Layer7</a>. basically, it was the chance to pull together some of the great people involved in APIs for the Web and focus on promoting and supporting API-related technologies and practices. 
from the very beginning it was decided that the API Academy would focus on "big picture stuff" -- not a particular product or technology segment. and we dove in with both feet. </p> <p> my first talk <a href="">as part of the Academy</a> was in June 2012 on hypermedia and node.js at QCon NYC. Ronnie and i finally met face-to-face later that summer at the Layer7 office in Vancouver. we both talked at <a href="">RESTFest 2012</a>, where Layer7 became a key sponsor of that event. in December of 2012, Ronnie and I were proud to join <a href="" title="@medjawii">Mehdi Medjaoui</a> at the very first <a href="">API Days in Paris</a>. and we've been going strong ever since. </p> <h4>great people</h4> <p> the API Academy is an amazing group of people. over the last three years, along w/ Matt and Ronnie, we've had the chance to host <a href="" title="@intalex">Alex Gaber</a> and <a href="" title="@hlgr360">Holger Reinhardt</a> -- both of whom have moved on to other things -- and <a href="" title="@inadarei">Irakli Nadareishvili</a>, who came to us from his work at National Public Radio here in the US. the opportunity to work side-by-side with people of this caliber is a gift and i am happy to be a part of it. </p> <h4>CA Technologies</h4> <p> since i joined the API Academy, Layer7 has merged w/ <a href="" title="@CAInc">CA Technologies</a> and this has given us the chance to expand our reach and increase our contact w/ experienced people from all over the world. since joining w/ CA we've had the chance to travel to Australia, Hong Kong, Tokyo, Seoul, Beijing, and several cities in Eastern Europe. later this year there will be visits to Shanghai and Rio de Janeiro along w/ dozens of cities in the US and Europe. along the way the API Academy continues to meet w/ CA product teams and leadership, offering to assist in any way we can to continue the same mission we had at our founding.
</p> <h4>Help People Build Great APIs for the Web</h4> <p> i've worked lots of places, with diverse teams, in all sorts of cultures. i have to say that my experience w/ the API Academy and w/ CA continues to be one of the most rewarding and supportive i've ever enjoyed. i am very lucky that i get to do the kind of work i love w/ people that challenge my thinking, support my experimenting, and offer excellent opportunities to learn and grow along the way. </p> <p> the last three years have been full of surprises and i can't wait to see what the next three years bring. i am truly a <i><b>#luckyMan</b></i> </p> the next level in Web APIs, 25 May 2015 21:07:00 GMT<p> <a href="" title="Description, Discovery, and Profiles : The Next Level in Web APIs"> <img src="" align="right" class="inline" width="150"/> </a> i'm very proud to announce that <a href="">InfoQ</a> has just released a new series that I helped edit. the series is called <a href="">Description, Discovery, and Profiles: The Next Level in Web APIs</a> and it features an excellent set of contributing authors, including <a href="" title="@mitraman">Ronnie Mitra</a> of <a href="">API Academy/CA</a>, <a href="" title="@mikegstowe">Mike Stowe</a> with <a href="">Mulesoft</a>, <a href="">Kin Lane</a> of <a href="">API Evangelist</a> fame, and <a href="" title="@fosrias">Mark Foster</a> from <a href="">Apiary</a>. it also includes interviews with <a href="">Swagger</a> creator <a href="" title="@fehguy">Tony Tam</a> and <a href="" title="">Profile RFC</a> editor <a href="" title="@dret">Erik Wilde</a>. and it's packed with material covering what i think are three key patterns/technologies in the Web API space: </p> <dl> <dt>Description</dt> <dd> "The ability to easily describe APIs including implementation details such as resources and URLs, representation formats (HTML, XML, JSON, etc.), status codes, and input arguments in both a human- and machine-readable form.
There are a few key players setting the pace here." </dd> <dt>Discovery</dt> <dd> "Searching for, and selecting Web APIs that provide the desired service (e.g. shopping, user management, etc.) within specified criteria (e.g. uptime, licensing, pricing, and performance limits). Right now this is primarily a human-driven process but there are new players attempting to automate selected parts of the process." </dd> <dt>Profiles</dt> <dd> "Long a focus of librarians and information scientists, 'Profiles' that define the meaning and use of vocabulary terms carried within API requests and responses are getting renewed interest for Web APIs. Still an experimental idea, there is some evidence vendors and designers are starting to implement support for Web API Profiles." </dd> </dl> <h4>lots of vendors and technologies here</h4> <p> this is a pretty wide-ranging set of topics and lots of vendors and technologies are highlighted over the next seven articles. some of them include: </p> <ul> <li><a href="">Swagger</a></li> <li><a href="">RAML</a></li> <li><a href="">API Blueprint</a></li> <li><a href="">WSDL</a></li> <li><a href="">Visual Studio</a></li> <li><a href="">Eclipse</a></li> <li><a href="">SmartBear</a></li> <li><a href="">I/O Docs</a></li> <li><a href="">API Commons</a></li> <li><a href="">Programmable Web's API Directory</a></li> <li><a href="">3Scale</a></li> <li><a href="">Mashery's API Network</a></li> <li><a href="">Apache Zookeeper</a></li> <li><a href="">HashiCorp Consul</a></li> <li><a href="">CoreOS etcd</a></li> <li><a href=""></a></li> <li><a href="">XMDP</a></li> <li><a href="">DCAP</a></li> <li><a href="">ALPS</a></li> <li><a href="">Rapido API Designer</a></li> <li><a href="">Spring Data</a></li> <li><a href="">RDF</a></li> </ul> <p> and that's just in the <b>first</b> article in the series! 
</p> <p> here's a quick rundown of all the articles that will be released between late May and early July: </p> <h4>Description, Discovery, and Profiles: A Primer</h4> <p> this article takes a look at several formats and key vendors, and identifies the opportunities and challenges in this fast-moving portion of the Web API field. </p> <h4>From Doodles to Delivery : An API Design Process</h4> <p> Ronnie Mitra investigates what good design is and how using Profiles along with an iterative process can help us achieve it. </p> <h4>The Power of RAML</h4> <p> Mike Stowe introduces us to the RAML format, reviews available uses and tools, and explains why Uri Sarid, the creator of RAML, wanted to push beyond our current understandings and create a way to model our APIs before even writing one line of code. </p> <h4>APIs with Swagger : An Interview with Reverb's Tony Tam</h4> <p> I talk to founder and inventor Tony Tam about the history, and the future, of one of the most widely-used API Description formats today: Swagger. </p> <h4>The APIs.json Discovery Format: Potential Engine in the API Economy</h4> <p> In this piece, Kin Lane describes his APIs.json API discovery format, which can provide pointers to available documentation, licensing, and pricing for existing Web APIs. </p> <h4>Profiles on the Web: An Interview with Erik Wilde</h4> <p> In early April 2015, Erik agreed to sit down with InfoQ to talk about Profiles, Description, Documentation, Discovery, his Sedola project, and the future of Web-level metadata for APIs. </p> <h4>Programming with Semantic Profiles : In the land of magic strings, the profile-aware is king.</h4> <p> Mark Foster -- one of the editors of the ALPS specification -- explains what semantic profiles are and how they can transform the way Web APIs are designed and implemented.
</p> <h4>A Resource Guide to API Description, Discovery, and Profiles</h4> <p> To wrap up the series, we offer a listing of the key formats, specifications, tools, and articles on API Description, Discovery, and Profiles for the Web. </p> <h4>looking forward to the weekly releases</h4> <p> it was a pleasure working with such a distinguished group of authors and practitioners in this very important space and i am looking forward to the continued releases between late May and early July. i'm also looking forward to feedback and discussion from readers of the series. </p> <p> the Web is a dynamic and fast-moving space and it should be interesting to keep an eye on this "meta-level" of the API ecosystem for some time to come. </p>