Home
been quite a while since i blogged here.
let's see if this gets picked up by feediverse and posted to my mastodon instance account.
who knows. this might get to be a habit again...
i'm happy to announce that, starting in June of this year, I'll be working as an API Strategy Advisor with MuleSoft. this gives me a chance to team up w/ my long-time friend and colleague, Matt McLarty, and to renew acquaintances w/ a handful of really talented and experienced API professionals at MuleSoft.
API Strategy
Matt and I (along with Irakli and Ronnie) worked together on the Microservices Architecture book and it will be good to be working with him again. that book, and a more recent one -- Continuous API Management (with Mehdi and Erik) -- focus on the steps beyond API as a "thing" and reflect a trend i see growing in the API community. that trend is to view APIs as a strategy for accomplishing business objectives.
as few as five years ago, just having an API program could set you apart from your competition in the market. however, that is no longer the case. now, it is important that the APIs you spend your time and money to build, test, deploy, and support actually provide a measurable business advantage. that's where an API Strategy comes into play.
and companies that can enable that API Strategy -- companies that not only understand it but can also help others adopt it -- are the companies i am anxious to work with, share, and learn from. and that is why today i am honored and excited to be able to begin working with MuleSoft and promote this "next level" approach to APIs.
MuleSoft Connect
over the coming weeks and months, Matt and I will be working to collect and share API Strategy experiences from companies of all sizes from around the world. we'll be taking the time to meet with and talk to industry leaders as well as promising startups in order to learn how they are using APIs to go to the "next level" and we'll be working to consolidate and share their knowledge and experience at events across the globe.
along these lines, MuleSoft will be hosting a series of events -- MuleSoft CONNECT -- in cities across North America, Europe, and Asia. Matt and I will be at the next CONNECT event in San Francisco later this month. you can also check the ongoing schedule to see when we'll be in a city near you. i'm looking forward to these opportunities to share and learn about the role API Strategy is playing in the community and hope to see you there.
2019 and Beyond
of course, strategy is just one step in the process of identifying, implementing, evaluating, and improving your organization's business and the IT ecosystem that supports it. and this is not an overnight process. it takes time, persistence, dedication, and patience. i'm excited to be able to join MuleSoft in helping promote this approach and am looking forward to seeing how it can advance the role of APIs in businesses worldwide, not just this year but for many years to come.
When I plan out an implementation for the Web, one of the things I think about is the problem of "breaking eggs." One great example of this is the old adage, "You can't make an omelette without breaking some eggs." That's cute. It reminds us that there are times in our lives when we need to commit. When we need to forge ahead, even if some people might disagree, even if there seems to be "no turning back."
However, this "omelette" adage is not what I mean when I think about Web implementations and eggs.
Instead, I think about entropy and how you cannot 'unscramble' an egg. I won't go into the physics or philosophical nuances of this POV except to say, when I am working on a web implementation I work very hard to avoid 'breaking any eggs' since it will be quite unlikely that I'll ever be able to put those eggs back together again.
I don't want my Web solution to end up like Humpty Dumpty!
Web Interactions as Eggs
The web is a virtual world. It is highly distributed and non-deterministic -- much like our physical world. We can't know all the influences and their effects on us. We can only know our immediate surroundings and surmise the influences based on what we observe locally. The world is a random place.
So each time we fill out a form and press "send", each time we click on a link, we're taking a risk and stepping into the unknown. For example:
- Is there really a page at the other end of this link or is there a dreaded 404 waiting for me at the other end?
- Have I filled out the form correctly or will I get a 400 error instead?
- Or, have I filled out the form correctly, only to encounter a 500-level server error?
- Finally, what if I've filled out the form, pressed "send" and never get a response back at all? what do I do now?
But What Can Be Done?
When I set out to implement a solution on the Web, I want to make sure to take these types of outcomes into account. I say "take them into account" because the truth is that I cannot prevent them. Most of the time these kinds of failures are outside my control. However, using the notion of Safety-I and Safety-II from Erik Hollnagel, I can adopt a different strategy: while I can't prevent system failures, I can work to survive them.
So how can I survive unanticipated and un-preventable errors in the system? I can do this by making sure each interaction is not an "egg-breaking" event. An "egg-breaker" is an action that cannot be un-done, cannot be reversed. In the web world, this is an interaction that has only two outcomes: "success or a mess."
A great example of the sad end of the "success-or-a-mess" moment is an action like "Delete All Data." We've probably all experienced a moment like this. Most likely we've answered "yes" or "OK" to a confirmation dialog and the moment we did, we realized (too late) that we "chose poorly." There was no easy way to fix our mistake. We had a mess on our hands.
The obvious answer to this kind of mess is to support an "undo" action to reverse the "do." This turns an "egg-breaking" event into an "egg-preserving" event. And that's what I try to do in as many of my Web implementations as possible -- preserve the egg.
Let's look at some other ways to prevent breaking eggs when implementing solutions in a non-deterministic world...
Network-Level Idempotency
One of the ways you can avoid "a mess" is to make sure your actions are idempotent at the network level. That means they are repeatable and you get the same results every time. Think of an SQL UPDATE statement. You can update the firstName field with the value "Mike" over and over, and the firstName field will always have the same value: "Mike".
In the HTTP world, both the PUT and DELETE methods are designed as idempotent actions. This means, in cases where you send a PUT and never receive a response, you can repeat that action without worry of "breaking the egg."
Relying on network-level idempotency is very important when you are creating autonomous services that interact with each other without direct human intervention. Robots have a hard time dealing with non-idempotent failures.
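To make the idea concrete, here is a minimal sketch in Node-style JavaScript of a client that retries a PUT when no response ever comes back. This is my own illustration, not code from any service mentioned here; the URL and record shape are made up.

```javascript
// Minimal sketch: retrying an idempotent PUT.
// Because PUT is defined as idempotent, repeating the same request after a
// timeout or lost response cannot "break the egg" -- the record ends up in
// the same state no matter how many times the request lands.
// The URL and body below are hypothetical.

async function savePerson(person, attempts = 3) {
  for (let i = 1; i <= attempts; i++) {
    try {
      const response = await fetch(`https://api.example.org/persons/${person.id}`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(person) // send the full representation every time
      });
      if (response.ok) return response;
    } catch (err) {
      // network failure or no response at all -- safe to simply try again
      console.warn(`attempt ${i} failed: ${err.message}`);
    }
  }
  throw new Error("could not save person after retries");
}

savePerson({ id: "123", firstName: "Mike" })
  .then(() => console.log("saved"))
  .catch((err) => console.error(err.message));
```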
Service-Level Event Sourcing
At the individual service level, a good way to "preserve the egg" is to make all writes (actions that change the state of things) reversible. Martin Fowler shows how this can be done using Event Sourcing. Event Sourcing was explained to me by Capital One's Irakli Nadareishvili as a kind of "debit-and-credit" approach to data updates. You arrange writes as actions that can be reversed by another write. Essentially, you're not "un-doing" something, you're "re-doing" it.
Fowler shows that, by implementing state changes using Event-Sourcing, you get several benefits including:
- detailed change logs
- the ability to run a complete rebuild
- the ability to run a "temporal query" (based on a set date/time)
- the power to replay past transactions (to fix or analyze system state)
I like to say that, with Event-Sourcing, you can't reverse the arrow of time, but you can move the cursor.
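As a rough illustration of the "debit-and-credit" idea, here is a small sketch of an account balance kept as an append-only list of events. This is my own hypothetical example, not code from Fowler's write-up: nothing is ever overwritten, and a mistake is corrected by appending a compensating event.

```javascript
// Sketch of event sourcing as "debit-and-credit" updates.
// State is never edited in place; it is derived by replaying the event log.

const events = []; // append-only event log

function record(type, amount) {
  events.push({ type, amount, at: new Date().toISOString() });
}

// replay the log (or a prefix of it) to compute state
function balance(upTo = events.length) {
  return events
    .slice(0, upTo)
    .reduce((total, e) => (e.type === "credit" ? total + e.amount : total - e.amount), 0);
}

record("credit", 100);
record("debit", 40);
record("credit", 40); // "reverses" the debit -- a re-do, not an un-do

console.log(balance());   // 100 -- current state
console.log(balance(2));  // 60  -- a simple "temporal query" against the past
console.log(events);      // the detailed change log comes for free
```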
Solution-Level Sagas
In 1987, Garcia-Molina and Salem published a paper simply titled "Sagas." This paper describes how to handle long-lived transactions in a large-scale system where the typical Two-Phase Commit pattern results in a high degree of latency. Sagas are another great way to keep from "breaking the egg."
Chris Richardson has done some excellent work on how to implement Sagas. I like to think of Sagas as a way to bring the service-level event-sourcing pattern up to a solution level made up of multiple interoperable services. Richardson points out that there is more than one way to implement Sagas for distributed systems, including:
- Choreography-based (each service publishes their own saga events)
- Orchestration-based (each saga is managed by a central saga orchestrator)
Sagas are a great way to "preserve the egg" when working with multiple services to solve a single problem.
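As a rough, orchestration-style sketch (the services and function names here are hypothetical, not Richardson's code), a saga runs a series of local steps and, if one fails, runs the compensating actions for the steps that already succeeded.

```javascript
// Sketch of an orchestration-based saga: each step has a compensating
// action; a failure part-way through runs the compensations for the
// steps that already completed, in reverse order.

async function runSaga(steps) {
  const completed = [];
  try {
    for (const step of steps) {
      await step.action();
      completed.push(step);
    }
  } catch (err) {
    for (const step of completed.reverse()) {
      await step.compensate(); // a new write that reverses the old one
    }
    throw err;
  }
}

// stand-in local transactions for hypothetical services
const reserveInventory = async () => console.log("inventory reserved");
const releaseInventory = async () => console.log("inventory released");
const chargeCustomer   = async () => { throw new Error("payment declined"); };
const refundCustomer   = async () => console.log("charge refunded");

runSaga([
  { action: reserveInventory, compensate: releaseInventory },
  { action: chargeCustomer,   compensate: refundCustomer }
]).catch((err) => console.log(`saga compensated: ${err.message}`));
```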
And so...
When putting together your Web implementations, it is important to think about "preserving the egg" -- making sure that you can reverse any action in case of an unexpected system failure. Working to avoid "breaking the egg" adds a valuable level of resilience to your implementations. This can protect your services, your data, and your users from possibly catastrophic events that lead to "a mess" that is difficult and costly to fix.
In this post, I shared three possible ways to do this at the network, service, and solution level. There are probably more. The most important thing to remember is that the Web is a highly-distributed, non-deterministic world. You can't prevent bad things from happening, but with enough planning and attention to detail, you can survive them.
now that this post is done, anyone hungry for an omelette?
I've not blogged in quite a while here -- lots of reasons, none of them sufficient. So, today, i break the drought with a simple rant. Enjoy -- mca
It is really encouraging to see so many examples of companies and even industries spending valuable resources (time, money, people) on efforts to define, implement, and advocate for "open" APIs. in the last few years i've seen a steady rise in the number of requests to me for guidance on how to go about this work of creating and supporting APIs, too. And i hear similar stories from other API evangelists and practitioners up and down the spectrum from startups to multi-nationals. And, like i said, it is encouraging to see.
But…
Even though i've been quite explicit in my general guidance through lots of media and modes (interviews, presentations, even full-length books), it is frustrating to see many people continue to make the same simple mistakes when going about the work of designing and deploying their APIs. so much so that i've decided to "dis-hibernate" this blog in order to squawk about it openly.
And chief among these frustrations is the repeated attempts to design APIs based on data models. Your database is not your API! Stop it. Just stop.
Unless you are offering API consumers a SaaS (storage-as-a-service) you SHOULD NOT be using your data model as any guide for your API design. Not. At. All.
Arthur Jensen (in a loud, angry voice): "You are messing with the primal forces of nature, Mr. Beale. And YOU! WILL! ATONE!"
(Arthur pauses and leans in to whisper in Beale's ear)
Arthur: "Am I getting through to you, Mr. Beale?"
When you cast about for a handle on how to go about designing your API, the answer is straightforward and simple: Model your Actions.
It can't be stated any more directly. Model your Actions.
Don't get caught up in your current data model. Don't fall down the rabbit hole of your existing internal object model. Just. don't.
Need more? Here's a handy checklist:
- Start with a list of actions to be completed (think Jobs To Be Done, if that sparks your brain).
- Determine all the data elements that must be passed when performing that action.
- Identify all the data elements to be returned when the action is performed. Be sure to account for partial or total failure of the attempted action.
- Rinse and repeat.
Leonard Richardson and I offer up a long-form description of this four-step plan in our book "RESTful Web APIs".
Once you feel good about your list of actions and data points (input and output), collect related actions together. Each collection identifies a context. A boundary for a component. That might sound familiar to some folks.
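To make that concrete, here is a small, hypothetical sketch (the action names, fields, and context are mine, not from the book) of what the output of the checklist above might look like: each action with its inputs and outputs, and related actions collected into a single context.

```javascript
// Sketch: describe the API in terms of actions, not tables or objects.
// Each entry lists the data that must be passed in and the data that
// comes back -- including what comes back on failure.

const actions = [
  {
    name: "submitOrder",
    inputs:  ["customerId", "items", "shippingAddress"],
    returns: ["orderId", "status", "estimatedDelivery"],
    onError: ["status", "reason", "retryAllowed"]
  },
  {
    name: "cancelOrder",
    inputs:  ["orderId", "reason"],
    returns: ["orderId", "status"],
    onError: ["status", "reason"]
  }
];

// related actions collected together identify a context --
// everything above belongs to a single "ordering" component
const orderingContext = { name: "ordering", actions };

console.log(JSON.stringify(orderingContext, null, 2));
```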
Now you have an API to implement and deploy. When implementing this API, you can sort out any object models or data models you want to use (pre-existing or not). But that's all hum-drum implementation detail. Internal stuff no one consuming the API should know or care about.
All they care about are the possible actions to perform. And the data points to be supplied and returned.
That's it. Simple.
Of course, simple is not easy.
"I would have written a shorter letter but I just didn't have the time."
-- various attributions.
now, if a simple rant is not enough, i offer up an epigram i first shared publicly back in 2016:
remember, when designing your #WebAPI, your data model is not your object model is not your resource model is not your message model #API360
And if a single epigram is not motivation enough, how about a whole slide deck from APIStrat 2016 devoted to explaining that simple phrase?
Now, armed (again) with this clear, simple advice, you're ready to avoid the debacle of data-centric industry-wide APIs.
just go out there and take action (not data)!
there are just a few days left before my live O'Reilly Implementing Hypermedia online tutorial on february 9th (11AM to 5PM EST). and i'm spending the day tweaking the slides and working up the six hands-on lessons. as i do this, i'm really looking forward to the interactive six-hour session. we'll be covering quite a bit in a single day, too.
the agenda
most of the material comes from my 2011 O'Reilly book Building Hypermedia APIs with HTML5 and Node. however, i've added a few things from RESTful Web APIs by Leonard Richardson and even brought in a few items from my upcoming book RESTful Web Clients.
the high-level topics are:
- Designing a Hypermedia API
- Using the DORR Pattern for Coding Web APIs
- Understanding the Collection+JSON Media Type
- Building Hypermedia SPA Clients
by the time the day is done, everyone will have a fully-functional Hypermedia API service up and running and a Cj-compliant general-purpose hypermedia client that works w/ ANY Web API that supports the Collection+JSON media type.
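for anyone who hasn't seen the format before, a Collection+JSON response body looks roughly like the sketch below (the hrefs and data fields here are made up; see the media type spec for the full details). the point is that a general-purpose Cj client navigates the links, items, queries, and template elements instead of being hard-coded to any one service.

```javascript
// Rough sketch of a Collection+JSON response body (hypothetical hrefs and fields).
// Served with the media type "application/vnd.collection+json".
const response = {
  collection: {
    version: "1.0",
    href: "http://api.example.org/friends/",
    links: [
      { rel: "feed", href: "http://api.example.org/friends/rss" }
    ],
    items: [
      {
        href: "http://api.example.org/friends/mike",
        data: [
          { name: "full-name", value: "Mike A.", prompt: "Full Name" },
          { name: "email", value: "mike@example.org", prompt: "Email" }
        ]
      }
    ],
    queries: [
      {
        rel: "search",
        href: "http://api.example.org/friends/search",
        prompt: "Search",
        data: [{ name: "search", value: "" }]
      }
    ],
    template: {
      data: [
        { name: "full-name", value: "", prompt: "Full Name" },
        { name: "email", value: "", prompt: "Email" }
      ]
    }
  }
};

console.log(response.collection.items.length); // a client walks this structure
```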
Greenville Hypermedia Day
the tutorial is geared toward both individual and team participation. i know some companies are arranging a full-day session with their own dev teams for this, too. and i just heard about a cool event in Greenville, SC for people who want to get the "team" spirit...
i found out that Benjamin Young is hosting a Hypermedia Day down in Greenville, SC on feb 9th. If you're in the area, you can sign up, show up, join the tutorial in progress, and chat it up w/ colleagues. I know Benjamin from our work together for RESTFest and he's a good egg w/ lots of skills. He'll be doing a Q&A during the breaks in the tutorial modules and i think he might have something planned as an "after-party" thing at the end of the day.
if you're anywhere near Greenville, SC on feb-09, you should join Benjamin's Hypermedia Day festivities!
cut me some slack
i know most of the attendees are going "solo" -- just you, me, and the code -- that's cool. O'Reilly is hosting a live private Slack channel for everyone who signs up for the tutorial. I'll be around all day (and probably some time after that, too) so we can explore the exercises, work out any bugs, and just generally chat.
it's all ready!
so, as i wrap up the slides, the hands-on lessons, the github repo, and the heroku-hosted examples, i encourage you to sign up and join us for a full day of hypermedia, NodeJS, and HTML5.