Building your API properly

So far, we’ve strongly advocated the following strategy:

  • Building many small components.
  • Managing those components through dependencies.

When this strategy is working well, it simplifies our job by limiting the scope of what we need to understand to make any given change.

If we’re not careful with how we build our APIs, though, this strategy can quickly become a big mess fueled by cascading dependencies. Specifically, maintaining a healthy ecosystem is difficult if each release of an API breaks compatibility with older versions.

Breaking changes have several different forms:

  • An API can add requirements to input parameters, such as adding a new required field to our Quiz.

  • An API can change the shape of its output, such as changing all of our quiz functions to return {:ok, quiz} tuples (sketched after this list).

  • An API can change its behavior in unexpected ways, such as treating an amount as dollars rather than cents.
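To make the second bullet concrete, here’s a hedged sketch using hypothetical modules: version one of a quiz function returns the quiz map directly, while version two wraps it in {:ok, quiz}. Any caller that pattern matches on the old shape breaks the moment it upgrades.

    defmodule QuizV1 do
      def build(title), do: %{title: title, questions: []}
    end

    defmodule QuizV2 do
      # Same data, but now wrapped in a tagged tuple.
      def build(title), do: {:ok, %{title: title, questions: []}}
    end

    %{title: _} = QuizV1.build("Addition")   # matches: callers get the bare quiz map
    %{title: _} = QuizV2.build("Addition")   # raises MatchError: the return shape changed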

Let’s look quickly at an approach to APIs that will improve compatibility as we improve the various independent components in our system. We’ll honor three rules.

Rule one: Don’t add new requirements to existing APIs, only options

Many beginning developers tend to require and validate every argument to a remote API. Then, as those APIs are extended, they make each new argument required as well. There’s a problem with that approach:

  • If servers require every parameter on every request, each new parameter means we’ll have to upgrade the client and the server simultaneously.

  • With just one client and one server component, that strategy may seem viable. As dependencies like this cascade through a system, though, upgrades get exponentially more difficult.

Then, we lose all of the advantages we were seeking by building decoupled components in the first place.

If we want to extend an API, we can extend it with options, as sketched after this list. This leads to two advantages:

  • Servers can provide new API functionality to the same endpoints without requiring all clients to change.

  • Later, clients can upgrade to take advantage of these new options.
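Here’s a minimal sketch of that idea in Elixir, using a hypothetical QuizBuilder module. The new behavior arrives through an optional keyword list with defaults that preserve the old behavior, so existing callers never have to change:

    defmodule QuizBuilder do
      # Old callers pass only the required arguments; new behavior is opt-in
      # through the trailing keyword list.
      def build_quiz(title, questions, opts \\ []) do
        shuffle? = Keyword.get(opts, :shuffle, false)
        limit = Keyword.get(opts, :limit, length(questions))

        selected =
          questions
          |> maybe_shuffle(shuffle?)
          |> Enum.take(limit)

        %{title: title, questions: selected}
      end

      defp maybe_shuffle(questions, true), do: Enum.shuffle(questions)
      defp maybe_shuffle(questions, _), do: questions
    end

    # An existing call site keeps working unchanged:
    QuizBuilder.build_quiz("Addition", ["1 + 1?", "2 + 2?"])

    # An upgraded client can opt in to the new options:
    QuizBuilder.build_quiz("Addition", ["1 + 1?", "2 + 2?"], shuffle: true, limit: 1)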

Rule two: Ignore anything you don’t understand

The no-new-requirements rule pertains to public-facing APIs. There’s a similar rule for dealing with data: ignoring everything you don’t understand makes it possible to:

  • Slowly add new fields.

  • Request options that may not yet be supported.

  • Upgrade our systems incrementally.

These first two rules work together well. For example, say there’s an export program expecting a fixed set of fields representing a product. When the server makes new fields optional, it does two things (as the sketch after this list shows):

  • It ignores optional fields that are empty.

  • It ignores fields it doesn’t know about.
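A minimal sketch of that export in Elixir, with hypothetical field names, might look like this. The exporter keeps only the fields it recognizes and skips optional fields that are empty, so either side can deploy first:

    defmodule ProductExporter do
      @known_fields ~w(name price sku description)

      def export(product) when is_map(product) do
        product
        |> Map.take(@known_fields)                                   # drop fields we don't know about
        |> Enum.reject(fn {_key, value} -> value in [nil, ""] end)   # skip empty optional fields
        |> Map.new()
      end
    end

    ProductExporter.export(%{
      "name" => "Notebook",
      "price" => 499,
      "sku" => "NB-001",
      "description" => "",      # empty optional field: skipped
      "weight_grams" => 320     # unknown field: ignored
    })
    #=> %{"name" => "Notebook", "price" => 499, "sku" => "NB-001"}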

This way, the system will function well through change. It doesn’t matter which system deploys first. The server exports the new fields only when both the client and server provide them. This is the ideal behavior.

Rule three: Don’t break compatibility; provide a new endpoint

Don’t break users of an endpoint. Rather than extending an existing endpoint in incompatible ways, provide a new endpoint that does the new thing. Modern languages have many ways to scope and delegate functions, and those features give us plenty of flexibility with naming.
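As a sketch of what this can look like in code, with hypothetical function names: rather than changing what build_quiz/2 returns, we leave it alone and add a new function for the new return shape, built on top of the old one.

    defmodule QuizService do
      # Existing endpoint: its return shape never changes, so old callers stay safe.
      def build_quiz(title, questions) do
        %{title: title, questions: questions}
      end

      # New endpoint: offers the new {:ok, quiz} shape without touching the old one.
      def create_quiz(title, questions) do
        {:ok, build_quiz(title, questions)}
      end
    end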

We’ll go one step further. Server endpoints are not the only APIs that stand to benefit from this approach; everyday function libraries break these rules all the time. Semantic versioning says that minor versions remain backward compatible while major versions may break compatibility. Those rules might look wise, but a far better way is to adopt practices that don’t break compatibility in the first place.

