Byte for your thoughts

Some thoughts on programming. Mostly subjective, potentially helpful.

I've recently been working a lot with Spring Cloud Gateway. It's been (mostly) a pleasant experience, but I do encounter some annoying quirks from time to time. This is a story about one of them.

Use case

A few days ago I was debugging an issue at work when a developer of one of the backend microservices my API Gateway proxies to asked me to try using keep-alive connections. The request sounded reasonable enough: adding headers to downstream requests is easy in Cloud Gateway, and I was already adding a few.

Hold my beer. I got this.

me, going into this

setRequestHeader

I configure my Spring Gateway routes programmatically, so adding a header to a request looks something like this:

builder.routes()
  .route(r -> r.path("/foo/bars")
    .filters(f -> f.setRequestHeader(HttpHeaders.CONNECTION, "keep-alive"))
    .uri(backendUrl))
  .build();

I already had my route set up, so all I needed to add was the setRequestHeader line. So far so good.

Signs of trouble

Next up, I updated the tests to check for the new header, and that's where the problem showed up. I use WireMock to simulate backend services in tests, and verifying the requests the API Gateway sends downstream is straightforward:

verify(getRequestedFor(urlEqualTo("/foo/bars"))
  .withHeader("Connection", equalTo("keep-alive"))
);

And the test failed. Here's what WireMock told me:

No requests exactly matched. Most similar request was:  expected:<
GET
/foo/bars

Connection: keep-alive
> but was:<
GET
/foo/bars

Basically, requests were going through and the route was properly sending them to the mocked backend service, but the Connection header was missing.

Debugging

Having a test that failed consistently was useful, because it let me debug the issue. I put my first breakpoint inside the SetRequestHeaderGatewayFilterFactory class, where the GatewayFilter that sets headers is implemented. I ran my test and everything looked good: the Connection header was added to the mutated request, and the mutated exchange was passed on down the filter chain.

Next up, I decided to look into NettyRoutingFilter. That's where Spring Cloud Gateway makes HTTP requests to backend services. I put a breakpoint at the start of the filter method and inspected the exchange parameter. My Connection header was there. I proceeded to read the rest of the method and found this line:

HttpHeaders filtered = filterRequest(getHeadersFilters(), exchange);

It turns out there's a set of HttpHeadersFilters that operate on HTTP headers and can exclude some of the ones previously set by GatewayFilters. In my case the culprit was RemoveHopByHopHeadersFilter. As its name suggests, it removes headers that are only relevant for the hop between the client and the API Gateway and are not meant to be proxied to backend services. Connection is one of those headers, but it was exactly the one I wanted to keep. Fortunately, RemoveHopByHopHeadersFilter can be adjusted through external configuration.

Solution

The solution was to add the following:

spring.cloud.gateway.filter.remove-hop-by-hop.headers:
    - transfer-encoding
    - te
    - trailer
    - proxy-authorization
    - proxy-authenticate
    - x-application-context
    - upgrade

to the application.yaml config file. The property overrides the filter's default header list, and since connection is no longer on it, RemoveHopByHopHeadersFilter stopped removing the Connection header. Tests passed, I deployed the API Gateway, and the backend services handled their connections better.

Example

I've created a demo project that illustrates this issue and the solution. You can find it on GitHub. Feel free to play around with it, and if you find additional issues or better solutions, please let me know.

Tower of Babel

I've recently been working on a command line tool. You can read more about it in my previous blog post. In the process of writing it I've stumbled upon a couple of ideas and best practices that I'd like to share with you in a series of blog posts. In this first post I'll go over picking the programming language and package manager for your CLI.

A single executable

The first decision we developers face when starting a new project is which language to use. In this case, like in most others, I'd suggest you think of your users first. The main way they will interact with your CLI is by executing it on the command line. (Yeah, I know, I deserve a Captain Obvious badge for this observation. Bear with me.) When building a CLI you should try to make it as simple as possible to execute. And what's easier to run than a single self-contained executable?

This puts languages that require an interpreter or a virtual machine (like Python or Java) at a disadvantage. Sure, most Linux distros come with Python pre-installed, but even then you can run into conflicting versions. And Windows users don't have it out of the box.

This machine disparity leads to a second consideration: building executables for multiple platforms. Your users will most likely be spread across Linux, Windows and macOS, and you should build your CLI app for all three. Having a toolchain that supports compilation to multiple target platforms from the same machine will make your life easier: you will be able to compile your code locally for any platform, and your CI pipeline will be simpler (no need for dedicated Mac nodes).

One situation where you can get around these constraints is if you are targeting a specific group of users who rely on a cross-platform technology. A good example would be Node in the JavaScript community. If you are building a CLI tool exclusively for frontend developers, you can presume they all have Node and npm installed. However, this can still limit you in the future. For example, you might be locked into an older version of Node until you are sure all of your users have upgraded.

Note that it might not be enough for a technology to be ubiquitous among your users if it's cumbersome to use. For example, I'd caution against packaging your CLI as a .jar file, even if you are targeting Java developers. The Java classpath is just too much of a mess.

No external dependencies

In addition to not relying on an external runtime, your CLI app should avoid depending on external dynamically linked libraries. Depending on your packaging and distribution setup, you can satisfy all of your dependencies at install time, but that complicates your setup. There will be situations when just handing your users an executable file they can run immediately pays off.

To satisfy this requirement, look for a language that can bundle your dependencies into a single compiled executable. This results in bigger files, but it's much more practical. It's also worth considering how well your language interacts with the OS: you want platform-agnostic yet powerful APIs for working with the underlying OS baked into the language. Having a solid standard library helps a language meet both requirements.

Distribution

If your language of choice matches all the suggestions from the previous sections, you can easily build a statically linked executable for any major OS. You can give that executable to your users and they can run it as is, which is super useful for testing early builds or custom flavors of your app. It does, however, require some effort from your users to set up their PATH to include your executable. That's something a package manager can help with.

Another argument in favor of using a package manager is the upgrade flow. You will inevitably release new versions of your CLI. A package manager will alert your users that a new version is out and will make upgrading easy. It's hard to overstate the benefits of having users on the latest version of your app.

If you base your tool on a cross-platform technology like Node, chances are that ecosystem has a preferred package manager, like npm. If you choose to build native executables, you should look for native package managers, and this is one place where the cross-platform approach makes things easier. However, having your app compile into a standalone executable simplifies integration with multiple package managers.

You will need to consider your users' habits when choosing package managers. Mac users are probably used to Homebrew. Linux has more diversity; you can start by building a Debian package, then listen to your users' feedback and add more packages as they are requested. On Windows the situation is not so clear. Chocolatey is one option, but it may not be widely adopted among your users. As a rule, you should avoid forcing users to adopt a new tool just to install your app. If it comes to that, prefer a manual installation process.

What I ended up with

For a language I picked Go. It provides dead simple compilation into a single statically linked executable for all major OS platforms. It comes with a very strong standard library, good APIs for interacting with the underlying OS, and a vibrant open-source community.
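
To make the standard library point concrete, here's a minimal Go sketch of the kind of OS interaction a CLI typically needs. Everything it uses comes from the standard library, and the "mycli" and "config.yaml" names are made up for the example:

package main

import (
  "fmt"
  "os"
  "path/filepath"
  "runtime"
)

func main() {
  // os.UserConfigDir hides platform differences: it resolves to
  // ~/.config on Linux, %AppData% on Windows and
  // ~/Library/Application Support on macOS.
  configDir, err := os.UserConfigDir()
  if err != nil {
    fmt.Fprintln(os.Stderr, "cannot determine config dir:", err)
    os.Exit(1)
  }

  // "mycli" and "config.yaml" are illustrative names, not a convention.
  configPath := filepath.Join(configDir, "mycli", "config.yaml")

  fmt.Printf("running on %s/%s, config expected at %s\n",
    runtime.GOOS, runtime.GOARCH, configPath)
}

Cross-compiling this file requires no code changes: running go build with the GOOS and GOARCH environment variables set (for example GOOS=windows GOARCH=amd64) produces an executable for the corresponding platform.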

If your audience allows for it, you might be able to stick with the Node + npm combination. Alternatively, you might pick some other natively compiled language. For example, Rust is one popular option, though compilation for multiple targets is a bit more involved than with Go. You can find more about using Rust to build CLIs here. Lastly, you can even use Java with something like GraalVM. With GraalVM you can build native executables that don't require a JRE to run.

For packaging I chose to create Homebrew and Debian packages. Both builds were relatively simple to automate using Jenkins CI. Homebrew in particular is easy, as all it requires is a Git repository with a Ruby script describing your package. Since my CLI is used internally at work, I publish my packages to internally hosted Bitbucket and Artifactory. My Windows users do not have a favorite package manager, so I leave them with executables they can simply download anywhere onto their PATH and use as is.

Next up

In the next installment of this blog series I'll go over what makes a CLI app a good command line citizen. I'll cover topics like usability and consistency. If that seems like something you would be interested in, stay tuned.

Should I use Angular, React or insert-framework-of-the-month for my next project?

prospective web developer in 2019

What framework should I use for web development in insert-exciting-new-language?

unsuspecting developer encountering an exciting new language

The web has become the default UI layer in recent years. Most projects grow around either a web app or a native mobile app. I myself usually start prototyping a new product by cobbling together a web app and hooking it up to a backend service.

However, what if that web (or mobile) app was not the right choice? Can we get away without building it at all? Here are a few interesting results I've come across while exploring this premise.

War Frontend, what is it good for?

Let's see what we'll be getting rid of. A web app frontend serves 2 basic functions:

  1. Present information to the user.
  2. Collect input from the user.

Any substitute will need to serve those 2 functions.

Drawbacks of writing a web app:

  1. You have to build it in the first place. Code you don't have to write is the best code.
  2. You need to teach your users where to find it and how to use it. Using conventions such as Material Design helps with usability, but discovery is still an issue.
  3. It can get hard to satisfy power users. Think about users who want to do everything without lifting their fingers away from the keyboard, or who write Python scripts to crawl your web app.
  4. You might be more interested in backend development. This one becomes more important in the case of a side project or a Google-style 20% project.

With these drawbacks in mind, here are a few projects I've worked on lately and how I got away without writing a web app frontend for them.

Internal framework support tool

Use case

At my job I lead an API infrastructure team. We develop a framework that other dev teams in the company (~20 teams) use to expose their public APIs, and we also maintain the application that runs those APIs. It's something like an in-house implementation of AWS Lambda and API Gateway in one service. We noticed that developers from other teams had low visibility into the current state of our production system, so I decided to build a dashboard-like tool for them to monitor and manage their APIs.

Failed attempt

At first I envisioned the solution as a web app dashboard that would collect data from production instances of the API service and provide some management operations. Looking for a learning opportunity on this pet project, I picked the AngularDart framework to build it with. A few weeks later I had built a really nice generic table component (which lagged terribly when populated with more than 5 rows) and lost interest in the project. Count this as the 1st and 4th drawbacks taking their toll.

Success story

A few months later, inspired by the frontendless idea, and after discovering the wonderful systems programming language Go, I decided to revisit the project. For my second attempt I decided to build a command line app instead of a web frontend.

I actually finished this time, and discovered new and interesting use cases in the process. Writing a CLI tool allowed me to easily implement scaffolding features that help developers build their API scripts locally. This is something that would have been difficult to implement in a web app, and would probably never have crossed my mind.
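
To give a flavor of what I mean by scaffolding, here's a stripped-down sketch. The apictl name, the init subcommand and the generated api.js file are hypothetical stand-ins; the real tool does more, but the shape is the same:

package main

import (
  "fmt"
  "os"
  "path/filepath"
)

// Starter content written by the hypothetical init subcommand.
const apiScriptTemplate = "// TODO: implement your API handler here\n"

func main() {
  if len(os.Args) < 3 || os.Args[1] != "init" {
    fmt.Fprintln(os.Stderr, "usage: apictl init <project-name>")
    os.Exit(1)
  }
  project := os.Args[2]

  // Lay out a local project skeleton the developer can start from.
  if err := os.MkdirAll(project, 0o755); err != nil {
    fmt.Fprintln(os.Stderr, err)
    os.Exit(1)
  }

  scriptPath := filepath.Join(project, "api.js")
  if err := os.WriteFile(scriptPath, []byte(apiScriptTemplate), 0o644); err != nil {
    fmt.Fprintln(os.Stderr, err)
    os.Exit(1)
  }

  fmt.Println("created", scriptPath)
}

Scaffolding like this is a natural fit for a CLI: it already runs in the developer's working directory, right next to the code they are building.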

Since the target audience for this project was other developers, having a CLI instead of a web app did not hurt usability. If anything, it was easier to make power users happy, as they can integrate the CLI into their CI pipelines and other scripts. So this approach countered the 3rd drawback (in addition to the 1st and 4th).

Client hash lookup tool

Use case

The API platform I work on uses hashids to generate short ids for our clients. Occasionally folks from the support or sales departments needed to find the hash id that belongs to a given client, or the reverse. They used to ping my team each time, so we decided to build a simple tool they could use to do the lookup themselves.

Roads not taken

We abandoned a few ideas right away. For example, building a CLI like in the previous example wouldn't have worked because our users, support and sales people, were not tech savvy enough. We also decided not to go the web app route because it seemed like overkill for such simple functionality.

Solution

One tool that all departments within the company use is Slack, so we decided to implement this lookup tool as a Slack bot. We used the Hubot framework, and I ended up finally learning the basics of CoffeeScript. I guess there's no escaping web technologies, even in a frontendless project.

An unexpected benefit of using a Slack bot was ease of discovery. Since our bot participates in public channels, every time a user interacts with it, all other channel members see it happen. Every usage is simultaneously a feature demo for potential new users.

Projects registry

Use case

My team recently decided to invest more time into API governance. One thing that became clear immediately was that we needed a registry of all existing APIs. We needed to know which in-house team exposes which APIs and what platform features they use.

You guessed it, frontendless

For this one we didn't even consider building a web app. We already use Confluence to store our internal project documentation, and that's the place our product owner and other stakeholders go to find information. However, API projects evolve as developers work on them on a daily basis, and manually updating a Confluence page every time a dev in the company adds a new feature to their API wasn't sustainable.

In the end we created a script that crawls through our git server, finds all API projects, collects the relevant info and updates the Confluence page with it. Both Confluence and Bitbucket (our git server of choice) provide detailed enough APIs, so this wasn't hard to pull off. We set the script to run every night and that was it.
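
For illustration, the overall shape of such a script in Go could look roughly like this. The REST calls are stubbed out, and the type, function names, sample data and table layout are all made up; the real Bitbucket and Confluence endpoints, auth and payloads are beside the point here:

package main

import (
  "fmt"
  "log"
  "strings"
)

// apiProject holds the bits of info we want to publish on the wiki.
type apiProject struct {
  Team     string
  Name     string
  Features []string
}

// fetchAPIProjects stands in for the part that calls the Bitbucket REST API,
// lists repositories and inspects each one for API project markers.
func fetchAPIProjects() ([]apiProject, error) {
  return []apiProject{
    {Team: "payments", Name: "refunds-api", Features: []string{"rate-limiting"}},
  }, nil
}

// updateConfluencePage stands in for the part that pushes the rendered
// table to the Confluence REST API.
func updateConfluencePage(body string) error {
  fmt.Println(body)
  return nil
}

func main() {
  projects, err := fetchAPIProjects()
  if err != nil {
    log.Fatal(err)
  }

  // Render a simple table, one row per API project.
  var b strings.Builder
  b.WriteString("|| Team || API || Features ||\n")
  for _, p := range projects {
    fmt.Fprintf(&b, "| %s | %s | %s |\n", p.Team, p.Name, strings.Join(p.Features, ", "))
  }

  if err := updateConfluencePage(b.String()); err != nil {
    log.Fatal(err)
  }
}

A scheduler (a nightly cron or CI job) is all that's needed on top of this to keep the page up to date.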

Using an existing wiki platform to display our data allowed us to skip entire categories of features, like access permissions, user mentions and comments. And in the end it was easier for our users to find the information they needed, because they were already used to looking for it on Confluence.

Takeaways

There's one thing common to all three of these examples:

A web app was replaced by a tool that's “closer” to the intended users.

In the case of developers, that was a CLI app. In the case of employees from other departments, that was Slack. In the case of stakeholders looking for project information, that was the internal wiki. Each time the end product was either easier for new users to discover and learn, or more flexible for power users.

Stepping out of the web app mindset has also had some interesting side effects:

  • Discovering exciting new features that wouldn't fit into a web app.
  • Learning new technologies, such as Slack bots.
  • Significantly reducing development times.

Granted, there are still many situations when picking a classic software-as-a-service approach and building a web app is the right call. However, when you find yourself starting a new project, ask yourself whether there is another model better suited to your users.

And then share your #frontendless success story with me!