Byte for your thoughts

Some thoughts on programming. Mostly subjective, potentially helpful.

Any software system will eventually experience a degradation, an outage or a similar incident. Depending on the type of business you're in, this can lead to a loss of revenue and clients, or even have legal repercussions. That is why it's important to have a good incident management process for resolving incidents, reporting on their impact and preventing them from happening again.

In this blog post I'll describe my personal experience with some challenges of establishing an incident management process, and why I built a tabletop role-playing game, Deployments and Disasters, to deal with them.

My perspective

I work at Infobip. We are a tech company with over 300 developers and an additional 200 customer support engineers. Our developers are divided into teams of around 5 to 10 people. Teams own their services and are responsible for both building them and maintaining them in production. Support engineers monitor key business metrics and maintain contact with clients. In total that's over 500 people, and each one of them can be called upon to participate in resolving an ongoing incident.

One benefit of this approach is that the people most familiar with a given codebase will work directly on resolving the incident. Additionally, trained support personnel stay in contact with clients, allowing developers to focus on fixing the issue.

On the other hand, there are drawbacks to this approach. Not everyone works equally well under pressure, different people have different levels of experience, each team might develop its own procedures, use different tools, etc. Most of these can be addressed with a fine-tuned incident management procedure and a set of common tools to back it up. Some good ideas include ChatOps, centralized logging and metrics, premade dashboards and alerting. I won't go into detail on those here.

Challenges to tackle

What I'd like to focus on instead are the following three challenges that emerge after the management procedure and all of the tools are in place:

  1. Everyone involved with incident management should familiarize themselves with the procedure and the available tools. The more people there are, the bigger this issue becomes. At some point even awareness of the process can become a problem.
  2. The incident management process involves different roles: customer support, programmers, sysadmins, database administrators, DevOps engineers, etc. They all need to work together, despite having different objectives at any given time during the incident.
  3. Many roles involved in incident management are technical. They view resolving the incident as their objective and are focused on detecting and removing the immediate cause of the issue. As a result, they may not think of the affected customers and thus miss out on opportunities to notify them of the impact, or even alleviate parts of it sooner.

Awareness of the procedure

Educating people about procedures and tools can generally be achieved with incident management training. Roughly speaking, there are two approaches to it:

  1. Simulating the incident realistically, with participants using actual tools, interacting with high-fidelity data and directly applying their real-life skills.
  2. Keeping the training abstract and basing it on gaming techniques. I've found success with adopting the mechanics of tabletop role-playing games.

Picking a game-based approach has several advantages. For one, it avoids the prohibitive cost of recreating the data required to realistically simulate an incident. It also allows for addressing the other two challenges, namely empathy towards other roles and a customer-centric mentality.

However, the killer feature of game-based training is that it's fun, especially when compared to reading procedure documentation and how-to guides, or attending seminars. The benefits of this are twofold. First, it makes the exercise more memorable for the attendees. Second, it helps with organizing future sessions, as people are more interested in attending.

There's one additional benefit of the role-playing approach: it turns the entire exercise into a structured storytelling experience. This structure provides a safe environment for all attendees to share their insights and knowledge with each other. The benefits are most noticeable with introverted players.

Empathy for other roles

At any one time during an incident, each role might have a different objective. For example, a support engineer needs to inform the clients of the exact impact of the incident, while programmers need to identify its cause. In this situation support needs information about the client-facing API from a development team that is focused on debugging the backend. This can create tension between those two roles.

One thing that games excel at is placing players in other people's shoes. In Deployments and Disasters I facilitate this by defining specific roles with unique mechanical characteristics. When starting a game session I make sure that players shuffle the roles so that they don't play the same one they have in the real world. For example, I encourage developers to play the role of customer support.

This has two benefits:

  1. Players get to experience what an incident looks like from the perspective of other roles. This builds empathy by making players go through the tough choices and strive for the hard-to-reach objectives that their colleagues usually face.
  2. It also encourages players to share their knowledge and practices. It reverses the real-world dependencies between the roles. For example, if developers usually depend on database administrators for optimizing their databases, then inverting the roles will make the admins more sympathetic towards the other role's needs.

Customer centric mindset

I'd like my developers to approach incident management with more of a customer-centric mindset. Other teams, companies or situations may require other adjustments. Fortunately, game mechanics are well suited for this.

In games, players regularly receive and accomplish arbitrary objectives. By carefully picking stated objectives and mechanical incentives, game designers can shape the player mindset.

In Deployments and Disasters I achieve this with a few rules:

  1. The main objective of the game session is to resolve the incident within a set number of turns, represented by an incident clock. At the beginning of the game players have 6 turns to resolve the issue. However, if they devote time to communicating the issue to the clients, their time doubles to a total of 12 turns.
  2. Clients are active actors in the game (controlled by the DM) and they can impact the state of the system. For example, they can escalate the problem by attempting to fix it themselves. Alternatively they can be used to reveal valuable information and hints.
  3. During the course of the game some important (gold / platinum) clients can contact the players and ask for status updates or request special attention. This can be used to illustrate different types of clients.
  4. The incident scenario starts with only some clients impacted. Players can escalate the situation and spread the impact to other clients, or they can proceed with caution and reduce the impact as they go along.

Work so far

So far I've set up a basic set of rules and game mechanics for Deployments and Disasters, which you can find on GitHub. The game presented there is an early sample of a work in progress. One significant omission is the lack of incident scenarios. I've created one for the test runs I've played at work; however, it is tightly coupled with our internal procedures and the custom tools we use. My plan is to create an example scenario built on open-source tooling that anyone can use as a base for their own exercise.

I've held two test training sessions at work and the feedback was generally positive. Players found the game entertaining, but they also reported learning about new tools and procedures. I've yet to create additional scenarios, but there's interest in replaying the existing one with teams that haven't seen it yet. I'm also exploring ways of connecting the exercise with the employee evaluation and professional development programs that we have.

Feel free to use Deployments and Disasters to build your own incident scenarios on top of. Or stay tuned for future developments, as I will strive to publish example scenarios myself. You can watch the GitHub repo for updates, or follow this blog by:

If you have any feedback, comments or improvement ideas you can send me a pull request, or just contact me at:

Advanced use cases

In this part of the CLI development series I'll go over some of the more advanced use cases. I've previously discussed general tips for making command line apps nice to use. If you missed that blog post, you can find it here.

As use cases grow more complex, it makes more and more sense to look for existing solutions and reuse or incorporate them in our own applications. That's why I'll devote more space in this post to highlighting existing projects, as opposed to talking about my own experiences.

Source code generation

One use case where CLIs excel is project scaffolding and code generation. For examples of such apps, take a look at Yeoman and its list of generators. Yeoman itself is oriented towards web development, although its most popular generator, JHipster, outputs Spring Boot applications. As a side note, if you ever find yourself with some spare time, checking out JHipster is a wonderful way to spend a few hours. Another old-school example of project scaffolding in the Java world is Maven archetypes.

If you look at those examples you will quickly find that they provide a rich set of features and customizations. One thing that most dedicated code generation tools have in common is a plugin system that allows users to define their own templates. If code generation is your primary use case, developing a plugin for an existing tool is a good idea. That approach will save you a lot of time, and you'll end up with a more polished product.

On the other hand, there are CLIs that offer code generation as an addition to their core features. For example, think of the init commands in tools like npm or git. Extracting scaffolding features out of these tools and delegating them to dedicated code generation apps would be detrimental to the user experience. If you find yourself in a similar situation, you should implement code generation within your CLI instead.

The most popular approach to code generation is to treat source files as plain text. In order to generate them you will need a good templating library. I've had to use some clumsy and cumbersome templating libs in legacy projects, so I appreciate the importance of picking a library that works for you. One experiment I like to do when evaluating a templating library is to write a template that serializes some data structure into pretty-printed JSON. The JSON format has a few tricky rules, like requiring a comma after all but the last property, escaping quotes in string values, proper indentation for the pretty format, etc. If a templating library makes writing JSON templates enjoyable, you'll probably have no problems with source code.
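
To make this concrete, here's a minimal sketch of that experiment using Go's text/template package (the library I describe later in the post). The prop type and the template are made up for the example: the {{if $i}} guard handles the comma-after-all-but-last rule, and printf "%q" handles quoting and escaping.

package main

import (
	"os"
	"text/template"
)

type prop struct{ Name, Value string }

// The {{if $i}} guard emits a comma before every property except the first,
// and printf "%q" quotes and escapes the values, which covers the tricky
// parts of hand-templating JSON.
const jsonTmpl = `{
{{- range $i, $p := . }}{{ if $i }},{{ end }}
  {{ printf "%q" $p.Name }}: {{ printf "%q" $p.Value }}
{{- end }}
}
`

func main() {
	t := template.Must(template.New("json").Parse(jsonTmpl))
	props := []prop{{"name", "demo"}, {"description", `say "hi"`}}
	if err := t.Execute(os.Stdout, props); err != nil {
		panic(err)
	}
}

Running it prints a correctly escaped, pretty-printed JSON object.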

One last trick that can simplify source code generation is running the output of your CLI through a standard formatter for the language you are generating. This doesn't work if there are competing formatting standards, or if the community uses widely different formats. An example of this is the Java world, where no two code bases look the same. On the other hand, the Go programming language comes with the prescribed gofmt formatter that code generation tools use. Having properly formatted source code becomes important when it comes to pull requests and similar situations that require diffing two files or versions.
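
In Go this trick is even available as a library call. Here's a minimal sketch using the standard go/format package, which applies the same canonical style as gofmt, so a generator can emit sloppy output and clean it up in one step:

package main

import (
	"fmt"
	"go/format"
)

func main() {
	// Sloppy generator output: inconsistent spacing and indentation.
	src := []byte("package demo\nfunc  Add(a ,b int)int{return a+b}\n")

	// format.Source applies the same canonical style gofmt enforces.
	formatted, err := format.Source(src)
	if err != nil {
		panic(err) // the generated code didn't even parse
	}
	fmt.Print(string(formatted))
}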

Introspection

Another advanced use case for CLIs is source code analysis. This one is more complicated than source code generation. While generation can be implemented using text manipulation, in order to analyze source code you will generally need to tokenize it and build a syntax tree out of it. This is getting away from templating and into compiler theory.

Fortunately, most modern languages provide tools for introspection of their own source code. So, if you know you'll need to analyze Java code, it might be a good idea to write your CLI in Java. You can probably find a library for parsing your language of choice, which you can reuse in your tool.
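
As an illustration, here's a minimal sketch of that idea in Go, whose standard library ships the go/parser and go/ast packages. It parses a source snippet and lists every function declaration it finds:

package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

const src = `package demo

func Hello() string { return "hi" }
func add(a, b int) int { return a + b }
`

func main() {
	fset := token.NewFileSet()
	// Parse the source text into an abstract syntax tree.
	file, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		panic(err)
	}
	// Walk the tree and report every function declaration we find.
	ast.Inspect(file, func(n ast.Node) bool {
		if fn, ok := n.(*ast.FuncDecl); ok {
			fmt.Printf("func %s declared at %s\n", fn.Name.Name, fset.Position(fn.Pos()))
		}
		return true
	})
}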

Problems with this approach arise if you need your tool to analyze multiple languages. One example of this is code editors and IDEs. A common solution for that type of app is to use the Language Server Protocol (LSP) to communicate with dedicated language server implementations. That way the application code is decoupled from the language servers, which can be implemented for each language. A more advanced example is the source{d} engine, an application for source code analysis. Under the hood it uses Babelfish, a universal code parser that can produce syntax trees for various languages. Like LSP, it has dedicated language drivers executing in separate Docker containers that communicate with a single app server over gRPC.

If your CLI requires analysis of source code in multiple languages, you should probably use one of the existing solutions. If the features of LSP are enough for you, it seems like the most widely adopted solution. If you need the full flexibility of abstract syntax trees, then Babelfish is a bit more complex, but a more powerful solution.

Text-based UI

You might find that a simple loop of reading the arguments, processing them and outputting the results is no longer enough for your use case. Processing can be complex, so you may need to update users on its progress. You may need extra input from your users. The workflow you are implementing might be complex and require continuous interaction. At this point you may need to build a Text-based User Interface (TUI).

Input prompts are the most basic use case. An example of this is how npm init (and many other scaffolding tools) guides users through the process of setting up a project. When designing such interactions you should allow all prompts to be customized with CLI flags, so that the command can execute without any prompts. A common pattern for doing this is to add a -y / --yes flag that automatically accepts the default options. In doing this you will make your command usable by larger scripts for which user interaction is impractical.
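
Here's a minimal sketch of that pattern using only the Go standard library; the project-name prompt and its default value are made up for the example:

package main

import (
	"bufio"
	"flag"
	"fmt"
	"os"
	"strings"
)

func main() {
	// A dedicated CLI framework would also give you the short -y form.
	yes := flag.Bool("yes", false, "accept all defaults without prompting")
	flag.Parse()

	name := "my-project" // the default answer
	if !*yes {
		fmt.Printf("project name (%s): ", name)
		line, _ := bufio.NewReader(os.Stdin).ReadString('\n')
		if answer := strings.TrimSpace(line); answer != "" {
			name = answer
		}
	}
	fmt.Println("scaffolding project:", name)
}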

Next up, there's the use case of informing users of the progress of CLI command execution. Modern dependency management tools (again, npm is a good example) will display a live progress bar while downloading dependencies. Another example is the HTTP load testing tool vegeta. It can dynamically output the progress of a stress test while it's running. It also does something interesting: it allows you to pipe its output through a formatter tool to a dedicated plotting terminal application, jplot. Jplot then renders live charts in the terminal. This is a good pattern to follow if you need live plotting and don't feel like re-implementing it yourself.
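
Live progress output usually boils down to one terminal trick: the carriage return. Here's a bare-bones sketch in Go, with made-up work standing in for a real download; real tools build much richer bars on the same idea:

package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	total := 20
	for done := 0; done <= total; done++ {
		// \r moves the cursor back to the start of the line, so each
		// print overwrites the previous state instead of scrolling.
		fmt.Printf("\rdownloading [%-20s] %3d%%", strings.Repeat("=", done), done*100/total)
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println()
}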

Lastly, there are full-blown TUI applications, starting with “simple” ones like htop and on to the likes of vim and emacs. If you are thinking of building similar TUI apps, you should be able to find a framework in your language of choice that can help with laying out your application's UI elements. However, if you are expecting contributions from other developers, it might be a better idea to go with something like a web app UI. That way you will have a larger pool of contributors to attract to your project.

What I did

In the CLI I built for work I implemented some code generation features. For project scaffolding I actually reused an existing parent project that all subsequent ones fork off of. Because of this my init command basically does a shallow git clone of the parent repository.

I also implemented an add command for generating the config and code required to expose new API endpoints. Since I picked Go as my language, I went with the standard library's template package. I found it expressive enough to write all my templates and generate properly formatted JSON, Groovy and Kotlin code. It has just enough features for all my use cases, and not so many as to make it complicated to use. Much like the Go language itself, using it was a zen-like experience.

I did not have any use for code analysis or TUI in that particular project. However, I've recently been playing around with termui, a terminal dashboard library also written in Go. It's easy enough to work with, but my use cases are not all that advanced either.

In conclusion

This blog post concludes the series on CLI development. You can find the first post, which deals with picking the technology and distribution stack, here, and general CLI usability tips here. While the series is done, I might have some more thoughts on CLI development in the future. In case you are interested in this type of content, you can:

Command line citizen

Continuing the series on development of command line tools, this week I'll look into more practical tips for making a CLI app that's nice to use. If you missed the first part, where I discussed picking the language for your app, you can find it here. Like I mentioned in the first post, what follows are just some of my own opinions, tips and tricks.

Arguments and flags

On the most basic level, users interact with your app by invoking commands and sub-commands, and by passing arguments and flags to them. Maintaining a consistent and intuitive set of commands, arguments and flags makes for a better user experience. Here are a few tips to keep in mind when defining them:

  1. Keep command names short and intuitive. They should be easy to remember. You can define several aliases for the same command. For example, you can make both the new and init commands do the same thing.

  2. Provide a helpful error message in case the user attempts to invoke a non-existent command. A good example of this is how git will suggest a similar-sounding alternative:

    $ git stats
    git: 'stats' is not a git command. See 'git --help'.
        
    The most similar command is
        status
    
  3. Define a short version for commonly used flags. One convention is to use a double dash for the full name and a single dash for the short name. Also, you can group short flags behind a single dash. For example, docker run -it image is the same as docker run --interactive --tty image.

  4. For extra credit, add auto-completion support. Covering bash and perhaps zsh can make most of your users happy.

Common commands

There are some commands that many CLIs could benefit from. One example is a help command that exposes the documentation from within the app itself. Another one is version. Try to keep a consistent format for the version number. Semantic versioning is always a good option.

If there's some kind of project setup, like scaffolding a new project or initializing CLI configuration files, automate that with an init command. Examples of this include git init and npm init.

In case your CLI requires some specific setup on the local machine for some or all of its functions, it's a good idea to build a doctor command that verifies the local setup and offers instructions for fixing it. For examples check out npm doctor and flutter doctor. I've found that giving users a diagnostic tool like that makes supporting your CLI way easier.

Providing help

Regardless of how logically laid out the CLI commands seem to you, your users will manage to get lost. Having an always-present and accessible help system helps with that. Make sure that every command supports the --help flag.

Get into a habit of writing help documentation for your commands. Think of it in the same way you think of tests: your command is not done without it. I find it interesting to use a documentation-first approach and write the docs before the implementation. A document, implement, refactor loop works well too. Docs don't have to be super detailed or cover all the features from the start. You can add details as your command develops, and you can go back and rephrase parts of them later. The trick is to have some docs from the beginning, and keep iterating on them.

When it comes to presentation, make sure help is accessible from the CLI itself. Depending on the platform, your users may expect docs to be available in a dedicated format as well, for example man pages on Linux. You might also want to package your docs as Markdown and include them in your Git repository. Most Git servers, like GitHub, GitLab and Bitbucket, will nicely render Markdown files. If that's not enough, you can go one step further with a dedicated GitHub wiki or a GitHub Pages site.

When it's time to quit

By now your users can find their way around your well-structured commands, and they know where to look for help. They are happily using your tool, until they encounter some long-running CLI process and decide to quit. When that time comes, try not to turn your app into an Internet meme:

I've been using VIM for about 2 years now, mostly because I can't figure out how to exit it.

Average VIM user

You should listen for kill signals from the OS (like SIGINT and SIGTERM) and handle them by terminating your app. Perform any cleanup you need, like flushing files and stopping background processes, but make sure the app exits in the end.
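
Here's a minimal sketch of that in Go, using the standard os/signal package:

package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigs := make(chan os.Signal, 1)
	// Ask the OS to deliver SIGINT (Ctrl+C) and SIGTERM to our channel.
	signal.Notify(sigs, os.Interrupt, syscall.SIGTERM)

	fmt.Println("working... press Ctrl+C to quit")
	<-sigs // block until a kill signal arrives

	// Perform cleanup here: flush files, stop background processes, etc.
	fmt.Println("\ncleaning up and exiting")
}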

Miscellaneous tips

A few more random tips:

  1. Provide a global flag for controlling the output verbosity level. No one likes overly chatty apps, but having no debugging output when something goes wrong is worse. Add a --verbose flag to all of your commands so your users can pick the level they need.

  2. Provide users with a way to format the CLI output. A tabulated format is nice for showing results in a terminal. However, piping results to another command calls for a simpler, more machine-friendly format. For an example of rich formatting support, take a look at docker formatting.

  3. Allow users to define commonly used settings and flags in a configuration file. If your use-case revolves around working with individual projects, add support for project-level configuration, like git does.

What I did

Taking care of all the things mentioned above gets easier if you start with a feature-rich framework. For my needs I picked Cobra, a well-established library in the Go community. It's used by Docker, Kubernetes and etcd, to name a few. Other languages and ecosystems have their own popular frameworks. Take time to find one that fits your needs and coding style. You can often find them by looking at popular CLI tools written in a given language. For example, the Heroku CLI is a Node app and uses the oclif framework.
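
To give you a taste, here's a minimal sketch of a Cobra-based CLI; the mycli and init command names are made up for the example. Note how it covers earlier tips almost for free: command aliases, a persistent --verbose / -v flag, and an auto-generated --help for every command:

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	var verbose bool

	rootCmd := &cobra.Command{
		Use:   "mycli",
		Short: "An example CLI assembled with Cobra",
	}
	// A persistent flag is inherited by every subcommand.
	rootCmd.PersistentFlags().BoolVarP(&verbose, "verbose", "v", false, "enable debug output")

	initCmd := &cobra.Command{
		Use:     "init",
		Aliases: []string{"new"}, // both `mycli init` and `mycli new` work
		Short:   "Scaffold a new project",
		Run: func(cmd *cobra.Command, args []string) {
			if verbose {
				fmt.Println("running init in verbose mode")
			}
			fmt.Println("project initialized")
		},
	}
	rootCmd.AddCommand(initCmd)

	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}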

Next up

That wraps up the discussion of general patterns for making CLIs that are nice to use. In the next post of the series I'll go into more advanced use cases that CLI tools are good for. If that sounds interesting, you might want to:

I've recently been working a lot with Spring Cloud Gateway. It's been (mostly) a pleasant experience, but I do encounter some annoying quirks from time to time. This is a story about one of them.

Use-case

A few days ago I was debugging an issue at work when the developer of a backend microservice that my API Gateway proxies to asked me to try using keep-alive connections. The request sounded reasonable enough; adding headers to downstream requests is easy in Cloud Gateway, and I already have a few headers that I'm adding.

Hold my beer. I got this.

me, going into this

setRequestHeader

I configure my Spring Gateway routes programmatically, so adding a header to a request looks something like this:

builder.routes()
  .route(r -> r.path("/foo/bars")
    .filters(f -> f.setRequestHeader(HttpHeaders.CONNECTION, "keep-alive"))
    .uri(backendUrl))
  .build();

I already had my route set up, so all I needed to add was the setRequestHeader line. So far so good.

Signs of trouble

Next up, I updated my tests to check for the new header. This is where I detected the problem. I use WireMock to simulate backend services in tests. Checking the requests that the API Gateway sends downstream is straightforward:

verify(getRequestedFor(urlEqualTo("/foo/bars"))
  .withHeader("Connection", equalTo("keep-alive"))
);

And the test failed. Here's what WireMock told me:

No requests exactly matched. Most similar request was:  expected:<
GET
/foo/bars

Connection: keep-alive
> but was:<
GET
/foo/bars

Basically, requests were going through and the route was properly sending them to the mocked backend service, but the Connection header was missing.

Debugging

Having a test that fails consistently was useful, because it allowed me to debug the issue. I put my first breakpoint inside the SetRequestHeaderGatewayFilterFactory class. That's where the GatewayFilter that sets headers is implemented. I ran my test and everything looked good: the Connection header was added to the mutated request, and the mutated exchange was passed on to the filter chain.

Next up, I decided to look into NettyRoutingFilter. That's where Spring Cloud Gateway makes HTTP requests to backend services. I put a breakpoint at the start of the filter method and inspected the exchange parameter. My Connection header was still there. I proceeded to read the rest of the method and found this line:

HttpHeaders filtered = filterRequest(getHeadersFilters(), exchange);

It turns out there's a set of HttpHeadersFilters that operate on HTTP headers and can exclude some of the ones previously set by GatewayFilters. In my case the culprit was RemoveHopByHopHeadersFilter. As its name suggests, its function is to remove headers that are only relevant for requests between the client and the API Gateway, and are not intended to be proxied to backend services. In this particular case, though, I wanted to retain the Connection header. Fortunately, RemoveHopByHopHeadersFilter can be adjusted with external configuration.

Solution

The solution was to add the following:

spring.cloud.gateway.filter.remove-hop-by-hop.headers:
    - transfer-encoding
    - te
    - trailer
    - proxy-authorization
    - proxy-authenticate
    - x-application-context
    - upgrade

to the application.yaml config file. That list overrides the default set of hop-by-hop headers, and since Connection is no longer on it, RemoveHopByHopHeadersFilter stopped removing the header. Tests passed, I deployed the API Gateway, and the backend services handled themselves better.

Example

I've created a demo project that illustrates this issue and the solution. You can find it on GitHub. Feel free to play around with it, and if you find additional issues, or better solutions, please let me know about them at:

Tower of Babel

I've recently been working on a command line tool. You can read more about it in my previous blog post. In the process of writing it I've stumbled upon a couple of ideas and best practices that I'd like to share with you in a series of blog posts. In this first post I'll go over picking the programming language and package manager for your CLI.

A single executable

The first decision we developers face when starting a new project is which language to use. In this case, like in most others, I'd suggest you think of your users first. The main way they will interact with your CLI is by executing it on the command line. (Yeah, I know, I deserve a Captain Obvious badge for this observation. Bear with me.) When building a CLI you should try to make it as simple as possible to execute. And what's easier to run than a single self-contained executable?

This puts languages that require an interpreter or a virtual machine (like Python or Java) at a disadvantage. Sure, most Linux distros come with Python pre-installed, but even they might have conflicting versions. And Windows users don't have it out of the box.

This machine disparity leads into a second consideration: building executables for multiple platforms. Your users will most likely be spread over Linux, Windows and macOS. You should build your CLI app for all three platforms. Having a toolchain that supports compilation to multiple target platforms from the same machine will make your life easier. You will be able to compile your code locally for any platform. And your CI pipeline will be simpler (no need for dedicated Mac nodes).

One situation where you can get around these constraints is if you are targeting a specific group of users that rely on a cross-platform technology. A good example would be Node in the JavaScript community. If you are building a CLI tool exclusively for frontend developers, you can presume they all have Node and npm installed. However, this can still limit you in the future. For example, you might be locked into an older version of Node until you are sure all of your users have upgraded.

Note that it might not be enough for a technology to be ubiquitous among your users if it's cumbersome to use. For example, I'd caution against packaging your CLI as a .jar file, even if you are targeting Java developers. The Java classpath is just too much of a mess.

No external dependencies

In addition to not using an external runtime, your CLI app should also avoid using external dynamically linked libraries. Depending on your packaging and distribution setup, you can satisfy all of your dependencies at install time. However, this will complicate your setup. There will be situations where just giving your users an executable file that they can immediately run pays off.

In order to satisfy this requirement, you should look for a language that can bundle your dependencies within a single compiled executable. This will result in bigger file sizes, but it's much more practical. It's also worth considering how well your language interacts with the OS. You should look for powerful, platform-agnostic APIs for working with the underlying OS baked into the language. Having a solid standard library helps a language meet these requirements.

Distribution

If your language of choice matches all the suggestions from the previous sections, you can easily build a statically linked executable for any major OS. You can give that executable to your users and they can run it as is. This is super useful for testing early builds or custom flavors of your app. It does require some effort from your users to set up their PATH to include your executable. That's something a package manager can help with.

Another argument in favor of using a package manager is the upgrade flow. You will inevitably release new versions of your CLI. A package manager will alert your users that a new version is out and will make upgrading easy. It's hard to overstate the benefits of having users on the latest version of your app.

If you base your tool on a cross-platform technology like Node, chances are that ecosystem has a preferred package manager, like npm. If you choose to build native executables, you should look for native package managers. This is where the cross-platform approach makes things easier. However, having your app compile into a standalone executable simplifies integration with multiple package managers.

You will need to consider your users' habits when choosing package managers. Mac users are probably used to Homebrew. Linux has more diversity; you can start by building a Debian package, then listen to your users' feedback and add more packages as they are requested. On Windows the situation is not so clear. Chocolatey is one option, but it may not be widely adopted by your users. As a rule, you should avoid forcing users to adopt a new tool just to install your app. If it comes to that, prefer a manual installation process.

What I ended up with

For a language I picked Go. It provides dead simple compilation into a single statically linked executable for all major OS platforms. It comes with a very strong standard library, good APIs for interacting with the underlying OS and a vibrant open-source community.
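
For illustration, cross-compiling with Go's toolchain amounts to setting two environment variables; the binary names below are made up for the example:

$ GOOS=linux GOARCH=amd64 go build -o bin/mycli-linux
$ GOOS=darwin GOARCH=amd64 go build -o bin/mycli-macos
$ GOOS=windows GOARCH=amd64 go build -o bin/mycli.exe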

If your audience allows for it, you might be able to stick with the Node + npm combination. Alternatively, you might pick some other natively compiled language. For example, Rust is one popular option, though compilation for multiple targets is a bit more involved than with Go. You can find more about using Rust to build CLIs here. Lastly, you can even use Java with something like GraalVM. With Graal you can build native executables that don't require a JRE to run.

For packaging I chose to create Homebrew and Debian packages. Both builds were relatively simple to automate using Jenkins CI. Homebrew in particular is easy, as all it requires is a Git repository with a Ruby script describing your package. Since my CLI is used internally at work, I publish my packages to internally hosted Bitbucket and Artifactory instances. My Windows users do not have a favorite package manager, so I leave them with executables they can simply download anywhere onto their PATH and use as is.

Next up

In the next installment of this blog series I'll go over what makes CLI app a good command line citizen. I'll cover topics like usability and consistency. If that seems like something you would be interested in consider:

Should I use angular, react or insert-framework-of-the-month for my next project?

prospective web developer in 2019

What framework should I use for web development in insert-exciting-new-language?

unsuspecting developer encountering an exciting new language

The web has become the default UI layer in recent years. Most projects grow around either a web or a native mobile app. I myself usually start prototyping a new product by cobbling together a web app and hooking it up to a backend service.

However, what if that web (or mobile) app was not the right choice? Can we get away without building it at all? Here are a few interesting results I've come across while exploring this premise.

Frontend, what is it good for?

Let's see what we'll be getting rid of. A web app frontend serves two basic functions:

  1. Present information to the user.
  2. Collect input from the user.

Any substitute will need to serve those two functions.

Drawbacks of writing a web app:

  1. You have to build it in the first place. Code you don't have to write is the best code.
  2. You need to teach your users where to find it and how to use it. Using conventions such as Material Design helps with usability, but discovery is still an issue.
  3. It can get hard to satisfy power users. Think about users who want to do everything without lifting their fingers away from the keyboard, or who write Python scripts to crawl your web app.
  4. You might be more interested in backend development. This one becomes more important in the case of a side project or a Google-style 20% project.

With these drawbacks in mind, here are a few projects that I've worked on lately and how I got away without writing a web app frontend for them.

Internal framework support tool

Use case

At my job I lead an API infrastructure team. We develop a framework that other dev teams in the company (~20 teams) use to expose their public APIs. We also maintain the application that runs those APIs. It's something like an in-house implementation of AWS Lambda and API Gateway in one service. We noticed that developers from other teams had low visibility into the current state of our production system. I decided to build a dashboard-like tool for them to monitor and manage their APIs.

Failed attempt

First I envisioned the solution as a web app dashboard that collected data from production instances of the API service and provided some management operations. Looking for a learning opportunity on this pet project, I picked the AngularDart framework to build it with. A few weeks later I had built a really nice generic table component (which lagged terribly if populated with more than 5 rows) and lost interest in the project. Count this as the 1st and 4th drawbacks taking their toll.

Success story

A few months later, inspired by the frontendless idea and after discovering the wonderful systems programming language Go, I decided to revisit the project. For my second attempt I decided to build a command line app instead of a web frontend.

I actually finished this time, and discovered new and interesting use cases in the process. Writing a CLI tool allowed me to easily implement scaffolding features that help developers build their API scripts locally. This is something that would have been difficult to implement in a web app, and would probably never have crossed my mind.

Since the target audience for this project was other developers, having a CLI instead of a web app did not hurt usability. If anything, it was easier to make power users happy, as they can integrate the CLI into their CI pipelines and other scripts. So this approach countered the 3rd drawback (in addition to the 1st and 4th).

Client hash lookup tool

Use case

The API platform I work on uses hashids to generate short IDs for our clients. Occasionally folks from the support or sales departments needed to find the hash ID that belongs to a given client, or the reverse. They used to ping my team each time. We decided to build a simple tool they could use to do the lookup themselves.

Roads not taken

We abandoned a few ideas right away. For example, building a CLI like in the previous example wouldn't have worked because our users, support and sales people, were not tech-savvy enough. We also decided not to go the web app route because it seemed like overkill for such simple functionality.

Solution

One tool that all departments within the company use is Slack. So we decided to implement this lookup tool as a Slack bot. We used the Hubot framework, and I ended up finally learning the basics of CoffeeScript. I guess there's no escaping web technologies, even in a frontendless project.

An unexpected benefit of using a Slack bot was ease of discovery. Since our bot participates in public channels, every time a user interacts with it, all other channel members see it happen. Every usage is simultaneously a feature demo for potential new users.

Projects registry

Use case

My team recently decided to invest more time into API governance. One thing that became clear immediately was that we needed a registry of all existing APIs. We needed to know which in-house team exposes which APIs and which platform features they use.

You guessed it, frontendless

For this one we didn't even consider building a web app. We already use Confluence to store our internal project documentation. That's the place our product owner and other stakeholders go to find information. However, API projects grow dynamically as developers work on them on a daily basis. Manually updating a Confluence page every time a dev in the company added a new feature to their API wasn't sustainable.

In the end we created a script that crawls through our Git server, finds all API projects, collects the relevant info and updates the Confluence page with it. Both Confluence and Bitbucket (our Git server of choice) provide detailed enough APIs, so this wasn't hard to pull off. We set the script to run every night and that was it.
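
For a flavor of what the Confluence half of such a script can look like, here's a minimal sketch in Go. The instance URL, page ID, credentials and page body are all placeholders; the request shape follows Confluence's REST API for updating content, which requires the page version number to be incremented on every update:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Confluence requires the page version to be bumped on every update;
	// a real script would first GET the page to read the current version.
	payload, _ := json.Marshal(map[string]interface{}{
		"type":    "page",
		"title":   "API registry",
		"version": map[string]int{"number": 42}, // previous version + 1
		"body": map[string]interface{}{
			"storage": map[string]string{
				"value":          "<h1>APIs</h1><p>generated table goes here</p>",
				"representation": "storage",
			},
		},
	})

	req, err := http.NewRequest(http.MethodPut,
		"https://confluence.example.com/rest/api/content/12345",
		bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("registry-bot", "secret") // hypothetical service account
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("Confluence responded with", resp.Status)
}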

Using an existing wiki platform to display our data allowed us to skip entire categories of features, like access permissions, user mentions and comments. And in the end it was easier for our users to find the information they need, because they were already used to looking for it on Confluence.

Takeaways

There's one common thing in all three of these examples:

The web app was replaced by a tool that's “closer” to the intended users.

In the case of developers, that was a CLI app. In the case of employees from other departments, that was Slack. In the case of stakeholders seeking project information, that was an internal wiki. Each time the end product was either easier for new users to discover and learn, or more flexible for power users.

Stepping out of the web app mindset has also had some interesting side effects:

  • Discovering exciting new features that wouldn't fit into a web app.
  • Learning new technologies, such as Slack bots.
  • Significantly reducing development times.

Granted, there are still many situations where picking a classic software-as-a-service approach and building a web app is the right call. However, when you find yourself starting a new project, ask yourself if there is another model better suited to your users.

And then share your #frontendless success story with me!