
View on autonomous teams: part two

Toon Wijnands
7 minutes

WELCOME TO PART 2: BUILDING OUR FOUNDATION

In the first part of this blog (click here for part one), we explored key building blocks for team autonomy, including goal setting, minimizing dependencies, and creating effective boundaries. These steps are critical to empowering teams and aligning their efforts with organizational goals. 

Now, in part 2, we’ll continue the journey by diving into tools, information access, and guardrails: practical measures that help teams achieve true autonomy while maintaining alignment and quality. Let’s continue! 

TESTING

It’s great if we can build things more independently of other teams. But what about testing? One of the benefits of “keep things together that are bound to change together” is that the reverse is also true: other components don’t need a change. And if other components don’t change, why test them? However, that is not the full story. There will always be some end-to-end testing, at least for regression purposes. The solution here is test automation: if the end-to-end tests are fully automated, every team can trigger them whenever they need to. No shocking conclusion, but we still have work to do to reach that state.
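To make that concrete, here is a minimal sketch of what “every team can trigger them” could look like: a pipeline step that asks a shared test-orchestration service to run the end-to-end regression suite for the current commit. The endpoint, token variable and response shape are illustrative assumptions, not a description of our actual tooling, and a real step would also poll for the result before letting the deployment continue.

```typescript
// Hypothetical sketch: a pipeline step that starts the shared end-to-end
// regression suite for the commit being built. Endpoint, token and response
// shape are assumptions for illustration only.
async function runEndToEndSuite(): Promise<void> {
  const response = await fetch(
    "https://test-orchestrator.internal/api/suites/e2e-regression/runs",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.TEST_ORCHESTRATOR_TOKEN}`,
        "Content-Type": "application/json",
      },
      // Tell the orchestrator which build artifact we want validated.
      body: JSON.stringify({ commitSha: process.env.CI_COMMIT_SHA }),
    },
  );

  if (!response.ok) {
    throw new Error(`Could not start the e2e suite: ${response.status}`);
  }

  const { runId, status } = (await response.json()) as { runId: string; status: string };
  console.log(`Started e2e regression run ${runId} (status: ${status})`);
}

runEndToEndSuite().catch((err) => {
  console.error(err);
  process.exit(1); // Fail this pipeline step so the deployment is blocked.
});
```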

MORE SELF-SERVICE

Let’s come back to the DNS example. Imagine you are coding on a feature, fingers dancing across the keyboard because you are fully in the zone. And then you find out you need an extra DNS record. You can manage that locally, but once you put this code into the pipeline, it will break if you can’t set up that DNS record for every test and production environment. The typical process in an enterprise is that DNS is managed by another team: you fill in some kind of form in Word, send it out, and then wait for the other team to handle it. And it probably involves calling someone on that other team to ask for a personal favor, so your request gets fulfilled a bit faster than normal. 

Typical throughput time: a few days, and if you are unlucky, two weeks. So much for team autonomy and deploying independently of others. 

If we take a closer look at what is happening here, there is no real reason the other team needs to review your request. You’ve passed on an A record with an IP address and a TTL (or any other DNS record) and the other team just copies it into the DNS server configuration. To put it another way: the DNS team is not adding any value in this particular process (of course they do add value by running the DNS service smoothly). The solution here is clear: automate it by providing self-service to the original team. You enter your info and boom, it works. 
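To illustrate, a self-service request could be as small as the snippet below: the team (or its pipeline) posts exactly the data the DNS team would otherwise copy by hand. The endpoint, token variable and payload shape are assumptions for the sake of the example, not our actual API.

```typescript
// Minimal sketch of DNS self-service from the requesting team's point of view.
// The service URL, token and payload shape are hypothetical.
interface ARecordRequest {
  name: string;    // e.g. "my-feature.acc.example.internal"
  address: string; // IPv4 address the record should point to
  ttl: number;     // time-to-live in seconds
}

async function requestARecord(record: ARecordRequest): Promise<void> {
  const response = await fetch("https://dns-selfservice.internal/api/records/a", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.DNS_SELFSERVICE_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(record),
  });

  if (!response.ok) {
    throw new Error(`DNS self-service request failed: ${response.status}`);
  }
  console.log(`A record for ${record.name} created`);
}

// The same call works for every test and production environment,
// so nobody has to wait for a form in Word to be processed.
requestARecord({ name: "my-feature.acc.example.internal", address: "10.0.12.34", ttl: 300 })
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });
```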

NOT EVERYTHING HAS TO BE SELF-SERVICE. THE KEY IS TO IDENTIFY SITUATIONS WHERE THE SERVICE TEAM DOES NOT ADD ANY VALUE, AS THIS HAPPENS QUITE OFTEN.

To stick with the example: a zone transfer for a domain name is typically something that needs proper attention from the DNS team: communicating with the previous owner about the transfer time, doing the paperwork, adjusting TTLs, and so on. The value added here is taking all that work off the original team’s plate and, more important than the amount of work, relieving them of needing the specific knowledge and experience this process requires. 

The DNS service is just one example, but the general rule here is to automate internal IT processes with a self-service interface. That also puts a requirement on the IT tools we select: they should come with APIs (or a proper web-based GUI that enables decent role-based access controls).

That also introduces the next challenge: 

GETTING THE RIGHT INFORMATION TO THE TEAMS

Now we have all these fancy self-service thingies. Great, right? But where do we find them?

And this is not only about self-service; it is also about finding API documentation and quality rules. The solution: make developer information accessible in one place.

That sounds easy, but in reality it is not. A famous answer within Essent has been: “It’s on the wiki.” The statement was probably correct, but the problem was finding that answer. Tons of information was put on the wiki, but structuring that content and making it properly searchable was nobody’s responsibility.

We are solving that by introducing our developer portal HyperHub. It’s based on the open-source Backstage framework, and there is a team owning it, making sure usability for our engineers is the top priority. Because of its component-based structure we are able to integrate all kinds of things into the portal: self-service components, the API service catalog, the event catalog, CI/CD pipeline status, team documentation, development guidelines, and so on.

One could argue that there is no difference between a wiki and HyperHub. However, there is a significant difference: HyperHub can tap into the workflow of the teams. Much of the information in HyperHub is based on YAML files in the teams’ repos, so updating documentation can be integrated into the team’s CI/CD workflow, which by design enforces completeness of the documentation.
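As an example of what that enforcement could look like, the sketch below shows a CI step that fails the build when the repository’s Backstage catalog descriptor (catalog-info.yaml) is missing or incomplete. The set of required fields is an assumption about what a team might choose to enforce, not a HyperHub requirement.

```typescript
// Hedged sketch of a CI check on the Backstage catalog descriptor.
// Assumes js-yaml is available as a (dev) dependency.
import { existsSync, readFileSync } from "node:fs";
import yaml from "js-yaml";

function checkCatalogInfo(path = "catalog-info.yaml"): void {
  if (!existsSync(path)) {
    throw new Error(`${path} is missing; the component will not show up in the portal`);
  }

  const descriptor = yaml.load(readFileSync(path, "utf8")) as Record<string, any>;
  const requiredFields = [
    descriptor?.metadata?.name,
    descriptor?.metadata?.description,
    descriptor?.spec?.owner,
    descriptor?.spec?.lifecycle,
  ];

  if (requiredFields.some((field) => !field)) {
    throw new Error("catalog-info.yaml must declare a name, description, owner and lifecycle");
  }
  console.log("catalog-info.yaml looks complete");
}

try {
  checkCatalogInfo();
} catch (err) {
  console.error(err);
  process.exit(1); // Block the pipeline until the documentation is fixed.
}
```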

Touching the subject of guidelines brings us to the next topic:

BETTER UNDERSTANDING OF GUARDRAILS

In a big enterprise there is a lot of guidance for the teams. Things that we need to comply with.

To name a few: legal guidelines, licensing guidelines, lifecycle management policies, security policies, architectural principles etcetera.

Guidance often goes hand in hand with checks: does your solution comply with the guidance? And typically, someone outside of the team needs to sign off. There you have it. Yet another thing that hampers your autonomy.

So how do we get rid of this checking dependency? Our solution? Reverse the psychology, a paradigm shift. As long as a team sees this guidance as extra work that distracts them, they will only face an ever-growing pile of compliance checks.

If we want true team autonomy, the team should live up to it: with great power comes great responsibility. This means that a team intrinsically wants to deliver a solution that is not only functionally great, but also legally compliant, architecturally a perfect match, fully secure, etcetera.

And that means engineers need to drop the ‘stop-controlling-me-I’ve-got-this’ mindset and replace it with a ‘how-do-you-see-this-puzzle-and-how-would-you-have-solved-it’ mindset. This goes both ways: often enough, the people who perform the checks come in with a ‘just-follow-the-rule’ mindset instead of a ‘we-work-together-to-make-a-compliant-solution’ mindset.

This shift in mindset is a big change that we need to work on. It’s not that teams don’t want to own a great, perfect solution. All teams I’ve met in my career (even the bad ones) were proud of what they delivered. The question here is more about which things we need to change in the organization to make it easier to live up to that desire. 

People setting the guidance must be able to explain the reason behind the guidance. Remember the second paragraph of this blog?

Let’s not tell people how to do something, but tell them what to achieve and why that is necessary. The team will figure out the rest. Often a guideline is not written in that format, and it will be difficult for a team to own it in that situation. That is not a problem (we value human interaction above extensive documentation), but the person responsible for the guidance should be able to explain it in a conversation. If that’s not possible, the guideline should be dropped or improved. These conversations about guidelines are important feedback loops for the quality of the guidance.

Another root cause to address is teams not being aware of the guidance. Then it is quite logical that you don’t see that guidance reflected in the implementation. Therefore, making the guidance easily findable, searchable, and digestible for the teams is key (yet another area of attention for our HyperHub).

This is not an easy task either. The primary reason these guidelines exist is that they address a topic a team cannot oversee by itself; it takes a different perspective and dedicated focus on that domain to stay on top of things. The guidance for the team is the condensed summary of the information from that domain that is relevant to them. We need to strike a careful balance between not sharing too much information (so that the team can digest it quickly) and sharing enough information to be complete.

Furthermore, many checks can be automated in the CI/CD pipeline. This removes the dependency over time: we only have to check that the guideline is well implemented in the pipeline, because after that the checks will run on every deployment. This is of course the preferred way, but it will not always be practically feasible.
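To give a feel for what such an automated guardrail can look like, here is an illustrative sketch of a pipeline step that checks dependency licenses against an allowlist. The allowlist and the licenses.json input are assumptions; in a real setup a license-scanning tool would generate that file, and the guardrail owners would maintain the list.

```typescript
// Illustrative guardrail check: fail the build when a dependency uses a
// license outside the approved set. Input file and allowlist are assumed.
import { readFileSync } from "node:fs";

const ALLOWED_LICENSES = new Set(["MIT", "Apache-2.0", "BSD-3-Clause", "ISC"]);

interface DependencyLicense {
  name: string;
  license: string;
}

function checkLicenses(path = "licenses.json"): void {
  const dependencies = JSON.parse(readFileSync(path, "utf8")) as DependencyLicense[];
  const violations = dependencies.filter((dep) => !ALLOWED_LICENSES.has(dep.license));

  if (violations.length > 0) {
    for (const dep of violations) {
      console.error(`Dependency ${dep.name} uses a non-approved license: ${dep.license}`);
    }
    process.exit(1); // The check runs on every build, so no manual sign-off is needed.
  }
  console.log("All dependency licenses comply with the guideline");
}

checkLicenses();
```

Because the check itself is versioned and runs on every pipeline, the conversation with the guardrail owner shifts from signing off individual releases to reviewing the check once.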

CONCLUDING THOUGHTS

There’s no single silver bullet that will instantly create “autonomous teams”; it’s a journey, not a destination. However, I firmly believe that by making continuous, incremental improvements, our teams can grow a little more autonomous every week. Over time, these small steps will lead us to the kind of true autonomy that supports our vision of becoming The Energy Tech Company.



Toon Wijnands

Lead Enterprise Architect

Hi, I'm Toon and I am a Lead Enterprise Architect at Essent.

My days are mostly filled with managing and facilitating our DevOps teams and architects to transform our application landscape into a cloud-centric, event-driven one, while continuing to deliver business value at the same time. In my free time I like to play the piano, and I played volleyball for a long time.