A Toolbox of Software Architecture Review Techniques - Pt 1

Part 1: The problem with many architecture reviews and the simplest thing you can do to make them better

I was at the Software Architect conference in London back in October and saw Rob Smallshire (@robsmallshire) give a talk on Conducting an Architecture Review. Rather than presenting techniques for reviewing an architecture, he instead focussed on the things that go on around a review that you need to get right in order to be successful - such as how to get stakeholders involved, how to get the right buy-in, how to set expectations with the project owner, and how to get the right people to participate and what they should contribute. It was a good talk; you should check out the video in the link.

Chatting with people in the break it was clear that whilst lots of people do some kind of architecture / design review in their work, they aren’t very familiar with the established techniques and methodologies for performing them. There was actually an industry study (Babar2009 [pdf]) a few years ago which showed that most organisations use home-grown, informal, ad-hoc techniques. And of the approximately 40% that claim to use a structured approach, very few have heard of or use any established technique - despite how long such techniques have existed and been successfully used.

So this is a short blog series about architecture review techniques - the ones that exist, the differences between them, and how to choose between them based on your context. In this first post I’ll start with the simplest technique that can help and then get into more detail later on.

Overview Stuff

Let’s start with a simple overview. Like many topics in software architecture, Grady Booch has provided a short and to-the-point description that explains how to perform an architecture review (Booch2010):


  • Identify the forces on the system.
  • Grok the system’s essential architecture.
  • Generate scenarios that exercise the relevant forces against the architecture.
  • Throw the essential architecture against those scenarios, then evaluate how they land relative to the relevant forces.
  • Wash, rinse, repeat.

A short digression: This reference (Booch2010) is to an IEEE Software article. Although IEEE SW is an excellent source of information, it is behind a paywall and therefore it may as well not exist for the vast majority of software architects that I know. Luckily they provide a free podcast where Grady Booch reads the article for you. For all the references mentioned in these blog posts I’ll also try to link to publicly accessible sources for further reading (podcast index, mp3).

That list of steps is everything you need in a nutshell. Pretty much all other sources of information are just an elaboration of one or more of those steps in varying degrees of formality.
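
To make the shape of that loop concrete, here’s a minimal sketch in Python. It is purely illustrative - the names Scenario, generate_scenarios, and evaluate are mine, not Booch’s - but it shows how the steps feed into each other and why the process naturally iterates:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    description: str    # e.g. "primary database node fails during peak load"
    force: str          # the force it exercises, e.g. "availability"
    findings: list[str] = field(default_factory=list)

def review_iteration(forces, architecture, generate_scenarios, evaluate):
    """One pass of the wash-rinse-repeat loop: generate scenarios that
    exercise each force, throw the architecture against them, and record
    how it lands. Repeat until the findings stop surfacing new risks."""
    results = []
    for force in forces:
        for scenario in generate_scenarios(force, architecture):
            scenario.findings = evaluate(scenario, architecture)
            results.append(scenario)
    return results
```

The point of the sketch is the dependency order: you can’t generate useful scenarios (step 3) without first knowing the forces (step 1), and you can’t evaluate anything (step 4) without an essential architecture to throw the scenarios at (step 2).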

Let’s begin with a very basic technique, then look at the structure of more detailed reviews, and then at the different aspects of your context that help you decide which parts of a detailed review structure are necessary for you.

Q: What’s the simplest thing I could do that would be useful?
A: Active Design Reviews

Active Design Reviews (Parnas1985 [pdf]) is one of the original discussions on the topic - and it’s still excellent. There are some older sources (e.g., Fagan1999 [pdf]) but this description from Parnas in the 1980s is a great place to start. The wording may read a little old-fashioned in places - e.g., “...a description of the modules access programs…” - but the principles are just as applicable now as they were back then.

Parnas' article begins with a description of how ad-hoc "design reviews" are performed in practice and why this makes it difficult for the review to produce a useful result. I’m going to repeat his points here because even though these design review anti-patterns were described back in the 80s, I still see them practiced today on large-scale projects with budgets in the order of 50-100M euros.

Here’s the sequence of events Parnas describes in an ad-hoc review:

  1. A massive quantity of highly detailed design documentation is delivered to the reviewers three to four weeks before the review.
  2. The designated reviewers, many of them administrators who are not trained in software development, read as much of the documentation as is possible in the time allowed. Often this is very little.
  3. During the review, a tutorial presentation of the design is given by the design team; during this presentation, the reviewers ask any questions they feel may help them to understand the design better.
  4. After the design tutorial, a round-table discussion of the design is held. The result is a list of suggestions for the designers.

And here are the consequences. Again, this is pretty much verbatim, because Parnas captured it so well 30+ years ago:
  1. The reviewers are swamped with information, much of which is not necessary to understand the design… decisions are hidden in a mass of implementation details.
  2. Most reviewers are not familiar with all of the goals of the design and the constraints placed on it.
  3. All reviewers may try to look at all of the documentation, with no part of the design receiving a concentrated examination.
  4. Reviewers who have a vague idea of their responsibilities or the design goals, or who feel intimidated by the review process, can avoid potential embarrassment by saying nothing.
  5. Detailed discussions of specific design issues become hard to pursue in a large, all-encompassing design review meeting.
  6. People who are mainly interested in learning the status of the project, or who are interested in learning about the purpose of the system, may turn the review into a tutorial.
  7. Reviewers are often asked to examine issues beyond their competence.
  8. There is no systematic review procedure and no prepared set of questions to be asked about the design.
  9. As a result of unstated assumptions, subtle design errors may be implicit in the design documentation and go unnoticed.

Sound familiar? I don’t understand how we could be so clear about these problems - and how to overcome them - 30+ years ago and yet still see them in so many places today - but that’s a topic for another rant...

Parnas goes into detail about how to address these problems but I won’t repeat it all now because we’ll get to that when we look at more recent techniques. But there are a number of principles in his recommendations which provide the simplest thing you could possibly do.

Let’s start with the most important - and no prizes for guessing where he gets the name of his article from - the reviewer has to take an active stance. That involves two things:

  • As a reviewer you must prepare, in advance, the checklists and scenarios that test for the system qualities you are interested in.
  • And for those scenarios you must ask active, open questions - that is, questions that require a descriptive answer and not simply a «yes/no». You want the designer to explain how the solution will solve a particular problem or how the design would need to change for a proposed future need. So never ask questions such as "is it highly available?". Instead, ask "which state is maintained between calls?", "what are the consequences if that server goes down?", and so on (see the sketch after this list).
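
To make that second point concrete, here’s a minimal sketch of what a prepared checklist might look like, with a crude guard against passive phrasing. Everything in it - the qualities, the questions, the looks_passive heuristic - is my own illustration, not part of Parnas’ method:

```python
# A hypothetical reviewer's checklist: quality attributes mapped to the
# active, open questions prepared before the review session.
CHECKLIST = {
    "availability": [
        "What state is maintained between calls?",
        "What are the consequences if that server goes down?",
        "How does the system recover from a failed deployment?",
    ],
    "modifiability": [
        "Which modules change if we add a new payment provider?",
        "How would the design accommodate a second client platform?",
    ],
}

# Questions starting with these words invite a bare yes/no answer.
YES_NO_OPENERS = ("is ", "are ", "does ", "do ", "can ", "will ", "has ")

def looks_passive(question: str) -> bool:
    """Crude heuristic: flag questions phrased so the designer can
    answer yes/no instead of describing the design."""
    return question.lower().startswith(YES_NO_OPENERS)

for quality, questions in CHECKLIST.items():
    for question in questions:
        assert not looks_passive(question), f"Rephrase actively: {question!r}"
```

The heuristic is deliberately simplistic - the real discipline is in the preparation, not the tooling - but writing your questions down in this form makes it obvious when one of them has drifted into «yes/no» territory.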

Some other useful principles are also part of his approach to running a review:


  • You will most likely need an iterative approach to the review. That is, you’ll need to run a number of sessions, starting with the overall architecture and then using subsequent iterations to dig into more detail or specific areas of concern.
  • You shouldn’t approach the task as a single, all-encompassing, «let’s just review everything» workshop. Instead, you need to identify particular aspects of the design - the qualities of the system - that are most important. Also, you will most likely need multiple people on the review team and they should each focus on particular qualities. When working with your typical enterprise applications there is usually a security expert who has a specific area of focus, but other system qualities such as modifiability, performance, usability, testing, operations, etc., can also be assigned to particular people on the team.
  • Finally, make sure there is a design representation that is actually reviewable. That doesn’t mean you need 300 pages of inch-perfect UML specifications and mathematical proofs. But it also doesn’t mean a random bunch of box-and-line pictures with no description of what those boxes and lines are supposed to represent. Identify the views that you need in order to depict the system qualities that are important. Then use a notation that other people understand - UML, ArchiMate, or whatever.
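
Pulling those three principles together, a review plan might look something like the following sketch. The sessions, roles, and views are assumptions I’ve made up for a typical enterprise system, not something prescribed by Parnas:

```python
from dataclasses import dataclass

@dataclass
class ReviewSession:
    iteration: int      # which pass of the iterative review this belongs to
    focus: str          # the quality or area of concern for this session
    reviewer: str       # the specialist assigned to that quality
    views: list[str]    # design views the team must supply, in a shared notation

plan = [
    ReviewSession(1, "overall architecture", "lead reviewer",
                  ["context diagram", "component view"]),
    ReviewSession(2, "security", "security expert",
                  ["deployment view", "data flow view"]),
    ReviewSession(2, "performance", "performance engineer",
                  ["deployment view", "runtime/sequence view"]),
    ReviewSession(3, "modifiability", "lead reviewer",
                  ["module/dependency view"]),
]
```

Note that each session has a narrow focus and a named owner - nobody is asked to review everything at once, and the design team knows exactly which views to prepare.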

If you take only one thing from this article series, focussing on these takeaways alone will help you perform better reviews. In fact, whenever you are in a review and you hear someone ask a question such as «Is the response time good enough?», «Is it secure?», «Is it service oriented?», «Can that be reused?», I recommend that you print out the Active Design Reviews article, roll it up, and give them a good whack with it - and then make them read it before they are allowed to take part in another review.

That’s probably enough for the first article - it’s already drifting into TL;DR territory.

Before we get into specific review techniques it’s worth looking at the structure of a comprehensive review (in a little more detail than Booch’s five steps) so that it’s easier to compare them. That will be the topic for the next post.

Thanks to colleagues @marioaparicio and @morten_moe for reviewing a draft of this article.