A Toolbox of Software Architecture Review Techniques - Pt 2

Part 2: Building on our collective experience


The first article in this series looked at the simplest thing that could be useful when doing architecture reviews. This one looks at how you structure a more systematic review using the collected experience of our industry.

Standing on each other’s shoulders rather than each other’s toes

Previously I lamented the fact that we’ve had techniques for architecture reviews for 30+ years - so why are we still not making effective use of them? Wouldn’t it be great if someone collected all that experience and made it easy to use in practice? Well, they have - it’s called the Software Architecture Review and Assessment (SARA) report. Here we’ll look more closely at how to use it to design a structured architecture review.

Back in 1999 a working group got together to collect industrial experience and research techniques for performing architecture reviews. There were many participants from multiple organisations, and they presented the SARA report at ICSE in 2002 (Obbink 2002, [pdf]). If you plan on running an architecture review, or currently do them regularly, you should grab a copy and make use of it.

A digression: apart from ICSE, there are some other conferences you should check out - for instance the Working IEEE/IFIP Conference on Software Architecture (WICSA), as well as the ECSA and SATURN conferences. There are more conferences that discuss useful techniques for software architects than just the QCon and GOTO series.

The SARA report provides:
concrete, practical, experience-based guidance on how to conduct architectural reviews. This includes guidelines on:
  • what steps to follow,
  • what questions to ask,
  • what information to collect and document,
  • what documentation templates to use, and
  • tips on how to manage the social, managerial, and technical issues that arise when reviewing an artifact as important and complicated as a software architecture.
There are additional articles that collect experience with architecture reviews and provide an overall structure. I’ll include those as well in this article, but the main focus will be how to structure an architecture review using the SARA report.

Designing a Structured Architecture Review


The report structures the knowledge about architecture reviews into the following parts:

  • Review Inputs
  • Review Outcomes
  • Review Workflow
  • Methods and Techniques
  • Pragmatics and People Issues

These parts are a good starting point for planning your review.

The report then finishes with case studies to exemplify each aspect. This article will primarily focus on the inputs and outcomes, and we’ll come back to the others in later posts.

It begins by defining concepts and how they hang together. Nothing fancy, but an important place to start (figure 4-1 in the report).


The most important concepts are what the report terms the ASRs and ASDs:
The purpose of an architecture review is to understand the impact of every architecturally significant decision (ASD) on every architecturally significant requirement (ASR).
This sounds perfectly reasonable and straightforward to do. The problem - as the report notes - is that it’s often very difficult: the architecturally significant requirements are hard to identify, the architecturally significant decisions are often not documented, and the way the decisions interrelate is not easily understood. A significant part of the review process is teasing these out.
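
To make that concrete, here is a minimal sketch of the kind of bookkeeping a review team ends up doing: listing the ASRs and ASDs they have teased out, and recording the assessed impact of each decision on each requirement. It is purely illustrative - the requirements, decisions and verdicts are invented, and this isn’t an artefact the SARA report prescribes.

```python
# Illustrative only: invented ASRs and ASDs for a hypothetical system.
# The point is the shape of the exercise - assess every architecturally
# significant decision against every architecturally significant requirement.

asrs = {
    "ASR-1": "Handle 2,000 concurrent users with sub-second response times",
    "ASR-2": "Customer data must remain within the EU",
}

asds = {
    "ASD-1": "Use a single shared relational database",
    "ASD-2": "Host all services in a US-based cloud region",
}

# Assessed impact of each decision on each requirement, plus a note.
impacts = {
    ("ASD-1", "ASR-1"): ("risk", "shared database likely to become a bottleneck"),
    ("ASD-1", "ASR-2"): ("neutral", "no effect on data residency"),
    ("ASD-2", "ASR-1"): ("neutral", "latency acceptable for the target user base"),
    ("ASD-2", "ASR-2"): ("risk", "conflicts with the EU data residency requirement"),
}

for (asd, asr), (verdict, note) in impacts.items():
    print(f"{asds[asd]!r} -> {asrs[asr]!r}: {verdict} ({note})")
```

Even a simple grid like this makes the gaps visible: undocumented decisions and unexamined requirements show up as missing rows and columns.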

Inputs

The inputs include:
  • Objectives
  • Scope
  • Architecture description and artefacts
  • Evaluation Criteria
Obviously you’re going to need artefacts that describe the architecture, but the first input you need to consider is the objectives for the review. Why you are doing the review and who you are doing it for will drive the rest of the inputs and outcomes that you need to identify.


There are (at least) two starting points for running a review:

  1. It’s a one-off review that has been requested by particular stakeholders in the project.
  2. It’s one of a series of regularly scheduled reviews as part of the project lifecycle.

Objectives

From this starting point you then identify what you want to get out of the review. The report lists the following examples:

  • Certifying conformance to some standard:
    • For example: Does the architecture fulfill the constraints and requirements of the relevant standards?
  • Assessing the quality of the architecture (the most common objective):
    • Does the architecture fit the problem or mission statement?
    • Can specific qualities (e.g., scalability, performance, etc.) be architecturally controlled?
    • Etc.
  • Identifying opportunities for improvement:
    • Which design decisions should be revised in order to improve the architecture?
  • Improving communication between stakeholders.

Other objectives can include reviewing the portfolio of a newly acquired company to find functional overlap, or deciding whether it’s worthwhile investing further in an existing, long-lived system rather than purchasing or building something based on more recent design and technology.

Scope

Once you have the objectives, you need to set the scope of the review. There are many aspects to consider, and you need to agree on both what’s in and what’s out of scope for the review. This is especially important if it’s a one-off review and you are an external reviewer. Scope creep is just as damaging for reviews as it is for software development - especially in situations where stakeholders may see the review as a means for pushing their own agenda. Examples of scope questions include:

  • Is it the whole system or just some of the major components?
  • Is it a system of systems?
  • Are you including all the stakeholders or just a selection?
  • Will you consider all the evaluation criteria, or focus on just some of them?

The answers to these questions will often be affected by the time available, stakeholder availability, the concerns of whoever has commissioned the review, and whether it’s a one-off review or part of a regular series of reviews.

Evaluation criteria

With the objectives and scope you can then move on to evaluation criteria - the requirements and other inputs that you want to throw against the architecture to see how it stands up. We’ll dig into these in the next article but they include:

  • Quality requirements
    • Scenario-based and experience-based
  • Check lists
  • Application of architecture patterns
  • Relevant standards, regulations and legislation
  • Architecture smells

Evaluation criteria can also include more than these traditional requirements. For instance, a useful input is a high-level problem statement. It might be the world's greatest architecture, but if it’s not actually solving the right problem then it’s not much good. Similarly, you might have a Business Motivation Model or Impact Map - which gives you the same content but with more hipster packaging - that traces the business strategy down to specific tactical goals for the system.
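
As a taster of the scenario-based quality requirements listed above, here is a minimal sketch of how a review team might capture one as a structured record. The field names follow the widely used six-part quality attribute scenario template (source, stimulus, environment, artifact, response, response measure); the scenario itself is invented for illustration rather than taken from the SARA report.

```python
from dataclasses import dataclass

@dataclass
class QualityScenario:
    """Six-part quality attribute scenario, plus the quality it targets."""
    quality: str
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition the architecture must respond to
    environment: str       # the context in which the stimulus arrives
    artifact: str          # the part of the system that is stimulated
    response: str          # what the system should do
    response_measure: str  # how we judge whether the response is good enough

# Invented example for a hypothetical order-processing service.
peak_load = QualityScenario(
    quality="Performance",
    source="End users",
    stimulus="2,000 concurrent checkout requests",
    environment="Normal operation during a sales campaign",
    artifact="Order processing service",
    response="All requests are processed without errors",
    response_measure="95th percentile response time under 1 second",
)
```

Captured in this form, “performance” stops being a one-word requirement and becomes something the review team can actually test the architecture description against.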

Architecture description

Finally we get to the input where most people start - the architecture description itself. Often some (or all) of this documentation won't exist, and it may be part of the initial stage of a review to work with the relevant team members to create the architecture views needed. As a starting point you could use a common set of views - for instance, the Kruchten 4+1 view model. More recently I’ve also seen people using Simon Brown’s C4 model.

4+1 Model             C4 Model
Logical View          System Context diagram
Development View      Container diagram
Process View          Component diagrams
Physical View         Class diagrams
+1 for scenarios that instantiate the views for particular use cases...

These are great places to start. However, for effective reviews you should have a good understanding of which architecture views you need. These will be affected by the objectives for the review, the quality requirements that are architecturally significant, and the point in the project lifecycle at which you’re running the review. You need to identify the architecture views that allow you to apply the evaluation criteria to meet your objectives. For instance, the C4 model doesn’t have behaviour views, so if you need to evaluate anything to do with timing and concurrency then you will need to provide an additional view. Rozanski and Woods have captured many in their book on views and viewpoints for software architecture - and many of them are described on the book’s website.
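
One way to make that explicit during review planning is a simple mapping from the architecturally significant qualities to the views you need in order to evaluate them. The sketch below is an assumption-laden illustration - the qualities, view names and the mapping itself are mine, not a standard - but it shows how quickly gaps (such as a missing concurrency view) become visible.

```python
# Illustrative review-planning aid: which views are needed to evaluate
# each architecturally significant quality? The qualities, view names
# and mapping here are invented examples, not a standard.

views_needed = {
    "performance":   ["deployment view", "concurrency/process view"],
    "security":      ["deployment view", "information flow view"],
    "modifiability": ["module/development view"],
    "availability":  ["deployment view", "operational view"],
}

# Views the team has actually produced so far (e.g. from a C4 model).
views_available = {"system context", "container diagram",
                   "component diagram", "deployment view"}

for quality, needed in views_needed.items():
    missing = [view for view in needed if view not in views_available]
    if missing:
        print(f"{quality}: still need {', '.join(missing)}")
```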

You may have additional architecture artefacts beyond the architecture description itself. There may be proofs of concept, feasibility studies, or if it’s part of an agile project you may have a walking skeleton solution. For reviews of existing systems you’ll (hopefully) have the code.

Outcomes

That’s quite a lot on inputs - though there is more in the SARA report that is worth exploring. Now let’s look at the collective experience with outcomes.

The main tangible output - usually some kind of report - will depend on the objectives and the type of review you are performing. The SARA report provides a suggested structure for the report which you can use as a starting point and tailor appropriately.

Note that a review that is part of regular planned review activities and performed by people within the organisation/project will be less formal than a one-off review performed by an external reviewer for a particular stakeholder. The latter will have a greater need for documentation and justification of the outcomes.

The primary outcome is a list of risks, opportunities and recommendations concerning how the design deals with the prioritised evaluation criteria. But before we get into those, there are also intangible outcomes that should not be ignored. One of the most useful outcomes from a review is simply the improved understanding of the planned architecture - especially for those who are not central to the design team. Additionally, even if a review team doesn’t identify significant risks, the simple process of having the relevant team members explain the design will help share knowledge - and the act of doing this can often help the architects find and plug holes in their own understanding.

The first outcome most people think of is the risks, or issues, with the design. A useful approach is to identify both the risk and any relevant tradeoff that the stakeholders need to consider. For example, a design for a mobile app could recommend a native app over an HTML5 solution. The architecture review may document the extra cost and resources for supporting multiple platforms that this decision implies. But often what the stakeholders need is to understand the tradeoff so that they can prioritise the correct qualities. For instance, for a mobile game the user experience may be prioritised over the cost of supporting multiple platforms. For a relatively simple data-entry app, the tradeoff might be prioritised the other way.
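
To show how the same tradeoff can resolve differently once the stakeholders make their priorities explicit, here is a small weighted-scoring sketch. The scores and weights are invented for illustration; a real review would derive them from the prioritised evaluation criteria rather than pulling them out of the air.

```python
# Invented scores (1-5) for how well each option supports each quality.
options = {
    "native app": {"user experience": 5, "multi-platform cost control": 2},
    "HTML5 app":  {"user experience": 3, "multi-platform cost control": 5},
}

# Different systems weight the qualities differently.
priorities = {
    "mobile game":    {"user experience": 0.8, "multi-platform cost control": 0.2},
    "data-entry app": {"user experience": 0.3, "multi-platform cost control": 0.7},
}

for context, weights in priorities.items():
    scored = {
        option: round(sum(weights[q] * score for q, score in qualities.items()), 2)
        for option, qualities in options.items()
    }
    best = max(scored, key=scored.get)
    print(f"{context}: {scored} -> prefer the {best}")
```

The arithmetic is trivial - the value is in forcing the priorities into the open, because the recommendation flips between the two contexts.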

Risks don’t have to be solely about design. The review can identify issues in areas such as:

  • business plan
  • process issues
  • requirements analysis
  • applied standards and technologies
  • design and implementation
  • test and monitoring
  • human resource issues

Be sure to also document the strengths of the architecture under review, as well as the aspects that could not be reviewed - for instance, because the review:

  • lacked documentation
  • lacked access to particular people
  • was too early in the project to consider them in detail.

Finally, you should include recommendations and an action plan. Identifying weaknesses is one thing, but stakeholders want suggestions for how to deal with them. Those recommendations don’t need to be a list of things that must be done. They can be an opening for discussions with the architecture team.

Review Workflow

Organise your review into the following phases:

  • review inception
  • the review itself
  • post review

The first phase results in agreement on the review scope, cost, duration, participants, etc. The second phase is the ... iterative process of discovering, capturing and comparing ASRs and the architecture description. The last phase concentrates on summarising and communicating the findings, as well as improving the review techniques and methods.

Methods and Techniques

The report provides an inventory of methods and techniques and I’ll go into them in subsequent articles. These will discuss:

  • Should you use a scenario-based or an experience-based method?
  • How will you identify those scenarios?
  • Do you need a full ATAM (Architecture Tradeoff Analysis Method) or should you use a lightweight variant?
  • Do you need quality-specific techniques such as Rate Monotonic Analysis to analyse issues with real-time systems? (See the sketch after this list.)
  • etc.
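
To give a flavour of what such a quality-specific technique involves, here is a minimal sketch of the classic Liu & Layland utilisation-bound test used in Rate Monotonic Analysis: a set of n independent periodic tasks is schedulable under rate-monotonic priorities if the total utilisation is no more than n(2^(1/n) - 1). The task set below is invented, and note that failing the bound is inconclusive rather than a definite “no” - it just means a more detailed response-time analysis is needed.

```python
# Rate Monotonic Analysis: Liu & Layland utilisation-bound check.
# Each task is (worst-case execution time C, period T); the task set
# below is invented for illustration.

tasks = [(1, 4), (2, 6), (3, 12)]  # (C, T) in milliseconds

n = len(tasks)
utilisation = sum(c / t for c, t in tasks)  # total CPU utilisation
bound = n * (2 ** (1 / n) - 1)              # Liu & Layland bound

print(f"U = {utilisation:.3f}, bound = {bound:.3f}")
if utilisation <= bound:
    print("Schedulable under rate-monotonic priorities (bound test passed).")
else:
    print("Bound test inconclusive - a response-time analysis is needed.")
```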

Pragmatics and People issues

The report finishes with a section on Pragmatics and People issues. These are extremely important, but I think this blog post has covered enough.

Wrap-up

It’s not uncommon in our industry to bump into a grey-haired software architect, mention some new solution or technique, and watch them roll their eyes and ramble on about how it’s just a rehash of something they’d used or read about decades before. And it’s true, many things that we treat as new in software architecture have been solved before - it’s just that those solutions are unnecessarily hard to find.

Techniques for software architecture reviews are one of those areas and this article has described how to structure a more systematic software architecture review based on many years of collective experience. Go and download the SARA report and build on the experience of others.

Thanks to Pär for reviewing a draft of this article.
