The following is the first in a series of blogs aimed at providing a concise comparison of the most salient features of the leading Agile scaling frameworks. Hopefully, this information will help you choose the framework that is most suitable for your organization and its business needs.

For large software development projects involving anywhere from one hundred to several hundred software developers, analysts, and testers, the inherent techniques of agile methodologies such as Scrum or XP prove inadequate for effectively managing the progress of such an enormous effort.

In this blog, we look at a quick comparison between two leading frameworks for scaling the Agile approach for large software development projects: Scaled Agile Framework (SAFe 4.5) and Large-Scale Scrum (LeSS).   Each has its strong points that may fit different organizational situations of large software product development.

Let’s get started: the “Big Picture”

See below the overview pictures for each of SAFe (www.scaledagileframework.com) and LeSS (less.works).

Right off the bat, you can see that while the SAFe framework appears more comprehensive, it also appears more process-heavy.  In fact, the inventors of the LeSS framework are proud that its acronym indicates less process, fewer artifacts, and fewer roles, remaining faithful to essentially the original Scrum roles of PO, SM and Team.

For example, SAFe offers the role of Product Manager, who is in charge of setting the priorities and overall scope of functionality to be delivered by a Program containing many Agile teams.  The Product Owner in SAFe performs the usual Scrum role for up to a couple of Agile Teams that typically work from a similar/shared backlog.

In contrast, LeSS offers the regular Scrum role of Product Owner (PO) for up to 8 Teams.  This is because in LeSS, the PO is not a liaison with the end-Customer: the Teams get to interact directly with the end-Customer to understand the details of the requirements, giving the PO the opportunity to focus on the overall priorities and scope for up to 8 Scrum Teams.

Hence, if an organization can afford the opportunity for the Agile Teams to interact directly with the end-Customer, LeSS can be a good fit in this particular aspect. Otherwise, SAFe can accommodate both the Team-direct and the liaison-PO situations.

SAFe 4.5 Framework

Organizational Structure

The inventors of LeSS very much believe that culture follows structure.  To that end, they offer LeSS not just as a practice to scale up the Scrum approach, but as a direct impetus for changing the organizational structure.  The picture below shows the organizational structure that LeSS advocates for up to 8 Scrum Teams working together to develop a software product, in order to provide what an Agile culture needs from an organization to succeed.

In this picture, you can see that there are no functional departments (e.g. development vs. testing) and no PMO.  Instead, in addition to the Scrum Teams, there is the Head of the Product Group, whom LeSS views (as it views all other managers, similar to the “Toyota Way”) as a teacher of those reporting to him/her; the Product Owner team, which provides a pool of POs for every Scrum (large- or small-scale) effort; and the Undone Department.

The latter is a curious thing.  In LeSS, a permeating theme is that the Teams are supposed to do everything needed to put a high-quality software product in the hands of end-Customers: from analysis to development to testing to packaging, all while coordinating with other Teams.  All of that is represented in the Definition of Done of the Teams.  But it may take the Teams a few years to mature to that set of comprehensive capabilities.  Hence the Undone Department is a placeholder for resources that fill in for whatever the Teams are yet to be able to do (e.g. DevOps) until the Teams mature.

In contrast, SAFe does not advocate drastic organizational change as emphatically as LeSS.  It presents its approach for adoption even with the current organizational structure, and lets the organization take its time deciding when it may want to restructure to be more efficient with Agile.  That’s not to say that LeSS presents its approach as an “all or nothing deal” – it just emphasizes structural change in the organization more strongly than SAFe does.

Differences in Planning

SAFe stipulates that sprints should be grouped in sets of 4-5 consecutive sprints, each set being called a Program Increment (PI).  And while the Teams (and the Product as a whole) are expected to demonstrate incremental achievements at the end of each sprint (i.e. completed Stories), it is at the end of a PI that complete “Features” of the software product are expected to be available.  SAFe, however, maintains the option of releasing on-demand any time during a PI with the Features, or even Stories, that happen to be complete at that point in time.

Planning in SAFe happens in a 2-day session at the beginning of each PI, in addition to the usual sprint planning at the beginning of each sprint.  In the PI planning session, all the Teams working together in what SAFe calls an Agile Release Train (ART) attend to commit to delivering a set of Features for the PI, and each Team presents a plan showing which stories (which are children of Features in SAFe) it plans to complete in each sprint of the PI.  Finally, in addition to the usual sprint demos and retrospectives, SAFe has an overall Inspect and Adapt workshop (analogous to the Sprint Demo and Retrospective) at the end of the PI, which includes a PI demo, quantitative measurement, and a problem-solving workshop that dives into deeper root-cause analysis than a normal Sprint Retrospective.

In contrast, LeSS remains faithful to just the usual sprints of Scrum, with the following additions:

  • Sprint Planning happens in 2 stages. The 1st stage is attended by 2 representatives of each Team, who do not usually include the Team’s Scrum Master.  This stage decides which items from the common Product Backlog each Team will develop.  It also includes cross-team estimation to unify the estimation numbers.  This is in contrast to SAFe, which suggests normalizing cross-Team estimates by equating a story point to a story that would take ½ day to code and ½ day to test.  The second stage of sprint planning is the same as sprint planning in regular Scrum.
  • Each sprint review is held with all Teams as a “science fair”, where each Team has a station to demonstrate its accomplishments for the sprint. Attending stakeholders can visit the stations in which they are interested.
  • The Sprint Retrospective is held in two stages: the first being the same as regular Scrum; the second is for the overall progress of the software product being developed by the Teams.

Portfolio Management

As represented in the top level of the SAFe “Big Picture” shown earlier, SAFe offers a comprehensive approach of prioritizing “projects” (represented as Epics or a set of related Epics in SAFe) and budgeting for them in an Agile manner.  In its latest version, SAFe 4.5, there is an additional, optional, level for Large Solutions (shown below the Portfolio level in the aforementioned diagram) – it is usually relevant to projects with hundreds, or thousands of participants comprising multiple Agile Release Trains.

In contrast, LeSS does not delve into Portfolio Management: it only offers techniques that can be compared to the Program and Team levels of SAFe.

2 Versions of LeSS

LeSS has two versions:  the one we saw earlier for 2 to 8 Teams, and LeSS Huge for more than 8 Teams, depicted below.

LeSS Huge is formed by having several regular LeSS frameworks working in parallel with each other.  The most notable addition in LeSS Huge is making each regular LeSS belong to a separate Requirements Area with its own Area Product Owner (APO) under the overall Product Owner.

If you were thinking “Well, isn’t an ART the same as a Requirements Area?”, you’d be partially right; one similarity is that the relationship of the Product Backlog to an Area Product Backlog is analogous to the relationship of a Portfolio Backlog to a Program Backlog, in the sense that items on the former are coarser grained than items on the latter.  One of the differences, however, is that a single APO still serves up to 8 Teams, whereas a SAFe PO covers far fewer Teams.

Other Differences between LeSS and SAFe

  • LeSS can appear to offer one piece of seemingly shocking advice (which is not offered by SAFe): Don’t scale! (But if you have to scale, use LeSS.) It advocates that even very large software products can be built more successfully by a relatively small Team of co-located master programmers and testers.  The LeSS authors cite at least one example on their website (less.works) of a huge software project that followed a tortuous path to completion.  When the overall project director was asked what he would do differently if he were to do it again, he said that he’d pick the 10 best programmers and have them build it all.  I can cite a more recent example with the Affordable Care Act, where a traditional government contractor put an enormous number of resources on the project, which failed miserably.  Later, about a dozen master developers and testers were put together in a house to work on fixing the ACA, which they did within a period of several months. (See http://www.theatlantic.com/technology/archive/2015/07/the-secret-startup-saved-healthcare-gov-the-worst-website-in-america/397784/)

  • Whereas SAFe is generally tool-neutral from a specific vendor perspective, it does encourage early and frequent automation as much as possible, utilizing the System Team to support that.
    LeSS, on the other hand, strongly recommends that you not use automated tools until after your organization becomes quite proficient with LeSS, opting instead to use manual aids like very big whiteboards and wall charts. Otherwise, LeSS declares, if you automate a mess, you get an automated mess.  And even after the Teams become proficient with LeSS, it recommends that you only use open source tools, which you can easily jettison if they don’t work out for you, without losing a high-dollar investment in them.
  • SAFe takes a more customary view of the role of Scrum Master. In SAFe, the SM is pretty much a permanent role with the Scrum Team and does a lot of intra-Team and inter-Team coordination.  In LeSS, the SM is first and foremost a teacher, who can fade away from the day-to-day Team dynamics once the Team becomes proficient in the Scrum and LeSS approaches.
  • In SAFe, Epics, Capabilities, Features and Stories are explicitly handled as integral parts of the SAFe backlogs. LeSS, on the other hand, only talks about coarser- vs. finer-grained Backlog Items, staying faithful to Scrum by treating Epics, Features and Stories as instruments of XP, which is not part of Scrum proper.

Conclusion

The quick comparison between LeSS and SAFe in this blog is by no means comprehensive.  Yet it shows SAFe to be more wide-ranging in offering processes and roles for large-scale Agile efforts, from the highest portfolio levels down to the individual Agile Team, while making those roles and levels configurable according to the organization’s needs.  Furthermore, for a typical traditional large organization it is perhaps more palatable to begin adopting SAFe than LeSS, since the latter strongly advocates some major changes to the structure of the organization as early as possible in the adoption of LeSS.


The As-Is DevOps Value Stream Mapping

Value Stream Mapping is a crucial step in assessing an organization’s DevOps capability. The objective of mapping a DevOps value stream is to eliminate wasteful waiting and improve the completeness and accuracy of all activities in the value stream. We create a value stream map of the software development lifecycle early in any DevOps engagement.

To understand and expose those wastes and inaccuracies, the first step is, naturally, to map the as-is state of your organization’s software development and operations.  Such mapping typically starts with a two-day session involving business and IT staff to capture the major activities involved in software development and operations.  The figure below shows an example of a value stream map (VSM) of as-is DevOps activities.

Legend:
%C/A: For a given VSM step, this is the percent of Complete/Accurate work items received from the previous step in the VSM.
LT: Lead Time for a given VSM step, i.e. from the instant a work item leaves the previous step to the instant it leaves this step towards the next step.  LT includes both idle time and time during which the item is being productively processed.
VA: Value Added Time, which is only the time during which the item is being productively processed.
Determining the above metrics in the as-is map will guide the desired improvements later when mapping the to-be DevOps process.
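
To make the three metrics concrete, here is a small, illustrative Java sketch. The step names and numbers are invented for illustration (not taken from any real VSM); it computes the total lead time, the flow efficiency (total VA divided by total LT), and the rolled %C/A, i.e. the chance that a work item passes through every step complete and accurate.

import java.util.Arrays;
import java.util.List;

class VsmStep {
    final String name;
    final double percentCompleteAccurate; // %C/A as a fraction, e.g. 0.80
    final double leadTimeDays;            // LT
    final double valueAddedDays;          // VA

    VsmStep(String name, double percentCompleteAccurate, double leadTimeDays, double valueAddedDays) {
        this.name = name;
        this.percentCompleteAccurate = percentCompleteAccurate;
        this.leadTimeDays = leadTimeDays;
        this.valueAddedDays = valueAddedDays;
    }
}

class VsmMetrics {
    public static void main(String[] args) {
        List<VsmStep> asIs = Arrays.asList(
                new VsmStep("Develop", 0.80, 5.0, 2.0),
                new VsmStep("Test", 0.60, 8.0, 1.5),
                new VsmStep("Deploy", 0.90, 3.0, 0.5));

        double totalLeadTime = 0, totalValueAdded = 0, rolledCompleteAccurate = 1;
        for (VsmStep step : asIs) {
            totalLeadTime += step.leadTimeDays;
            totalValueAdded += step.valueAddedDays;
            rolledCompleteAccurate *= step.percentCompleteAccurate;   // work must be complete/accurate at every step
        }

        System.out.printf("Total LT: %.1f days%n", totalLeadTime);                                     // 16.0 days
        System.out.printf("Flow efficiency (VA/LT): %.0f%%%n", 100 * totalValueAdded / totalLeadTime); // 25%
        System.out.printf("Rolled %%C/A: %.0f%%%n", 100 * rolledCompleteAccurate);                     // 43%
    }
}

Running the same calculation against the to-be targets later shows how much of the improvement comes from removing idle time versus raising the %C/A of individual steps.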

The following are the types of questions that would help a group of business and IT staff convened to produce an as-is DevOps VSM. The questions are not meant to be walked through in strict order; they can be revisited back and forth during the session of drawing the as-is VSM.

  1. What are the main steps involved in the current process of software development and operations? We need to look at the factors that determine the boundaries between those steps, including handoffs, queues, and organizational mandates.
  2. Who performs each step? Include role names and names of some of the specific people who perform the step.
  3. What is the %C/A for each step? For each step in the Value Stream Map, capture the percentage of work items that arrive at the step being complete and accurate. To get a realistically representative value of this metric, you may have to capture an average of it over several weeks, or even several months.
  4. What is the LT for each step? As with all metrics of Value Stream Mapping, to get a realistically representative value of this metric, you may have to capture an average of it over several weeks, or even several months.
  5. What is the VA time for each step? The VA excludes waiting time (e.g. sitting in a queue) or any other non-productive time experienced by the work item.
  6. What tools do you currently use for each step? Answering this question would help uncover manual steps that can be automated, determine opportunities for integrating various tools, and improve efficiency and accuracy of automated steps.

The To-Be DevOps VSM

Once you have the as-is DevOps VSM mapped, the to-be DevOps VSM is driven by the following:

  • How can we significantly increase the %C/A for each activity in our as-is VSM?
  • How can we dramatically reduce, or even eliminate the non-productive time in the LT of each as-is activity?
  • How can we improve the performance of the VA in each as-is activity?

Answering the above questions can lead to drawing a new, to-be VSM with realistic, but sufficiently challenging, new targets for each of the above three metrics: %C/A, LT, and VA for each activity in the new, to-be VSM.  Such a to-be VSM will usually encompass activities that do not correspond one-to-one with the activities on the as-is VSM.  The following is an example of such a to-be DevOps VSM:

Mapping Your DevOps Value Stream

Our Blue Agility DevOps coaches can help your organization with:

  • Guidance for detailed answers to the above-listed questions for drawing the as-is DevOps VSM.
  • Templates for capturing comprehensive information for each as-is / to-be activity.
  • Assessment to determine which of the activities shown in the above to-be example VSM are suitable for your organization.

_________________________________________________________
Ali is a senior consultant at Blue Agility. He performs strategic services to key customers in order to accelerate achievement of business goals by leveraging the benefits of process mentoring and automation through tight integration with business critical processes. He works with customers to identify the Key Performance Indicators (KPIs) that are critical to business success. He is SPC4 certified. 



The idea that, in an agile approach, software design “emerges” from code as it gets written and refactored can be quite foreign to a traditional developer. To facilitate an understanding of such emergent design, I present here an example of developing software functionality, once using the traditional design approach, and again using the agile approach.

What is meant by “traditional” in this context is not an industry-standard best practice, but rather what is empirically observed to be the approach followed in most non-agile software development.  Furthermore, although there are wide variations within the traditional as well as the agile approaches, the explanations in this document are aimed to be sufficiently representative of typical steps of each of the two approaches.

The discussion here does not purport to show preference between the traditional and the agile approach to design.  The choice between the two approaches is normally driven by several factors, not the least of which is the culture of the organization that is developing the software.  An experienced coach can help a client organization customize a suitable approach, which is likely to be a hybrid of the traditional and agile approaches.

The Traditional Approach to Design

Traditionally, design models are captured in a document such as the Rational Unified Process’ (RUP’s) Software Architecture Document (SAD).  Capturing the design as such takes place before production code is written.  This holds true even in the traditional iterative approach, not just the Waterfall approach, since traditional iterations are usually treated as mini-waterfalls.

To demonstrate, consider an application for managing student records in a university.  Let’s assume that a high-priority scenario of a use case titled “Print Student Report Cards” has been chosen for implementation.  What follows is a simplification of a detailed textual description of the scenario, which is completed and provided to architects and designers.

Scenario: Print Student Report Cards

  1. {Start of Use Case} The Registrar requests the system to print a report card for each student that has been registered for the semester that just ended.
  2. {Retrieve Student’s Grades} For each student registered for the past semester, the system retrieves the information to be printed for the student’s report card as shown in the report mockup attached below.
  3. {Calculate GPA}
    1. For each student who has taken courses as a “Regular” student, the system assigns the following points to each course based on the letter grade, adds up the points, divides by the number of courses the student took in the semester, and prints the GPA on the student’s report card.[1]
      1. A = 4 points
      2. B = 3 points
      3. C= 2 points
      4. D= 1 point
      5. F = 0 points
    2. For each student who has taken courses as an “Honors” student, the system raises the grade by one level (e.g. the system counts a “C” grade for an Honors student as a “B” grade), and proceeds to calculate and print the GPA as specified above
  4. {End of Use Case} Once the system prints the report cards for all students registered for the semester, the use case ends.

 

Mockup of Report Card

 

Semester Date (e.g. Fall 2008)

Student Name                                                                                              Date Printed

Student ID Number

Student Address

Name of Student’s Advisor

 

Course ID                 Course Title                                      Grade             Credits

——                          ———————————              –                   –

——                          ———————————              –                   –

——                          ———————————              –                   –

——                          ———————————              –                   –

——                          ———————————              –                   –

 

GPA for the semester: ___

Total credits for the semester: ___

Cumulative Credits: ___

 

After reading the above scenario, and other scenarios, the Architect in charge of this application draws an analogy between the relationship of the type of student (Regular or Honors) to the corresponding grading scheme, on the one hand, and the following relationships, which the Architect has handled in his/her previous experience developing other applications, on the other hand:

Current Application

Student <———————–> Grading scheme (Regular or Honors)

Previous Analogous Applications

Hotel Guest <——————> Reservation priority (Silver, Gold or Platinum)

Employee <——————–> Compensation (Hourly, Salaried, or Commissioned)

In that previous, analogous experience, the Architect used the Strategy pattern to account for the variations in the above reservation priorities and employee compensation.  Recognizing the equivalence between that previous experience and the current variations in the Grading scheme, the Architect captures the Strategy pattern in the Software Architecture Document (SAD) as one of the mechanisms to utilize in building the current application, as follows:

[Figure: the Strategy pattern as captured in the SAD]

A Designer/Developer would then apply the above pattern to the Student and the Grading scheme resulting in the following design:

[Figure: the Strategy pattern applied to Student and the Grading scheme]

And so it goes in the traditional approach: first the requirements are detailed, then the architect decides which mechanisms and patterns shall be applied, then the designers/developers apply those mechanisms and patterns in designing the code, and then the code is written, followed traditionally by unit tests performed by the developers and finally integration tests performed by testers at the end of the iteration.  As indicated earlier, the above holds true not only in the Waterfall approach, but even in the traditional iterative approach, where each iteration is treated as a mini-waterfall for the set of scenarios scheduled to be implemented in that iteration.

 

The Agile (Emergent) Approach to Design

In this section, we consider how the same scenario of functionality would be implemented using an agile approach.  To begin with, instead of the detailed scenario description and report mockup documented in the above traditional approach, an agile team would probably have one or more user stories covering the equivalent functionality of that scenario.  One such user story is as follows:

User Story: Print Student Report Cards

As the Registrar, I want the system to calculate the GPA using regular grading (A=4, B=3, C=2, D=1, F=0), or, if the student is taking a course as an “Honors” student, to first raise the corresponding grade by one letter grade (e.g. a C counts as a B, etc.) before calculating the GPA.

(The Conditions of Satisfaction are skipped here for the sake of brevity.)

In a mature agile approach, developing the above story would be test-driven, i.e. would start by writing a test case to test the GPA calculation before writing the code to implement that calculation.  The following code snippet[1] is an example[2] of such a unit test.

public void testCalculateGPA() {                                    //1
    boolean honorIndicator = false;
    Student student = new Student("Jack Smith", honorIndicator);    //2
    assertEquals(0.0, student.getGPA());                             //3
    student.addGrade("A");
    assertEquals(4.0, student.getGPA());
    student.addGrade("B");
    assertEquals(3.5, student.getGPA());
    student.addGrade("C");
    assertEquals(3.0, student.getGPA());
    student.addGrade("D");
    assertEquals(2.5, student.getGPA());
    student.addGrade("F");
    assertEquals(2.0, student.getGPA());
}

Before we proceed any further, and for those of us who are not Java masters, here’s a quick explanation of the above test code.  Line 1 begins the definition of a test method, written within a class (not shown here) that extends (i.e. inherits from) the JUnit testing framework.  This framework provides many assertion methods, such as the assertEquals used in the body of this method.

Line 2 creates a variable, student, and initializes it with a new Student with the name Jack Smith, indicating that Jack is not an Honors student. Since no grades have been added yet to the newly created student, the first assertEquals on Line 3 tests that the GPA is 0.0. Subsequent lines add several grades to the student, each time testing that the GPA has the right value after adding the grade.

Note that since the above test code is written before the actual application code, it won’t immediately compile, because, for example, the Student class has not been created yet.  Some purist adherents to Test-Driven Development (TDD) believe that you should first stub the application classes (like Student) and stub the application methods (like getGPA) just to have the test compile even if the test fails the first time you run it with such stubbed, skeletal code.  In this TDD discussion, we will simplify by directly considering code that would make the above test pass.

The above test method, and several others that the developer writes to test the code of the Student class, not only assure that the code maintains high quality, but also serve as documentation of the public methods, i.e. the functionality, or behavior, of that class.  Once the developer “specifies” the class behavior by writing such test methods, the developer then proceeds to write the code for the Student class to implement that behavior as follows:

  1. The developer codes the Student class, providing a way for the class to keep track of the student’s grades (e.g. some kind of Java Collection).
  2. The developer codes the addGrade method to add a grade to the above-mentioned Collection of grades.
  3. The developer codes the getGPA method, which would go through the Collection of grades, totaling then averaging the corresponding number of points for each grade and returning the result.

The developer runs the test method we saw earlier, and fixes any detected bugs and reruns the test, until the test method produces no errors.
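
To make that sequence concrete, here is a minimal sketch of what the Student class might look like at this point. The class name, constructor signature, and the addGrade/getGPA methods come from the test above; the internal pointsFor helper and the use of an ArrayList are just one plausible implementation, not code from the original article.

import java.util.ArrayList;
import java.util.List;

public class Student {
    private final String name;
    private final boolean honorIndicator;   // not used yet; Honors handling comes with the next test
    private final List<String> grades = new ArrayList<String>();

    public Student(String name, boolean honorIndicator) {
        this.name = name;
        this.honorIndicator = honorIndicator;
    }

    public void addGrade(String grade) {
        grades.add(grade);
    }

    public double getGPA() {
        if (grades.isEmpty()) {
            return 0.0;                      // no grades yet, as the first assertEquals expects
        }
        double totalPoints = 0;
        for (String grade : grades) {
            totalPoints += pointsFor(grade);
        }
        return totalPoints / grades.size();
    }

    // Regular grading: A=4, B=3, C=2, D=1, F=0.
    private int pointsFor(String grade) {
        if ("A".equals(grade)) return 4;
        if ("B".equals(grade)) return 3;
        if ("C".equals(grade)) return 2;
        if ("D".equals(grade)) return 1;
        return 0;
    }
}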

The developer may then proceed to write a similar test method, and then application code, for the Honors student.

public void testCalculateHonorsGPA() {                              //1
    boolean honorIndicator = true;
    Student student = new Student("Jack Jones", honorIndicator);    //2
    assertEquals(0.0, student.getGPA());                             //3
    student.addGrade("A");
    assertEquals(5.0, student.getGPA());
    student.addGrade("B");
    assertEquals(4.5, student.getGPA());
    student.addGrade("C");
    assertEquals(4.0, student.getGPA());
    student.addGrade("D");
    assertEquals(3.5, student.getGPA());
    student.addGrade("F");
    assertEquals(3.0, student.getGPA());
}

The above test method, testCalculateHonorsGPA(), is very similar to the test method we saw earlier, except for setting the honorIndicator to true and testing for higher GPAs.  The developer then augments the code he/she wrote earlier for the Student class to account for the calculation of a GPA for an Honors student, and runs/reruns the two test methods until all assert tests pass.

After augmenting the code to accommodate Honors students, the developer notices that the code of the Student class now looks unwieldy and contains duplication, and hence needs to be refactored.  He/she discusses with another team member (e.g. his/her partner, if the team is utilizing pair programming) a couple of possibilities for refactoring the code.  They draw a couple of alternatives on a whiteboard as shown below:

[Whiteboard sketch: two refactoring alternatives]

Before, or after, the team members implement any of the above refactoring alternatives, they check on the following issues with the Product Owner (if they are using the Scrum approach):

  1. Should the code accommodate changes of a student from Regular to Honors, or the reverse?  This is particularly important to determine here, because changing an object from one type to a different type is difficult, and the inheritance used in one or both of the above alternatives would only compound that difficulty.
  2. Could additional future schemes (other than Regular vs. Honors) affect the way the GPA is calculated?

The team members find out that the answer to both of the above questions is “yes”.  They conclude that instead of the above two alternatives, they should refactor the code to a design pattern that keeps the structure and behavior of the Student class unchanged when the GPA grading scheme changes.  The design pattern should also facilitate the addition of new grading schemes in the future.  After consulting resources like the GoF “Design Patterns” book or online references, or simply asking a team member or a consultant who is more experienced in design, the team members choose the Strategy pattern, which they sketch on a whiteboard as follows:

[Whiteboard sketch: the Strategy pattern applied to Student and the Grading scheme]
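
A rough Java sketch of where such a refactoring might land is shown below. The GradingScheme, RegularGrading, and HonorsGrading names are illustrative choices for this sketch rather than code from the original article; the point is that Student now delegates the point calculation to a strategy object, so both test methods shown earlier should still pass unchanged.

import java.util.ArrayList;
import java.util.List;

interface GradingScheme {
    int pointsFor(String grade);
}

class RegularGrading implements GradingScheme {
    public int pointsFor(String grade) {
        if ("A".equals(grade)) return 4;
        if ("B".equals(grade)) return 3;
        if ("C".equals(grade)) return 2;
        if ("D".equals(grade)) return 1;
        return 0;
    }
}

class HonorsGrading implements GradingScheme {
    private final GradingScheme regular = new RegularGrading();

    public int pointsFor(String grade) {
        // Honors raises each grade by one level, i.e. one extra point (a C counts as a B).
        return regular.pointsFor(grade) + 1;
    }
}

class Student {
    private final String name;
    private final GradingScheme gradingScheme;
    private final List<String> grades = new ArrayList<String>();

    public Student(String name, boolean honorIndicator) {
        this.name = name;
        this.gradingScheme = honorIndicator ? new HonorsGrading() : new RegularGrading();
    }

    public void addGrade(String grade) {
        grades.add(grade);
    }

    public double getGPA() {
        if (grades.isEmpty()) return 0.0;
        double totalPoints = 0;
        for (String grade : grades) {
            totalPoints += gradingScheme.pointsFor(grade);
        }
        return totalPoints / grades.size();
    }
}

Adding a new grading scheme later (the second question to the Product Owner) would then mean adding another GradingScheme implementation, with no change to the structure or behavior of Student.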

 

Once the code is refactored according to this new design pattern, the developer runs/reruns the test methods we saw earlier until all tests again pass without errors.

And so it goes in the agile approach: the Strategy design pattern on which the developers finally settled emerged by growing the code incrementally through one or more refactorings.  It was not dictated by an Architect prior to writing the code, nor was it “pre-ordained” by a Designer.  Note that this is not to preclude using an Architecture discipline or using design tools in an agile approach.  Rather, the focus here is just to show the differences in how design is arrived at in the agile vs. the traditional approach.

It is worthwhile to note here that once the developers finish coding the core of the user story, i.e. the varying calculation of the GPA, they may go back to the Product Owner to ask for more details on the requirements of printing the student report card.  Together, they would sketch on a whiteboard what the report card should look like, as follows:

[Whiteboard sketch: the report card layout]

Coming up with the report card sketch after a significant amount of testing, coding, and design has been done for this functionality demonstrates how the various disciplines, or “practices” (in this case, the requirements discipline alongside the others), proceed in parallel in an agile Sprint (or iteration), as opposed to a waterfall-based sequence of activities in which each activity must be completed before the next one begins.

Conclusion

An optimal approach need not be a choice of one of the above two approaches to the exclusion of the other – in fact, organizations often end up customizing a suitable approach for their needs that combines aspects from both as follows:

The Main Traditional Advantage:
Establishing an Architecture Runway that guards against massive rework downstream.
Its Disadvantage:
Attempting to impose design upfront that may later prove suboptimal, unnecessarily complicated, and unsuitable to the nuances of the requirements.

The Main “Emergent” Advantage:
Letting the code tell you what the optimal design should be guards against force-fitting or over-complicating the design, which would deteriorate the robustness of the system.
Its Disadvantage:
Failing to get early decisions on fundamental architectural aspects, thereby necessitating major subsequent rework.

An experienced coach can bring the best of both approaches to a client.  For example, in my experience, it was necessary for a particular project to decide upfront whether web services would be accessed via JMS or HTTP, since the decision affected major aspects of the architecture.  On the other hand, again in my experience, it was much more suitable to let emergent design show whether a session bean should be stateful or stateless.

_________________________________

[1] The code snippets used here are similar to those discussed in the book “Agile Java” by Jeff Langr.

[2] This test code example is simplified to accommodate readers of various experience with Java.  For example, we are ignoring internal numerical inaccuracies caused by binary arithmetic, and also ignoring more optimal refactoring of this test code example.

[1] As a simplification, we are assuming here that the student takes all of his/her courses either as a Regular or an Honors student without mixing Regular and Honors courses.


A common myth about agile software development is that it doesn’t produce documentation.  This misconception likely stems from the fact that most agile methods avoid writing copious requirement documentation before implementation.  The fact of the matter is, most agile methods are keen on documenting software by growing “as-built” documents as each increment of software is completed.

In this blog, we’ll look at the most important differences between traditional[1] and agile software documentation.  The differences are not only in the types of documents, but also in the timeline of when the documentation is produced.  In other words, for some types of documents you may not be able to tell by looking at the final document whether it was written in an agile or a non-agile project – the difference is in the progression of writing the documents.

“Documents” in this blog refer to the written artifacts that persist after the end of the project.  Hence, the documents of concern here do not include temporary artifacts like stories, which only drive the development during the agile project, or retrospective records, which only drive the ongoing improvement of how the agile team works.  Rather, the focus here is on how the agile approach “grows” as-built documents, such as operations manuals, and why the agile approach forgoes to-be-built documents such as traditional requirements and design documents.



Differences in Types of Documents

Non-agile projects tend to produce copious documentation upfront, before any production code is written.  Such documents include, but are not restricted to:

  • Various types of requirement documents, such as Stakeholder needs, Product Requirement Documents, Marketing Vision, Business Requirements Documents, System/Software Requirements Documents, Use Case Descriptions, Non-Functional Requirements…etc.
  • Architecture/Design documents/models, such as Technical Specification, Software Architecture Document, and the like.  These documents can include Sequence Diagrams, Communication Diagrams, Entity Relation Diagrams…etc.

Agile projects avoid such large documentation efforts at the start of the project.  This is because the Agile approach considers working software to be the highest-fidelity indicator of progress, so agile projects focus from the very beginning on implementing and delivering working functionality, rather than on writing traditional requirements and design documents.

Yet while agile teams do not produce traditional requirements and design documents at the beginning of their project, they do produce documents that are similar, perhaps even identical, to other types of traditional documents.  These include user manuals, operations manuals, system manuals, maintenance manuals, etc.  So although user and technical stories are, more often than not, treated as temporary work products that drive the Agile development of software but do not persist once the software is released, Agile definitely does produce documentation that records the software functionality.  Furthermore, when newcomers to Agile ask how you can tell which functionality you’ve implemented if you don’t have written, detailed requirements, the answer lies in the documents that Agile does produce: “as-built” rather than “to-be-built” documentation.  Sometimes, those documents may even be named similarly to design documents (e.g. Technical Specifications), with one BIG difference: they are produced side-by-side with the incremental completion of working software, as explained below.


 

Differences in the Progression of Documentation

In addition to focusing very early on requirements and design documents, non-agile projects tend to focus very late on instructional and maintenance documents.  The latter include, but are not restricted to, user manuals, operations manuals, system manuals, maintenance manuals, etc.  This serialization stems from the very nature of traditional projects, which focus on finishing one type of software development activity before starting the next.

Agile projects, on the other hand, tend to perform nearly in parallel all that is required to complete an increment of working software every sprint (i.e. iteration), including documentation.  Hence, as each chunk of agilely developed software functionality is completed, i.e. readied for demonstration and potential shipping, the documentation (e.g. the operations manual) is updated with new or modified sections capturing the newly developed incremental functionality.  Each such agilely written document thus remains a living document until the software is released by the developing organization.

For example, if the agile team is using stories and breaking the stories into tasks, then for each story there is a task (in addition to the coding, testing…etc. tasks) that accounts for documenting the functionality accomplished in that story.  The documentation task can be owned and performed by any team member, or the team may choose to tap a technical writer as an extended resource to write the documentation.  No story can be considered complete without the documentation task getting completed.

In conclusion, writing copious upfront requirements and design documentation, and postponing instructional and maintenance documentation until late in the project, may be hard habits to break.  A good coach can play a central role in facilitating the transformation to producing software documents in an agile manner.

 


[1] “Traditional” here includes the Waterfall approach, as well as iterative projects where each iteration is treated as a mini-Waterfall.


Scrum, XP and other agile methodologies have proved over the past two decades their efficacy in markedly increasing the effectiveness and efficiency of software development teams. However, it turns out that those agile methodologies have to be augmented by other measures to scale up properly to large projects, where you have up to 100 or more software developers, analysts and testers. As Dean Leffingwell, the inventor of the Scaled Agile Framework® (SAFe™) has indicated, for large projects, “Scrum teams may be sprinting, but the system [they are building], is [often] not sprinting”.[1]

Large organizations have made various attempts to augment agile methodologies to scale them up to large projects. Such attempts were met with varying degrees of success, until more systematic scaling frameworks were introduced to cull the best large-scale agile practices. Those frameworks have different, often complementary strong points that allow an enterprise to scale up the agile approach without having to “reinvent the wheel.”

Let’s look at a quick comparison between some of the strong points of two leading frameworks, SAFe™ and Disciplined Agile Delivery (DAD), that aim to scale up one of the most popular Agile methodologies, Scrum, to bolster it for large projects.

What DAD brings to the table:


Inception Phase

Once a project is selected, this DAD phase allows for requirements to be understood, traceability between various levels of requirements (if needed) to be established, and Stories to be written and planned for the coming iterations of the project.  While SAFe™ does stipulate sufficient preparation for what SAFe™ calls “PSI (Potentially Shippable Increment) Planning”, the Inception Phase of DAD does more justice to preparing the backlog needed for the project’s iterations (i.e. sprints).

Transition Phase

At the end of the project’s regular sprints, this DAD phase allows for the hardening of the project’s deliverables with various types of testing, including User Acceptance, Regression, Smoke, Performance, Stress, Exploratory and other types of testing.  While SAFe™ does stipulate Acceptance Driven Testing and the preparation of the system by a Release Team prior to delivery, the Transition Phase of DAD more readily allows for all of the finishing testing of the system.

What SAFe™ brings to the table:


Portfolio Management

SAFe™ guides the enterprise through deriving Epics from the enterprise’s Investment Themes, prioritizing the Epics, and implementing them in one or more “Agile Release Trains”.  While DAD does have a milestone called “Project Selected” at the beginning of its Inception Phase, it does not provide sufficient guidance as to how projects get selected from a Portfolio.

PSI Planning

This is a 2-day session that occurs once every 5 sprints.  It concisely guides Scrum teams to enact their shared business direction and architectural vision, determine groups of work items that are achievable in each sprint, and coordinate with external team representatives as needed.  While DAD clearly recognizes all of those tenets of PSI Planning, it lacks the unique, concise mechanism that SAFe™’s PSI Planning provides to concretely enact the above.

The HIP Sprint

This is a sprint that occurs at the end of each PSI; HIP stands for Hardening, Innovation and Planning. DAD seems conflicted on something like the HIP sprint: it discourages it lest it dilute quality maintenance in the other sprints, yet DAD still admits that it can be necessary to inject “Transition” iterations into the solution plan as often as needed. SAFe™ provides concise guidance for the HIP sprint, while emphasizing quality in every sprint.

Coordinating Business vs. Technical Priorities

Via the technique of “Capacity Allocation”, SAFe provides a way to separate concerns, such that we can deliver the right mix of adding business value and resolving technical risk.  While DAD recognizes the necessity of striking such a tradeoff, it lacks an approach for resolving such priority conflicts.

The System Team

This is a Scrum Team that lags the other scrum teams on a project by one sprint, receives their output, and performs end-to-end integration and other types of testing on the system.  SAFe’s System Team systematically actualizes DAD’s special emphasis on parallel integration testing – DAD, on the other hand, presents no clear, systematic way for this other than general recommendations.

The RPM Metric

The Release Predictability Metric (RPM) is tracked at the beginning and end of each PSI.  It offers a systematic approach to quantifying stakeholders’ qualitative valuations of deliverables.  DAD recognizes that such qualitative metrics are often the deciding factors, but their collection tends to be manual, expensive, inconsistent, infrequent, and lacking in timeliness.  DAD, however, does not offer a clear approach to overcome those difficulties.
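
As a rough illustration of the idea, and only a sketch rather than an excerpt from SAFe™’s own materials, release predictability can be expressed as the business value stakeholders rate as actually achieved at the end of the PSI, divided by the business value they planned at its beginning, summed across the objectives. The class and field names below are hypothetical.

import java.util.Arrays;
import java.util.List;

// One PI objective, with the business value planned at the start of the PSI
// and the value stakeholders assign to it at the end.
class PiObjective {
    final String name;
    final double plannedBusinessValue;
    final double actualBusinessValue;

    PiObjective(String name, double plannedBusinessValue, double actualBusinessValue) {
        this.name = name;
        this.plannedBusinessValue = plannedBusinessValue;
        this.actualBusinessValue = actualBusinessValue;
    }
}

class ReleasePredictability {
    // Achieved business value as a percentage of planned business value.
    static double rpmPercent(List<PiObjective> objectives) {
        double planned = 0, actual = 0;
        for (PiObjective objective : objectives) {
            planned += objective.plannedBusinessValue;
            actual += objective.actualBusinessValue;
        }
        return planned == 0 ? 0 : 100 * actual / planned;
    }

    public static void main(String[] args) {
        List<PiObjective> objectives = Arrays.asList(
                new PiObjective("Self-service enrollment", 10, 8),
                new PiObjective("Payment gateway upgrade", 8, 8),
                new PiObjective("Reporting dashboard", 7, 5));
        System.out.printf("RPM: %.0f%%%n", rpmPercent(objectives));   // 84%
    }
}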

As with every discipline in life, no one agile-scaling framework can be expected to perfect every needed measure. An agile coach who is well-versed in those frameworks can help a client organization benefit from the best of both worlds.


[1] Private communication with Mr. Leffingwell.


An agile coach is a technical professional, so why should she/he worry about politics at the client’s organization? “Politics” here is meant to include culture, egos, prejudices, hidden agendas, personal priorities and the like. Some of my technical colleagues often focus on the technological aspects of client engagements to the exclusion of other, human-relations aspects. Lack of appreciation of those soft skills can cause some of the best agile coaches to commit avoidable errors.


Politics alone will not make us successful – ignoring politics, however, is a liability that is bound to hurt us.  Here are some tips that I found to be helpful in navigating the political currents at various levels of a client organization:

High-level Champion

Addressing the client’s problems is how we build a value proposition. However, the client is sometimes reluctant, for political reasons, to cooperate with us in detecting their problems. To address that, it helps to have a high-level champion in the organization who is secure enough to help us identify the organization’s problems, and hence articulate the value brought by an agile approach to software development. The security of such a champion may be due, for example, to seniority, or to having recently joined the organization specifically to address such problems.

Operational Champion

A high-level champion is not likely to be accessible to help overcome daily hurdles.  What we need as well is a lower-level, “operational champion” who is savvy and willing to help us navigate the political currents at the client’s organization. She/He can help involve the right resources (i.e. right in competence and attitude) in projects using our solutions, elicit support from those with clout at the client’s organization, and raise successes we achieve at the client’s organization to the highest possible level of visibility.

Proactive Adjustment

In a politically charged environment we often face a dilemma: on the one hand, we need to proactively tell the client what changes they need to implement to achieve the results they desire.  On the other hand, political considerations may stipulate that no finger-pointing is allowed, usually not even to improve the results.  So we should try to influence the choice of early client projects to have participants with the right attitude, i.e. those who welcome proactive advice.  The success of such projects helps to create momentum within the client’s organization to sway the nay-sayers.  If we fail to win over the project participants, we may try, as artfully as possible, to escalate the problem to the corporate champion(s) to seek their help for a satisfactory resolution.


We may very well need to effect a cultural shift in the client organization to change traditional project participants into agile go-getters.  Start effecting that shift with the most sympathetic members of the client’s staff, while closely coordinating with the operational champion.

Without those soft skills, navigating the client’s political currents can be more treacherous than it needs to be.