The following is the first in a series of blogs aimed at providing a concise comparison of the most salient features of the leading Agile scaling frameworks. Hopefully, this information will help you choose the framework that is most suitable for your organization and its business needs.

For large software development projects involving anywhere from a hundred to several hundred software developers, analysts and testers, the inherent techniques of agile methodologies such as Scrum or XP prove inadequate for effectively managing the progress of such an enormous effort.

In this blog, we look at a quick comparison between two leading frameworks for scaling the Agile approach for large software development projects: Scaled Agile Framework (SAFe 4.5) and Large-Scale Scrum (LeSS).   Each has its strong points that may fit different organizational situations of large software product development.

Let’s get started: the “Big Picture”

See below the overview pictures for each of SAFe (www.scaledagileframework.com) and LeSS (less.works).

Right off the bat, you can see that while the SAFe Framework appears more comprehensive, it also appears more process-heavy.  In fact, the inventors of the LeSS framework are proud of its acronym indicating less process, fewer artifacts, and fewer roles, remaining faithful to having mainly the original Scrum roles of PO, SM and Team.

For example, SAFe offers the role of Product Manager, who is in charge of setting the priorities and overall scope of functionality to be delivered by a Program containing many Agile teams.  The Product Owner in SAFe performs the usual Scrum role for up to a couple of Agile Teams that typically work from a similar/shared backlog.

In contrast, LeSS offers the regular Scrum role of Product Owner (PO) for up to 8 Teams.  This is because in LeSS, the PO is not a liaison with the end-Customer: the Teams get to interact directly with the end-Customer to understand the details of the requirements, giving the PO the opportunity to focus on the overall priorities and scope for up to 8 Scrum Teams.

Hence, if an organization can afford the opportunity for the Agile Teams to interact directly with the end-Customer, LeSS can be a good fit in this particular aspect. Otherwise, SAFe can accommodate both the Team-direct and the liaison-PO situations.

SAFe 4.5 Framework

Organizational Structure

The inventors of LeSS very much believe that culture follows structure.  To that end, they offer LeSS not just as a practice to scale up the Scrum approach, but as a direct impetus for change of organizational structure.  The picture below shows the organizational structure LeSS advocates for up to 8 Scrum Teams working together to develop a software product, in order to provide what an Agile culture needs from an organization to succeed.

In this picture, you can see that there are no functional departments (e.g. development vs. testing) or a PMO.  Instead, in addition to the Scrum Teams, there is the Head of the Product Group, whom LeSS views (as it views all other managers, similar to the “Toyota Way”) as a teacher of those reporting to him/her; the Product Owner team, which provides a pool of POs for every Scrum (large- or small-scale) effort; and the Undone Department.

The latter is a curious thing.  In LeSS, a permeating theme is that the Teams are supposed to do everything needed to put a high-quality software product in the hands of end-Customers: from analysis to development to testing to packaging, all while coordinating with other Teams.  All of that is represented in the Definition of Done of the Teams.  But it may take the Teams a few years to mature to that set of comprehensive capabilities.  Hence the Undone Department is a placeholder for resources that fill in for whatever the Teams are yet to be able to do (e.g. DevOps) until the Teams mature.

In contrast, SAFe does not advocate drastic organizational change as emphatically as LeSS.  It presents its approach for adoption even with the current organizational structure, and lets the organization take its time deciding when it may want to restructure to be more efficient with Agile.  That’s not to say that LeSS presents its approach as an “all or nothing deal” – it just emphasizes structural change in the organization more strongly than SAFe does.

Differences in Planning

SAFe stipulates that sprints should be grouped in sets of 4-5 consecutive sprints, each set being called a Program Increment (PI).  And while the Teams (and the Product as a whole) are expected to demonstrate incremental achievements at the end of each sprint (i.e. completed Stories), it is at the end of a PI that complete “Features” of the software product are expected to be available.  SAFe, however, maintains the option of releasing on-demand any time during a PI with the Features, or even Stories, that happen to be complete at that point in time.

Planning in SAFe happens in a 2-day session at the beginning of each PI, in addition to the usual sprint planning at the beginning of each sprint.  In the PI planning session, all the Teams working together in what SAFe calls an Agile Release Train (ART) attend to commit to delivering a set of Features for the PI, and to have each Team present a plan showing which stories (which are children of Features in SAFe) the Team plans to complete for each sprint in the PI.  Finally, in addition to the usual sprint demos and retrospective, SAFe has an overall Inspect and Adapt workshop (analogous to the Sprint Demo and Retrospective) at the end of the PI, which includes a PI demo, quantitative measurement, and a Problem-Solving Workshop that dives into deeper root-cause analysis than a normal Sprint Retrospective.

In contrast, LeSS remains faithful to just the usual sprints of Scrum, with the following additions:

  • Sprint Planning happens in 2 stages. The 1st stage is attended by 2 representatives of each Team, who do not usually include the Team’s Scrum Master.  This stage decides which items from the common Product Backlog each Team will develop.  It also includes cross-team estimation to unify the estimation numbers.  This is in contrast to SAFe, which suggests normalizing cross-Team estimations by equating a story point to a story that would take ½ day to code and ½ day to test (a simple sketch of this arithmetic follows this list).  The second stage of sprint planning is the same as sprint planning in regular Scrum.
  • Each sprint review is held with all Teams as a “science fair”, where each Team has a station to demonstrate its accomplishments for the sprint. Attending stakeholders can visit the stations in which they are interested.
  • The Sprint Retrospective is held in two stages: the first being the same as regular Scrum; the second is for the overall progress of the software product being developed by the Teams.
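
To make the normalization point concrete, here is a minimal sketch (not taken from either framework's official guidance) of how SAFe's suggested anchor, one story point for roughly half a day of coding plus half a day of testing, could be applied to put different teams' raw effort estimates on a comparable scale. The constant and function names are my own illustration.

```python
# Illustrative sketch only: normalizing story-point estimates across teams by
# anchoring one point to roughly one ideal day of work (about half a day of
# coding plus half a day of testing), as the SAFe suggestion above implies.

ANCHOR_HOURS_PER_POINT = 8  # assumption: ~4 h coding + ~4 h testing per point

def normalized_points(estimated_coding_hours: float, estimated_testing_hours: float) -> float:
    """Convert a team's raw effort estimate into cross-team comparable points."""
    total_hours = estimated_coding_hours + estimated_testing_hours
    return round(total_hours / ANCHOR_HOURS_PER_POINT, 1)

if __name__ == "__main__":
    # A story one team expects to take two days of coding and one day of testing
    # would be roughly a 3-point story on the shared scale.
    print(normalized_points(estimated_coding_hours=16, estimated_testing_hours=8))
```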

Portfolio Management

As represented in the top level of the SAFe “Big Picture” shown earlier, SAFe offers a comprehensive approach to prioritizing “projects” (represented as Epics or a set of related Epics in SAFe) and budgeting for them in an Agile manner.  In its latest version, SAFe 4.5, there is an additional, optional, level for Large Solutions (shown below the Portfolio level in the aforementioned diagram) – it is usually relevant to projects with hundreds or thousands of participants comprising multiple Agile Release Trains.

In contrast, LeSS does not delve into Portfolio Management: it only offers techniques that can be compared to the Program and Team levels of SAFe.

2 Versions of LeSS

LeSS has two versions:  the one we saw earlier for 2 to 8 Teams, and LeSS Huge for more than 8 Teams, depicted below.

LeSS Huge is formed by having several regular LeSS frameworks working in parallel with each other.  The most notable addition in LeSS Huge is making each regular LeSS belong to a separate Requirements Area with its own Area Product Owner (APO) under the overall Product Owner.

If you were thinking “Well, isn’t an ART the same as a Requirements Area?” you’d be partially right; a similarity is that the relationship of the Product Backlog to the Area Product Backlog is analogous to the relationship of a Portfolio Backlog to a Program Backlog, in the sense that items on the former are coarser grained than items on the latter.  However, one of the differences is that there is still only one APO for up to 8 Teams, whereas a SAFe PO covers far fewer Teams (typically one or two).

Other Differences between LeSS and SAFe

LeSS can appear to offer one piece of seemingly shocking advice (which is not offered by SAFe):  Don’t scale! (But if you have to scale, use LeSS.) It advocates that even very large software products can be built more successfully by a relatively small Team of co-located master programmers and testers.  They cite at least one example on their website (less.works) of a huge software project that followed a torturous path to completion.  When the overall project director was asked what he would do differently if he were to do it again, he said that he’d pick the 10 best programmers and have them build it all.  I can cite a more recent example with the Affordable Care Act, where a traditional government contractor put an enormous number of resources on the project, which failed miserably.  Later, about a dozen master developers and testers were put together in a house to work on fixing the ACA, which they did within a period of several months. (See http://www.theatlantic.com/technology/archive/2015/07/the-secret-startup-saved-healthcare-gov-the-worst-website-in-america/397784/)

  • Whereas SAFe is generally tool-neutral from a vendor perspective, it does encourage early and frequent automation as much as possible, utilizing the System Team to support that.
    LeSS, on the other hand, strongly recommends that you not use automated tools until after your organization becomes quite proficient with LeSS, opting instead for manual aids like very big whiteboards and wall charts. Otherwise, LeSS warns, if you automate a mess, you get an automated mess.  And even after the Teams become proficient with LeSS, it recommends that you use only open-source tools, which you can easily jettison if they don’t work out for you, without losing a high-dollar investment in them.
  • SAFe takes a more customary view of the role of Scrum Master. In SAFe, the SM is pretty much a permanent role with the Scrum Team and does a lot of intra-Team and inter-Team coordination.  In LeSS, the SM is first and foremost a Teacher.  He can fade away from the day-to-day Team dynamics once the Team becomes proficient in the Scrum and LeSS approaches.
  • In SAFe, Epics, Capabilities, Features and Stories are explicitly handled as integral parts of the SAFe backlogs. LeSS, on the other hand, only talks about coarser vs. finer grained Backlog Items, staying faithful to Scrum by treating Epics, Features and Stories as instruments of XP, which is not part of Scrum proper (a minimal sketch of the SAFe hierarchy follows this list).
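
As an illustration of that difference, here is a minimal sketch of the SAFe work-item hierarchy expressed as nested data structures. The class and field names are my own; SAFe defines the concepts, not this code, and in LeSS the same work would simply be coarser- vs. finer-grained Product Backlog items.

```python
# Illustrative model of the SAFe backlog hierarchy described above
# (Epic -> Capability -> Feature -> Story). Names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Story:
    title: str
    points: int = 0

@dataclass
class Feature:
    title: str
    stories: List[Story] = field(default_factory=list)

@dataclass
class Capability:          # relevant at the optional Large Solution level
    title: str
    features: List[Feature] = field(default_factory=list)

@dataclass
class Epic:                # portfolio-level container
    title: str
    capabilities: List[Capability] = field(default_factory=list)
```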

Conclusion

The quick comparison between LeSS and SAFe in this blog is by no means comprehensive.  Yet it shows SAFe to be more wide-ranging in offering processes and roles to handle the development of software from the highest portfolio level down to the individual Agile Team for large-scale Agile efforts, while making those roles and levels configurable according to the organization’s needs.  Furthermore, for a typical traditional large organization it is perhaps more palatable to begin adopting SAFe rather than LeSS, since the latter strongly advocates some major changes to the structure of the organization as early as possible in the adoption.


Innovation and efficiency have reached new heights, and the combination of systems into cyber-physical systems has led to more complex and interdependent products. How can we sustain such a pace in the future and continually evolve systems in the shortest possible lead time, especially in the context of regulated environments?

Company Logos

The last 10 years have seen the disappearance of well-known products and the arrival of new competitors in the marketplace. Who would have thought in 2007 that Nokia would disappear as one of the great brands in mobile phones and that Apple would take over the smartphone market almost overnight? With the rise of new competitors such as Uber, Tesla, Airbnb, Netflix, Google, Facebook and others, we witness rising competitive pressure in all industries, and new levels of innovation and efficiency are required. As we combine such systems into cyber-physical systems with the Internet of Things (IoT), Industrie 4.0, Lean Startup, and Agile, we move into an increasingly complex and interdependent realm.

Creating agile teams has helped us get to where we are today, but we also face limitations, as small teams cannot build such complex systems in a timely manner. We also have to face regulatory and organizational environments, which are becoming increasingly demanding. A study by Scott Ambler of Disciplined Agile Delivery (DAD) shows that most agile delivery teams (>65%) face compliance requirements, whether regulatory, organizational, or both. Given this situation, it is clear that we need a strategy and governance to steer such endeavors.

In the ‘State of Agile’ study published by VersionOne in April 2017, we find that the Scaled Agile Framework (SAFe) has overtaken all other scaling agile methods and remains at the top of the frameworks in practice.

State of Agile

With regard to agile maturity, the report states that only 18% think that they have reached a high competency with agile practices across the organization. The remaining 82% of companies are aware that they have to improve.

This is a strong indicator that scaling agile is really accelerating. It’s ‘where the rubber hits the road’, and it’s getting serious because we are building complex, high-assurance systems.

Example of a #1 Framework for Scaled Agility – SAFe

The assumptive, one-pass, stage-gated, waterfall methods of the past have not scaled to the new challenge. We need more responsive development methods to address the demands of the modern technological and cultural landscape. Agile is a major step in that direction. However, Agile was developed for small teams and, by itself, does not scale to the needs of larger enterprises and the systems they create. That’s where SAFe comes in. It applies the power of Agile, but takes it to the next level by leveraging the more extensive knowledge pools of systems thinking and Lean product development. SAFe provides comprehensive guidance for achieving the benefits of Lean-Agile development at enterprise scale.

When you start with the Framework, it is important to understand the reasons why these approaches work, not just what they are. That’s why SAFe is based on Lean-Agile principles. The better we understand how things work, the more easily we can apply them to our unique context. SAFe principles apply at each level of the framework to realize complex cyber-physical systems. The SAFe big picture shows the four levels of SAFe, starting with the team level at the bottom, which represents an agile team; the remaining levels scale that agility through cadence, alignment, feedback and transparency.

The framework adopts principles like:

  • Take an economic view
  • Apply systems thinking
  • Assume variability; preserve options
  • Build incrementally with fast, integrated learning cycles
  • Base milestones on objective evaluation of working systems
  • Visualize and limit WIP, reduce batch sizes, and manage queue lengths
  • Decentralize decision-making
  • Apply cadence, synchronize with cross-domain planning
  • Unlock the intrinsic motivation of knowledge workers

Compliance meets Agile Development

In regulated environments, we usually talk about quality, safety, security, efficacy, specifications, milestones, verification and validation, inspections, audits, sign-offs, documented quality management systems, established processes, full traceability, metrics (defects, requirements coverage, code coverage) and more.
On the other hand, the foundation of Agile, the Agile Manifesto, identifies four fundamental value propositions:

  1. Individuals and interactions over processes and tools.
  2. Working software over comprehensive documentation.
  3. Customer collaboration over contract negotiation.
  4. Responding to change over following a plan.

Agile methods and regulated environments are often seen as fundamentally incompatible. One observed reason is a misinterpretation of the Agile Manifesto. Agile processes follow a logic in a plan-do-check-act (PDCA) cycle, whereby some development is planned and done, the results are inspected, and adaptations are made to improve the process to solve any problems that have arisen. In regulated environments, a defined logic is needed. Thus, the granularity at which development processes are expressed and adapted requires careful tailoring to the specific regulated environment. For example, some will require full traceability, some will not.

Regulated domains exhibit varying levels of criticality, from safety-critical to security-critical. A core characteristic of regulated environments is the necessity to comply with formal standards, regulations, directives and guidance. There are a large number of regulations and standards which apply across the different regulated domains. These are issued by a number of bodies, associations and regional authorities (e.g., ISO, FDA 21 CFR Part 11, IEC 62304, ISO 26262, …).

Software plays an increasingly important role in regulated environments. The principles of the agile manifesto were identified earlier, and although an overarching set of principles for regulated environments does not exist, a number of core issues for software development in regulated environments may be inferred. These issues include quality assurance, safety and security, effectiveness, traceability, and verification and validation. Taking this into consideration, we see that the various reference models in use may differ a lot but, in the end, have a lot of similarities. In our opinion, and based on the experience gained with our product, Applied SAFe, in using agile at large scale while fulfilling regulatory requirements, companies have to address the following topics:

Quality: Have a Managed Process

  • Systematic and responsive quality management to enable a controlled professional process; in fact, establish an agile quality management system.
  • Establish Organizational Process Focus: Learn, innovate and improve
  • Reliability and correctness of product; e.g. with emergent design

Safety and Security: Transparency in Execution & Continuous Compliance

  • Responsive planning and risk management to mitigate safety risks for users
  • Securely protect users from unintentional and malicious misuse

Effectiveness: Manage Process & Solution Variations, Reduce Waste and Do Exactly What is Needed

  • Satisfy user needs, and deliver high value to users with high usability
  • Do exactly what is needed with regard to the solution to be built
  • Perform processes and procedures in accordance with their intended use
  • Build quality practices into process as part of the flow

Traceability: Ensure process & product compliance

  • Documentation providing auditable evidence of regulatory compliance and facilitating traceability and investigation of problems (a small sketch of such a trace record follows this list)
  • Separation of process requirements and product requirements.
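
To make the traceability point concrete, here is an illustrative sketch of the kind of auditable trace record described above, linking a requirement to the stories, verification and evidence that satisfy it. All identifiers, clause references and file names are hypothetical placeholders, not outputs of any real tool.

```python
# Illustrative sketch of an auditable traceability record. Every identifier
# (REQ-042, US-1318, file names, clause reference) is a made-up placeholder.
trace_record = {
    "requirement_id": "REQ-042",               # product requirement under regulation
    "source_regulation": "example clause ref", # e.g. a standard's section number
    "implementing_stories": ["US-1318", "US-1322"],
    "verification": {
        "unit_tests": ["test_dose_calc_limits"],
        "integration_tests": ["it_pump_alarm_flow"],
        "results": "passed",
    },
    "evidence": ["build-4711-test-report.pdf", "code-review-record-0815"],
    "process_requirements": ["peer review performed", "static analysis clean"],
}

def is_traceable(record: dict) -> bool:
    """A requirement counts as traceable only if stories, tests and evidence all exist."""
    return bool(record["implementing_stories"]
                and record["verification"]["unit_tests"]
                and record["evidence"])

assert is_traceable(trace_record)
```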

Verification and Validation: Engineering based on Principles & Practices

  • Embedded throughout the software development process (user requirements specification, functional specification, design specification, code review, unit tests, integration tests, requirements tests)
  • Product is specified, designed, built and tested in accordance with regulations

In subsequent blogs, we will lay out how these issues can be addressed. Based on our experience with Applied SAFe, and together with our valued partners, we present the following lessons learned from scaled agile applications and mappings to various reference models.

Lessons learned

Most regulated requirements have a common background. A mapping of scaled agility to a reference model is surprisingly straightforward once the Lean-Agile mindset is understood. But there is also a catch that needs to be addressed: mapping compliance elements to value-adding deliverables. We have seen several cases where requirements were not interpreted but taken as simple facts, leading to a demand for unused, unnecessarily created artifacts. For example, in SAFe you use a PI planning board where dependencies and features are visualized and teams commit to their own plans; it would be waste to additionally create a project WBS out of the prioritized items just to fulfill the requirement of having a project schedule. Because compliance is often a ‘negotiation game’ between stakeholders and appraisers, it is natural that you have to deal with different mindsets and expectations based on past experience. We have also seen that some reference models ask for process- and product-specific requirements.  Such requirements must be scoped for purpose, and concepts and practices like Solution Intent and Agile Design Control need to be established.

Compliance is best demonstrated in small iterations. It is a common mistake to ‘build in’ compliance at the end of development; this increases the ‘quality debt’ of a solution. It is far better to treat audits as a normal part of a system demo, e.g. defined as part of the ‘Definition of Done’ of a solution.
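
As one hedged illustration of what compliance as part of a Definition of Done might look like, here is a small sketch of a DoD checklist that includes compliance items and is checked every increment rather than at the end of development. The checklist entries are examples, not a prescribed SAFe or regulatory list.

```python
# Illustrative only: folding compliance concerns into a Definition of Done that
# is checked at every iteration/system demo. Items below are example entries.
DEFINITION_OF_DONE = [
    "acceptance criteria met and demoed",
    "automated tests passing",
    "traceability links updated (requirement -> story -> test)",
    "risk assessment reviewed for changed functionality",
    "required design/validation documents updated",
]

def increment_done(completed_items: set) -> bool:
    """An increment is 'done' only when every DoD item is satisfied."""
    return all(item in completed_items for item in DEFINITION_OF_DONE)

# Example: compliance evidence missing -> the increment is not done yet.
print(increment_done({
    "acceptance criteria met and demoed",
    "automated tests passing",
}))  # False
```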

Compliance
The goal should be to find real issues rather than just to achieve approval; i.e. to focus on the outcomes of an audit. Automated mechanisms to prove a mapping to a reference model greatly help to reduce discussion time and interpretation games between practitioners and assessors. Commercial frameworks such as SAFe are an excellent starting point to apply in the development of high-assurance systems.

In our experience, a mapping of the Scaled Agile Framework to the various requirements of regulated environments and reference models is achievable within a small number of weeks, once the Lean-Agile principles and the reference model have been understood. Depending on the attributes of the solution to be built, or on the existing documentation of a solution, the form of the ‘DoD’s can vary significantly.

The quality of a process model is of extreme importance. Only necessary steps should be modelled in the process, and the processes need to build on and rely on usage heuristics. A success pattern for us is the separation of ‘What shall be done’ from ‘How something is done’. The ‘how’ is described in practices, and it needs to be ensured that practices can easily be changed or selected by performers.

An easily accessible and easy-to-use Quality Management System (QMS) greatly helps to get people on track with SAFe. A static representation of the process in the form of a wiki might work as a beginning, but in the long run teams should be enabled to instantiate and customize their process for their endeavor-specific requirements in order to reduce waste. A ‘one size fits all’ process will almost certainly be too heavy, and it is not wise to impose unnecessary work on knowledge workers. The organization needed to maintain the QMS (e.g. a ‘Lean-Agile Center of Excellence’) also needs to work in an agile manner and enable fast process changes and piloting at the appropriate SAFe levels. We have learned that it is far better to let the responsible person (e.g. the Release Train Engineer for a program) perform the tailoring of their endeavor-specific process in a controlled and easy way. Just trust that they will do it well!

Conclusion

Scaled Agility can be successfully applied in regulated environments!

Most available frameworks, especially SAFe, have most of the hooks needed for compliance with high-assurance systems. As regulated requirements have a common background, it becomes possible to build a process model that already has most of the content necessary to fulfill those requirements. Tailoring of processes is a must to reflect the applicable regulations, as not all regulations are equally stringent. Usually, the regulations do not specify how something shall be done. Companies should use this freedom and map agile practices to the regulations; practices such as: ‘Build the solution incrementally’, ‘Apply fast learning cycles’, ‘Apply objective milestones’, ‘Demo frequently; routinely deliver objective progress, product, and process metrics’, ‘Organize around value’, ‘Build quality in’, ‘Apply continuous verification and validation’, ‘Include compliance concerns in the Definition of Done’, ‘Use Solution Intent as the concept for requirements’, ‘Inspect & Adapt’, etc.

Build your own internal Lean-Agile Center of Excellence (LACE) team and establish a managed process and Quality Management System (QMS). Don’t forget to address life-cycle concerns of solutions (e.g. lifetime & criticality). It is necessary to read and understand the regulations! Strive to map existing agile behavior and don’t impose unnecessary work in your processes.

Last but not least: it is absolutely key to include the executive level in the cultural change. They are ultimately responsible and need to lead the change, and it won’t be an easy job. To achieve this, we strongly recommend defining governance and responsibilities, also at the Enterprise level, and of course: exchange your experiences with others!

If you want to learn more about an application of scaled agility in regulated environments, please visit www.appliedSAFe.com. In subsequent blogs, the author will discuss in depth how to address the specific topics discussed above.

The author, Peter Pedross, can be reached at peter.pedross@pedco.eu. Twitter: @AppliedSAFE


One of the most common problems with existing, well-established agile teams is that they have issues delivering value-added user stories. The team is cross-functional, has an established velocity, and understands its roles, process and cadences. But when it comes to demonstrating the work at the end of the sprint or program increment, the value behind what they develop falls short in the eyes of the business owner, product manager or other stakeholders.

Most agile change agents or coaches have seen this scenario before, and we all know it is neither the team nor the agile process. It comes down to alignment and understanding of the work being committed to during a sprint or program increment. The work the team has committed to in a sprint must align to the enterprise, organizational or program goals. The team, leadership and stakeholders must all be aligned on the objective of the work and how that work will provide value back to the organization. In coordination with that, there must be a common understanding of the work being committed to by all parties involved. A lack of understanding leads to development of work that does not align to the value or goals we are trying to achieve.

As a result of a lack of alignment and understanding, there are four core problems that arise at the team level:

  1. Teams are delivering user stories which add no value to the enterprise or organization
  2. Teams are committing to user stories that are complex and large which result in an inability to deliver
  3. Team does not understand what they are trying to achieve with the user story
  4. Completed user stories are not demonstrable

As a result of seeing these common problems too often in my agile coaching experience, I have established the “Rubix Cube to Value Added User Stories”. This blog reviews the foundation, guiding principles and process which make this approach effective in ensuring that an agile team produces valuable user stories.

Foundation to Rubix Cube – Alignment and Understanding

The Rubix Cube is a perfect symbol to help demonstrate how user stories must be aligned and understood in order to ensure successful delivery of valuable user stories. Without the understanding and alignment of the work from the end users to the team, the team is being set up for failure.

As a result, we have set up a three-tier scope decomposition which originates from the Scaled Agile Framework™ – a decomposition of work from Strategic Themes into Epics, Features and User Stories. This framework is ideal for large, scaled enterprises and can be scaled up or down, as needed, to fit any size organization. But a three-tiered system is best to illustrate the value alignment through the decomposition of work, back up to strategic themes and down to user stories. The key to scaling this approach up or down is to ensure that there is alignment from the top to the bottom of the framework, regardless of the number of tiers.

To ensure alignment across our Rubix Cube, we should consider three key characteristics of each piece of scope that are critical to a complete understanding of the work (a minimal sketch of how these might be captured follows the list below).

  • Details – A clear description of the “what” and the known “hows” of the piece of work. Identification of what is in scope, what is out of scope, assumptions and Non‑Functional Requirements helps to articulate the work.
  • Benefits – Identify the value behind the work based on three categories:
    • People – Who is benefiting from achieving this work and why?
    • Process – What is the benefit behind the process being enhanced?
    • Capability – What is the benefit behind the business or technical capability being enhanced?
  • Validation – Explanation of how the team, product owner, and other stakeholders will know that the work is complete. Details here can lead to acceptance criteria, Test Cases, etc.
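
Below is a minimal sketch of how these three characteristics might be captured for any tier of scope. The class and field names are my own illustration, not part of the approach itself.

```python
# Illustrative data structure for a piece of scope (Epic, Feature, or Story)
# carrying the three characteristics: Details, Benefits, Validation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Benefits:
    people: str      # who benefits from achieving this work, and why
    process: str     # which process is enhanced, and how
    capability: str  # which business/technical capability is enhanced

@dataclass
class WorkItem:
    title: str
    details: str                      # the "what", known "hows", scope, assumptions, NFRs
    benefits: Benefits
    validation: List[str] = field(default_factory=list)   # leads to acceptance criteria, tests
    children: List["WorkItem"] = field(default_factory=list)  # Epic -> Feature -> Story

    def is_ready(self) -> bool:
        """Alignment check: ready to commit only if all three sides are filled in."""
        return bool(self.details and self.benefits and self.validation)
```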

Classification of the work into these categories becomes an effective and efficient way to get alignment and understanding of the work across all stakeholders.

This seems complex to complete and almost as painful as actually completing a rubix cube!  Ensuring there is full alignment and understanding of work across multiple incentives and teams is very difficult. This is why, in the days of waterfall, teams created 709 pages of requirements that would take four months to complete, required sign-off by every person possible, and were baselined so that we could ensure alignment and understanding of this perfect rubix cube.  But today we can’t do that, because market needs change too often and we can’t get the full rubix cube correct: it takes too long to complete and the colors keep changing. The question is: what if we just want one side of the rubix cube to be perfect?  What can we do to line up the colors for one side?

Rubix Cube

We are about to move into the principles and processes which will lead us to value-based user stories. Remember that we are not trying to figure out the whole rubix cube, which is impossible in today’s world. The principles and processes below will help to outline how we constantly iterate, collaborate and refine the work so that we can get alignment and understanding for one side of the rubix cube long enough for the team to commit and deliver value.

Guiding Principles to Value Added User Stories

Knowing that we have a structure and foundation to document the work, we needed to establish some guiding principles to drive the alignment and understanding of the work. The guiding principles all drive a mindset of continuous iteration on scope decomposition, to deliver value back to the organization more quickly.

  • Align – Align all work to benefits. The core of this entire approach is alignment to value. The value derives from the benefits the scope is trying to achieve. Value should be identified at the highest level of scope decomposition and then aligned to the lowest level. Establishing new value, or decomposing value, at lower levels of refinement can lead to misalignment of work and non-value-added user stories. If, through refinement, new value is identified, it must relate back to an Epic (or the highest level of scope decomposition). This could lead to adding it to an existing Epic or establishing a new Epic specific to achieving that piece of value.  Doing this will minimize the risk of gold plating and keep the work aligned to value as it was intended.
  • Ensure – Validate that the work can be achieved by ensuring there is an understanding of the work across all stakeholders. Ensure they understand the value that will be achieved after the work is complete. This is how the value is realized. In the process, we identify value by its benefit to people, process and capability. Validation of that value occurs in the same forms – people, process and capability – to ensure the value was achieved. Doing this at every level of the scope decomposition ensures that work stays aligned to the value to be achieved.
  • Demo – Ensure pieces of work are demonstrated as a minimal viable product. Sprint or PI demos are among the hardest parts of the agile process to fully achieve, because they come as an afterthought. Mid-sprint the team suddenly remembers that “we have to demonstrate something…”, so they pull together whatever they can and hope it works at the demo.  For demos to be effective, it is critical to think, at all levels of scope decomposition, about which pieces of work tied together can be demonstrated. Splitting a single Epic into two demonstrable pieces of value allows for easier prioritization, better understanding, effective execution and value-added demonstrations. Consider demonstrations when breaking down work.


Process to Value-Added User Stories

The process outlined below is not linear and is very iterative. Think back to being a kid working on that rubix cube: If you kept at it for hours, it would eventually aggravate you enough that you would throw it across the room!!  Do not let that happen here. Start writing, have a conversation with someone (get their feedback), revise it, and then step back and see where you are. Then attack it again. It took my whole family an entire summer to get the rubix cube perfect. Don’t expect to lock down scope in a day and throw it over the wall. It takes multiple iterations to get it right.

Below is more detail about the process.

  • Write – Get the information down on paper. The culture of meetings and discussion too often creates more confusion than clarity when it comes to alignment of scope and decisions. Verbal communication can lead to misunderstanding if not framed properly. As a best practice, write first and allow enough time for people to read, react and ask questions. This aligns to the 3Cs of user story writing: ensure the Card is established first so that there can be an effective Conversation which leads to Confirmation.
  • Revise – Do not be afraid to adjust and create new Epics, Features or User Stories. Do not think of this as a traditional work breakdown structure.  At the core it may feel like it, but the principles and process enable iterations of determining the right scope. Feedback and conversation between the different levels of scope also help to articulate the scope and validate true minimum viable products. The management of these backlogs is an active discussion from top to bottom until the time of commitment at the team level.
  • Reflect – This rubix cube can get tricky as Epics and Features start to get split into 2 or 3 different Epics and Features. There are two key pieces to reflect on to ensure everything stays aligned. First, look across the Feature details to see if all major pieces of work line up to the proper sub-pieces. Next, ensure that when each piece is complete, it validates the benefits of the macro piece of work. This is critical to help ensure alignment. Constantly reflect and adapt to ensure the sum of the breakdown of work achieves the whole.

Call to Action

In my experience, I have seen teams and groups of teams called failures because they were unable to deliver value-added user stories. This was not because the team was not effective or did not have the proper skill sets; it was because the team was unable to get alignment and a full understanding of the work.

In today’s environment, this is becoming the norm. The rubix cube approach to value added user stories helps to manage this unknown by ensuring that work is aligned to value and understood before it is committed to. It also helps to establish an iterative approach to scope refinement. If you are having challenges with delivering value added user stories, I ask you to try it, provide me feedback and improve the process.


Whom Do you Seek?

A major key to the success of a sustainable agile transformation is finding the right person or persons to lead the transformation. This person (or persons) needs significant prior experience coaching and mentoring at the Enterprise level, acting as a “Sherpa guide” through the difficulties of an Enterprise-wide agile transformation. This is an “OZ-ish” look behind the scenes at how an agile adoption can proceed, at leadership and its role, and at the case for the “determined person”.

Follow the Yellow Brick Road…

I often ask people who their favorite character is in the movie “The Wizard of Oz”. Usually it is Dorothy or the Scarecrow. Those who answer Toto, I worry about. My answer is “I like the Man Behind the Curtain”. Here is the actual wizard, working the flash and bang of the big show from a small side stage, and he does not mean to be seen or heard (other than through his avatar).

Man behind curtain

Good mentors are like that. Good Enterprise Transformation Coaches are very much like that. They project leadership and strong qualities, and work the invisible pedals and levers of the Transformation within your organization. A great coach or mentor always takes a back seat to the employee who leads the Transformation. Even though they may be, for this one topic of Transformations, “the smartest person in the room”, they don’t act it. They work at drawing out people, providing behind-the-scenes coaching, and salting the conversation with ideas that others can latch onto as their own.

If required, they have the strength of character and leadership capabilities to take charge, but only as a last resort. The Enterprise Coach should get their sponsor and lead to step up to the role of leading the Transformation.

What Comprises a Set of Skills?

The Enterprise Agile Transformation Coach role is a mix of:

  • Communicator (up and down the whole chain of command)
  • Agile experience and practices
  • Enterprise Transformation experience
  • Technology savvy across multiple knowledge domains
  • Business experience and savvy to connect with non-IT types
  • Exposure and experience across the IT organization

Basically, someone who has been there and done that, and who has built and burned down the T-Shirt factory a few times. They must have had experience with Enterprise Transformations before; there is no replacement for experience in this realm.

Most important, they have the force of will to get things done. Many years ago I read a paper on the concept of “The Determined Person”. The idea is that the determined person is someone who, despite what seem to be overwhelming odds, manages to persist through the effort to its conclusion: someone of strong conviction and faith that we can solve problems and overcome the inertia of culture. This strength of will is coupled with persistence. This quote on persistence from Calvin Coolidge is my favorite of all time.

“Nothing in this world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent.”

Do I Hire Someone Who is Certified?

Certifications are not a replacement for experience. I’ll take the seasoned veteran of the fight any time over someone with 5 certs on their resume. An interesting question to ask in an interview, to sort out who is experienced, is: “So tell me about a major transformation failure and how you got out of it.” An experienced coach will have interesting (and sometimes scary) stories for that one. (FYI, the more interesting part of the answer is how they got out of it.)

Someone with little or no experience, or who is too cautious to answer, is likely not the kind of Mentor you are looking for. You want someone with wisdom, and unlike the catch phrase “with age comes wisdom”, it really should read “with EXPERIENCE and failure comes wisdom”.  Once you’ve had your metaphorical legs blown off by stepping on the various landmines strewn on the path to Transformation, you learn what a landmine is and how to avoid it. I want to follow that person.

These Coaches Are Really Expensive…

So I got one of these people lined up, and now I can’t afford them. Yes, experience comes at a cost. The kind of consulting we are discussing is expensive. Cheaping out just means you get what you pay for. Besides, transformations are cheaper with a good guide. Experienced coaches save you money by offering up, “I did this in three accounts; it worked in these two and failed here because…”. This kind of guidance can save lots of time, energy and money.

The value a Mentor brings is the experience and guidance to provide you with the capability to succeed. Just like climbers of Mount Everest hire Sherpas, you need to do the same.

I hope you’ve enjoyed and learned from our “Playing the Agile Transformation Game” blog series. Did it answer some of your questions? We’d love to hear from you…

And stay tuned for our upcoming blogs!


Overview

“Donkey: Are we there yet?
Shrek: No
Donkey: Are we there yet?
Fiona: No, not yet!
Donkey: Are we there yet?
Shrek: Yes!
Donkey: Really?
Shrek: No!!!!”
From Shrek-The Movie

Measuring an agile transformation effort is what we are looking at today. Several important questions present themselves about measures: What to measure? How to measure? When to measure? Whom to report it to? All good questions to help us determine two crucial things:

  • Are we there yet?
  • Are we sustaining the effort?

So here are some tips and tricks for picking the right metrics to provide to management, to record the journey, and to gauge the capability to sustain it. We can help define this journey by determining when you hit “Critical Mass”, that is, when the transformation starts clicking over on its own.

To define the critical mass, I think there are three valuable metrics:

  • Conversion Rate
  • Hygiene
  • Customer Satisfaction

These three metrics help us understand whether we are there yet, whether we’ve reached critical mass, and whether we are delivering value to the business customer.

What is Critical Mass?

At a point in time, your transformation all of a sudden takes on a life of its own, and the effort becomes less of an uphill trudge and more of an enjoyable experience. What has happened is that you have reached a “Critical Mass” of conversion: there are more staff educated in and using agile techniques than are not.

I borrow the term “Critical Mass” from nuclear technology, where it represents the point at which just enough fissile material is brought together for a chain reaction (the release of neutrons) to take place within that material. The reaction is then kept going at a controlled pace using control rods.

Like the nuclear reaction, there is a point in the transformation where we achieve critical mass and we have enough going on that we find the effort is easier than before.

  • Enough staff are educated on agile methods
  • Enough staff are working on agile teams producing valuable software for their customers
  • Enough of the leadership on both the technology and business branches see the value of what we are doing.

Measure the Results of the Transformation

The Conversion Metrics

So we are faced with determining how many and when with regard to the staff in our organization. How do we track their agile journey? Some simple metrics help.

Let’s track two simple metrics:

  • Their participation in agile training
  • Their participation on agile team(s)

These two simple metrics can be tracked in an inventory of all staff in a division or Information Management department. This gives us a view of how much of the IT unit we have converted. At some point, as the converted minority grows into a majority, that Critical Mass event will take place. I’ve seen it happen as early as 25% conversion and, in some very slow organizations, as late as 75% conversion. When the event takes place has much to do with the prevailing culture of the organization. Culture and its effect on your agile transformation is a whole different conversation for another time.
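
As an illustration, the sketch below computes the two conversion metrics from a simple staff inventory and compares them against an assumed critical-mass threshold. The data and the 25% threshold are examples only; the author's own observed range runs from 25% to 75%.

```python
# Illustrative sketch of the two conversion metrics: share of staff trained in
# agile, and share of staff working on agile teams. Data is made up.
staff = [
    {"name": "A", "agile_trained": True,  "on_agile_team": True},
    {"name": "B", "agile_trained": True,  "on_agile_team": False},
    {"name": "C", "agile_trained": False, "on_agile_team": False},
    {"name": "D", "agile_trained": True,  "on_agile_team": True},
]

def conversion_rates(inventory):
    """Return (fraction trained, fraction on agile teams) for the inventory."""
    total = len(inventory)
    trained = sum(1 for p in inventory if p["agile_trained"]) / total
    on_teams = sum(1 for p in inventory if p["on_agile_team"]) / total
    return trained, on_teams

trained_pct, on_team_pct = conversion_rates(staff)
print(f"Trained: {trained_pct:.0%}, on agile teams: {on_team_pct:.0%}")

CRITICAL_MASS = 0.25  # assumed threshold; in practice it varies with culture
if on_team_pct >= CRITICAL_MASS:
    print("Conversion may be approaching critical mass.")
```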

FYI, don’t forget to include training for management and leadership, and to record their participation too!

Hygiene metrics

When launching new agile teams, we need to be able to determine at some point after they are separated from coaching and mentoring whether or not they are practicing good hygiene regarding the use of the ceremonies, practices, principles and tools.

This can be as simple as the practice of doing self-assessment surveys at regular intervals. I like the idea of a once-a-quarter self-assessment that lists three points of view: the team as individuals, the Scrum Master, and the Mentor or coach; optionally, the Product Owner may also have a separate opportunity to vote on the same categories.

Categories are simple:

  • The Agile Ceremonies
  • Practices
  • Tool Usage

A simple 1–5 scale works, and a spreadsheet tool is very commonly used. Results are often mapped as spider diagrams like the one below. Define questions around the usage of agile practices and tools, then ask the team to take the survey. Compile the results and publish them for use in a retrospective.
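
As an illustration, the sketch below averages the 1-5 scores per category across the different points of view, producing the values you would plot on a spider diagram. Category names and scores are examples only.

```python
# Illustrative sketch: aggregating 1-5 self-assessment scores per category
# across the points of view (team, Scrum Master, coach; optionally the PO).
from statistics import mean

responses = {
    "team":         {"Agile Ceremonies": 4, "Practices": 3, "Tool Usage": 4},
    "scrum_master": {"Agile Ceremonies": 3, "Practices": 3, "Tool Usage": 5},
    "coach":        {"Agile Ceremonies": 3, "Practices": 2, "Tool Usage": 4},
}

def category_averages(survey: dict) -> dict:
    """One averaged score per category: the values plotted on the spider diagram."""
    categories = next(iter(survey.values())).keys()
    return {c: round(mean(view[c] for view in survey.values()), 1) for c in categories}

print(category_averages(responses))
# e.g. {'Agile Ceremonies': 3.3, 'Practices': 2.7, 'Tool Usage': 4.3}
```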


Figure 1. Team Maturity Assessment

Don’t forget converting Leadership as a measure of success!

Customer Satisfaction

Are we delivering Valuable Working Software? Measures for valuable software in the customer’s hands could be:

  • Quality
  • What the customer needs
  • Delivered on a timely basis

Do a customer satisfaction survey to get a baseline, then measure quarterly how you are doing, and share the results with the team. Focus groups with the business customer that include the whole team are also useful.

And of Course – Budget for and Measure Against a Plan

DO NOT LOSE SIGHT OF DOCUMENTING PROGRESS AGAINST A BUDGET AND A PLAN. These are expensive programs. They are also not easy to accomplish. So, like all good agile work, create a vision and translate that into a rolling quarterly roadmap. Break the roadmap down into adoptable features and lay them out in the roadmap. Develop a release plan for each quarter just prior to that quarter and execute against that plan. Some key components of this:

        • Make sure you have a plan
        • Make sure you document results against the progress (or lack of) against the plan
        • Be prepared to chuck and re-write the plan

The same applies to budgets – by making work visible and measuring your accomplishments against them, it will be a lot easier to get budgets approved so you can move forward with the work.

FYI, here is a great project management technique I found for when things go wrong (notice I said WHEN, not if): be the first to note that you screwed something up, then quickly move forward to figuring out how to fix it. It takes the focus off the finger-pointing and the blame game, and moves you into defining the problem in an “us against the problem” mode.

I hope this has been informative. Stay tuned for the next article in our series, which will address the potential landmines and pitfalls to watch out for when implementing a large-scale agile transformation.


For large software development projects involving anywhere from a hundred to several hundred software developers, analysts and testers, the inherent techniques of agile methodologies such as Scrum or XP prove inadequate for effectively managing the progress of such an enormous effort.

In this blog, we look at a quick comparison between two leading frameworks for scaling the Agile approach for large software development projects: Scaled Agile Framework (SAFe™) and Large-Scale Scrum (LeSS).  Each has its strong points that may fit different organizational situations of large software product development.

Let’s get started: the “Big Picture”

See below the overview pictures for SAFe (www.scaledagileframework.com) and LeSS (less.works).

SAFe 4.0 Big Pic

 

LeSS Framework

 

Right off the bat, we see that while the SAFe Framework appears more comprehensive, it also appears more process-heavy. In fact, the inventors of the LeSS framework are proud of its acronym indicating less process, fewer artifacts, and fewer roles, remaining faithful to having only the original Scrum roles of PO, SM and Team.

As an example, SAFe offers the role of Product Manager, who is in charge of setting the priorities and overall scope of functionality to be delivered by a Program containing many Agile teams. The Product Owner in SAFe performs the usual Scrum role for up to a couple of Agile Teams.

In contrast, LeSS offers the regular Scrum role of Product Owner (PO) for up to 8 Teams.  This is because in LeSS, the PO is not a liaison with the end-Customer: the Teams get to interact directly with the end-Customer to understand the details of the requirements, giving the PO the opportunity to focus on the overall priorities and scope for up to 8 Scrum Teams.

Hence, if an organization can afford the opportunity for the Agile Teams to interact directly with the end-Customer, LeSS can be a good fit in this particular aspect. Otherwise, SAFe can accommodate both the Team-direct and the liaison-PO situations.

Organizational Structure

The inventors of LeSS very much believe that culture follows structure. To that end they offer LeSS not just as a practice to scale up the Scrum approach, but as a direct impetus for change of organizational structure.  The picture below shows what LeSS advocates for organizational structure for up to 8 Scrum Teams working together to develop a software product in order to provide what an Agile culture needs from an organization to succeed.

Organizational Structure of LeSS

In this picture, you can see that there are no functional departments (e.g. development vs. testing) or a PMO.  Instead, in addition to the Scrum Teams, there is the Head of the Product Group, which LeSS views (as it views all other managers similar to the “Toyota Way”) as a teacher of those reporting to him/her, the Product Owner team, which provides a pool of POs for every Scrum (large or small scale) effort, and the Undone Department.

The latter is a curious thing.  In LeSS, a permeating theme is that the Teams are supposed to do everything needed to put a high-quality software product in the hands of end-Customers: from analysis to development to testing to packaging, all while coordinating with other Teams.  This is represented in the Definition of Done of the Teams.  But it may take the Teams a few years to mature to that set of comprehensive capabilities.  Hence the Undone Department is a placeholder for resources that fill in for whatever the Teams are yet to be able to do (e.g. DevOps) until the Teams mature.

In contrast, SAFe does not advocate drastic organizational change as emphatically as LeSS.  It presents its approach for adoption even with the current organizational structure, and lets the organization take its time deciding when it may want to restructure to be more efficient with Agile.  That’s not to say that LeSS presents its approach as an “all or nothing deal” – it just emphasizes structural change in the organization more strongly than SAFe does.

Differences in Planning

SAFe stipulates that sprints should be grouped in sets of 4-5 consecutive sprints, each set being called a Program Increment (PI).  And while the Teams (and the Product as a whole) are expected to demonstrate incremental achievements at the end of each sprint, it is at the end of a PI that complete “Features” of the software product are expected to be available.  SAFe, however, maintains the option of releasing on-demand any time during a PI with the Features that happen to be complete at that point in time.

Planning in SAFe happens in a 2-day session at the beginning of each PI, in addition to the usual sprint planning at the beginning of each sprint.  In the PI planning session all the Teams working together in what SAFe calls an Agile Release Train (ART) attend to commit to delivering a set of Features for the PI, and to have each Team present a plan showing which stories (which are children of Features in SAFe) the Team plans to complete for each sprint in the PI.  Finally, in addition to the usual sprint demos and retrospective, SAFe has an overall PI demo at the end of each PI, and a general Inspect and Adapt session, which is a scaled up version of a sprint retrospective.

In contrast, LeSS remains faithful to just the usual sprints of Scrum, with the following additions:

  • Sprint Planning happens in 2 stages. The 1st stage is attended by 2 representatives of each Team, who do not usually include the Team’s Scrum Master.  This stage decides which items from the common Product Backlog each Team will develop.  It also includes cross-team estimation to unify the estimation numbers. This is in contrast to SAFe, which suggests normalizing cross-Team estimations by equating a story point to a story that would take roughly half a day to code and half a day to test. The second stage of sprint planning is the same as sprint planning in regular Scrum.
  • Each sprint review is held with all Teams as a “science fair”, where each Team has a station to demonstrate its accomplishments for the sprint. Attending stakeholders can visit the stations in which they are interested.
  • The Sprint Retrospective is held in two stages: the first being the same as regular Scrum; the second is for the overall progress of the software product being developed by the Teams.

Portfolio Management

As represented in the top level of the SAFe “Big Picture” shown earlier, SAFe offers a comprehensive approach to prioritizing projects (represented as Epics in SAFe) for the organization and budgeting for them in an Agile manner.  In its latest version, SAFe 4.0, there is an additional, optional, level for Value Streams below the Portfolio level – it is usually relevant to projects with hundreds, or thousands of participants.

In contrast, LeSS does not delve into portfolio management: it only offers techniques that can be compared to the Program and Team layers of SAFe.

2 Versions of LeSS

LeSS has two versions:  the one we saw earlier for 2 to 8 Teams, and LeSS Huge for more than 8 Teams, depicted below.

LeSS Framework

LeSS Huge is formed by having several regular LeSS frameworks working in parallel with each other.  The most notable addition in LeSS Huge is making each regular LeSS belong to a separate Requirements Area with its own Area Product Owner (APO) under the overall Product Owner.

If you were thinking “Well, isn’t an ART the same as a Requirements Area?”, you’d be partially right; a similarity is that the relationship of the Product Backlog to the Area Product Backlog is analogous to the relationship of a Portfolio Backlog to a Program Backlog, in the sense that items on the former are coarser grained than items on the latter.  However, one of the differences is that there is still only one APO for up to 8 Teams, whereas a SAFe PO covers far fewer Teams.

Other Differences between LeSS and SAFe

  • LeSS can appear to offer one piece of seemingly shocking advice (which is not offered by SAFe): Don’t scale! (But if you have to scale, use LeSS.) It advocates that even very large software products can be built more successfully by a relatively small Team of co-located master programmers and testers.  They cite at least one example on their website (less.works) of a huge software project that followed a torturous path to completion.  When the overall project director was asked what he would do differently if he were to do it again, he said that he’d pick the 10 best programmers and have them build it all.  I can cite a more recent example with the Affordable Care Act, where a traditional government contractor put an enormous number of resources on the project, which failed miserably.  Later, about a dozen master developers and testers were put together in a house to work on fixing the ACA, which they did within a period of several months. (See http://www.theatlantic.com/technology/archive/2015/07/the-secret-startup-saved-healthcare-gov-the-worst-website-in-america/397784/)
  • Whereas SAFe is generally tool-neutral, LeSS strongly recommends that you not use automated tools until after your organization becomes quite proficient with LeSS, opting instead to use manual resources like very big white boards and wall charts. Otherwise, LeSS declares that if you automate a mess, you get an automated mess.  And even after the Teams become proficient with LeSS, it recommends that you only use open source tools, which you can easily jettison if they don’t work out for you, without losing high-dollar investments.
  • SAFe takes a more customary view of the Scrum Master role. In SAFe, the SM is essentially a permanent member of the Scrum Team and does a lot of intra-Team and inter-Team coordination. In LeSS, the SM is first and foremost a teacher, who can fade away from day-to-day Team dynamics once the Team becomes proficient in the Scrum and LeSS approaches.
  • In SAFe, Epics, Features and Stories are explicitly handled as integral parts of the SAFe backlogs. LeSS, on the other hand, talks only about coarser- versus finer-grained Backlog Items, staying faithful to Scrum by treating Epics, Features and Stories as instruments of XP, which is not part of Scrum proper.

Conclusion

The quick comparison between LeSS and SAFe in this blog is by no means comprehensive.  It does show SAFe to be more wide-ranging in offering processes and roles for large-scale Agile efforts, from the highest portfolio levels down to the individual Agile Team, albeit at the cost of perhaps appearing too process-heavy.  That said, it is perhaps more palatable for a typical traditional large organization to begin adopting SAFe rather than LeSS, since the latter strongly advocates major changes to the structure of the organization as early as possible in the adoption.

 


Agile 2015, the most talked about agile conference, is in its 13th edition and takes place from August 3 to August 7, 2015 in the Nation’s Capital, at the Gaylord Hotel. This conference brings together an international community of agilists who seek to expand their knowledge of agile methods and practices.


At Blue Agility, we’re very excited to be sponsoring this event (Silver) and look forward to meeting new agilists and reconnecting with old friends.

Blue Agility executives will also be attending the Agile Executive Forum, joining a select group of executives in an intensive one-day event to explore the latest strategic thinking in Lean/Agile principles and practices.

Ok, so here’s my take on Top 10 at Agile 2015!

1. We’re hereby formally inviting you to join us at our booth to talk challenges and solutions around #agiletransformation, #scalingagile, #bluejazz and more with our Executives and Team!

2. Dean Leffingwell, creator of the Scaled Agile Framework (SAFe™), will be at our booth on Wednesday, August 5, from 7:45-8:30 pm. Alex Yakyma, SAFe Methodologist at SAI, will join us on Monday from 9-10 pm. Bring your questions and get ready for interesting discussions!

3. If you’re new to agile and wondering what it’s really all about and how it can benefit your organization, then you might want to sit in on this session: Introduction to Agile: The Genesis – Monday, Aug. 3 – 10:45-12:00 – James Newkirk – http://sched.co/36M


4. Ok, so you’re already doing agile. But now it’s time to scale the benefits! What are some of the activities that can promote success or spell failure? Scaling Agile Patterns and Anti-Patterns – Monday, Aug. 3 – 2-3:15 pm (Monica Yip and David Grabel) – http://sched.co/36Pm

5. It’s always a cool thing to hear the spearhead of something speak about his work. Don’t miss Dean Leffingwell, creator of the Scaled Agile Framework (SAFe), talk about the Nine Immutable Principles of Lean/Agile Development. Thursday, August 6, 9-10:15 am – http://sched.co/37oV

6. As we all know, change is a difficult thing, regardless of where you stand on the agile continuum. In this session, the presenters will discuss practical ways of dealing with the challenges inherent to change in a large organization: Navigating the Complexity of Organizational Change – Monday, August 3 – 10:45-12:00 (Jason Little and Declan Whelan) – http://sched.co/36QW

7. When facing the task of becoming more efficient, Value Stream Mapping can be a little tricky sometimes. In this session, the presenter will show practical ways to uncover bottlenecks, queues and silos in any organizational process: Value Stream Mapping Workshop – Wednesday Aug. 5 – 3:45-5:00 pm (Nayan Hajratwala) – http://sched.co/36sj

8. Coming from a neuroscience background, I’m always fascinated by the behavioral aspect of change, its challenges, and how to mitigate them in the agile world. These are sessions I wouldn’t want to miss: Six Rules for Change – Monday August 3 – 3:45-5:00 pm (Esther Derby) http://sched.co/36Q4 and The Secret of our DevOps Success: Fostering Human Behavioral Change – Tuesday August 4 – 2:00-3:15 (Mark Nemecek) – http://sched.co/36Z4

9. DevOps on your mind? What is it? How can it benefit your organization? The 10 Myths of DevOps – Tuesday August 4 – 3:45-5:00 pm (Seth Vargo) http://sched.co/36Re

10. Now this one is kinda neat. If you’re interested in having dinner and mingling with your fellow agilists on Tuesday evening, August 4, you can sign up for “Dinner with New Agile Friends.” Restaurant seating is limited, so sign up early! http://sched.co/3xfB

See you in DC!


In 1981 Yamaha launched what became known as the “Honda-Yamaha War”. In the battle for supremacy in the motorcycle marketplace, Yamaha challenged Honda’s leading position by opening a new factory that made Yamaha the world’s largest motorcycle manufacturer. The economics of this war seemed tilted in Yamaha’s favor: a large, modern factory enabled Yamaha to exploit economies of scale, reduce per-unit costs and flood the market with motorcycles. Honda did not respond by creating an even larger factory to compete with the lower cost structure and battle Yamaha in a war of attrition. Rather, with the battle cry “Yamaha wo tsubusu!” (“We’ll crush, squash, slaughter Yamaha!”), Honda rapidly increased the rate at which it changed its product line and used variety to bury Yamaha.

George Stalk’s account of the “Honda – Yamaha war.”

“At the start of the war, Honda had 60 models of motorcycles. Over the next 18 months, Honda introduced or replaced 113 models, effectively turning over its entire product line twice. Yamaha also began the war with 60 models; it was able to manage only 37 changes in its product line during those 18 months.

Honda’s new product introductions devastated Yamaha. First, Honda succeeded in making motorcycle design a matter of fashion, where newness and freshness were important attributes for consumers. Second, Honda raised the technological sophistication of its products, introducing four-valve engines, composites, direct drive, and other new features. Next to a Honda, Yamaha products looked old, unattractive, and out of date. Demand for Yamaha products dried up; in a desperate effort to move them, dealers were forced to price them below cost. But even that didn’t work. At the most intense point in the H-Y War, Yamaha had more than 12 months of inventory in its dealers’ showrooms.”
Stalk, George Jr. “Time—The Next Source of Competitive Advantage.” Strategy: Seeking and Securing Competitive Advantage. Eds. Cynthia Montgomery and Michael E. Porter. Boston, MA: Harvard Business School Press, 1991. 39-60. Print.

How was the war won? Innovate, Innovate, Innovate.

Honda won the war by out-innovating Yamaha. It exploited fast decision cycles to get product to market faster, learn and adapt. Ok, but what do motorcycles have to do with software development? Imagine for a moment that your company could bring new products and services to market faster than your competitors. What could you learn from consumer preferences? How quickly could you innovate and differentiate your products and services, making your competitors’ products look old and inflexible, no longer serving their customers’ needs? What would happen to your competition if you could out-innovate them?


The challenge remains that for many large enterprises (and a surprising number of smaller, supposedly nimbler ones), it can take years to bring a new product or service to market. The risk is falling victim to a provider who can deliver products and services faster than we can. Many organizations pursue agile software development methods as a solution for working through their development backlog faster. Many start setting up Scrum teams, train Product Owners and Scrum Masters, and even hire external coaches. Many surveys suggest they often get good results: it takes less time to deliver software, and at a lower cost. The software has fewer defects and there is greater customer satisfaction.

But therein may lie the problem: we are treating agility as a solution to an IT problem, the perennial problem of budget overruns, late delivery, and defective products. But we’re still executing the same strategy, only now a little faster. Rather than taking two years to deploy a new product or service, it might take only 18 months, or just a year. This is certainly good, but it leaves so much on the table. Consider this: by some accounts, nearly 60% of the software created is never used. That is outright waste! We used up valuable resources to create products and services no one wanted to use in the first place.  Why would we consider it a success if we can now waste effort faster?

Speed… or Lead?

Honda did not defeat Yamaha by executing the same strategy faster or at lower cost. Honda beat Yamaha by changing its strategy, learning what was truly valuable to its customers, and shaping its customers’ tastes and desires. Attempting to beat a competitor by simply doing the same thing faster and cheaper is a war of attrition and a race to the bottom. Instead, we need to think about how we can outmaneuver our competitors [Adolph, S. “What Lessons Can the Agile Community Learn from a Maverick Fighter Pilot”,  Proceedings Agile Conference 2006, Minneapolis, MN] and make their strategy irrelevant, much as Honda made Yamaha’s irrelevant.

 

How this translates into the software development world is that agility is not just a software thing, not just an IT thing. We can no longer think of IT as just a bag on the side of the organization; rather, IT and the business are joined in a common business strategy. “Customer collaboration over contract negotiation” is not just a cute “feel good” article of the Agile Manifesto. It truly means that IT, or product development, tightly collaborates with the customer. This is often one of the greatest weaknesses in most agile transformations, because customer collaboration usually ends at about the point where the business throws the requirements document “over the fence” to IT. It also presents the greatest opportunity for improvement.

Often we have observed that the Scrum Product Owner or XP customer representative is, at best, an IT or engineering methodological (or perhaps mythological) figurehead, volunteered for the role because the methodology says the role must be filled. But the role has no real authority to prioritize work and balance the demand for work against the teams’ capacity to deliver it. Unprioritized work is still tossed at the team from all directions. Individuals at higher pay grades than the hapless Product Owner go directly to team members and demand work. There is little alignment or collaboration on discovering what is truly valuable, and no opportunity to quickly learn and innovate the way Honda did against Yamaha. This happens because many organizations fail to see how agility is relevant to them beyond IT. Typical team-level Agile methodologies such as Scrum or XP do not provide guidance on how an organization can manage large-scale initiatives, where value cannot be represented by small user stories. Nor do they provide real guidance on how a large product organization can work with a collection of small, self-organizing development teams.

Scaling Framework as a Bridge to Success

It does not have to be this way. There are numerous scaling frameworks (SAFe™, DAD™, LeSS, etc.) that provide mechanisms for the business to engage with IT and engineering at a level and scope that is meaningful and relevant to the business. Some, like SAFe, employ a hierarchical, single-line-of-content-authority model, where a Product Manager owns a fairly significant piece of work (expressed in SAFe as a Feature) that is elaborated into smaller, user-story-sized chunks managed by the Product Owners. The Product Manager and Product Owners are aligned on their priorities.


A simple case study of this comes from one electronics client adopting Agile practices. An initial proposal for how the business would step in was for the product manager to serve as the Product Owner for four Scrum teams. The product manager was far from enthusiastic about this idea because he saw his job as being out in the field talking to dealers, not the day-to-day management of engineers; that was the role of the engineering leads. Furthermore, he had little interest in the incremental value that a two-week sprint would create. His expertise was in understanding the marketplace for cool new features, not the algorithm for how a video amplifier switched between inputs. He understood the importance of this, but his interest was in what he could take to the dealers. His interpretation of collaboratively working with the teams was that he would have to work daily with the teams on items that were of little interest to him, at the expense of those that were.

What changed his attitude and enabled him to work collaboratively with the teams was bringing the process up to a level that was valuable to him. With the introduction of release planning, he could collaborate with the teams on a time and feature scale that was relevant to him. He did not have to collaborate with the teams every day; rather, he could be out in the field working with the dealers. Yet there was a clear line of content authority from the Product Manager through the POs, and the POs could act on behalf of the Product Manager, much like the Dukes of old enforced the King’s will in their domains. The introduction of a product council meant he could collaborate with and direct the POs to keep them aligned with his wishes.

These changes enabled the business to work more collaboratively with engineering. Furthermore, the product manager no longer had to write voluminous feature-description documents, throw them over the fence to engineering, and hope that in 6 to 9 months he might see something that would work in the marketplace. Working collaboratively, he wrote smaller Epics and Features, which could be released on a two-month cadence. This approach found a middle ground: a shorter cycle time that enabled the business to collaborate with the engineering teams.

Like any business, we must be out to crush our competitors. It sounds nasty, but that is what commerce is about. It is also this competition that can stimulate the collaboration needed to quickly innovate and outmaneuver our competition, just as Honda outmaneuvered Yamaha. Otherwise, we are using agility simply to do the same thing faster and cheaper, a war of attrition that leaves so much competitive advantage on the table. Seriously: do you just want to create waste faster, or would you prefer to out-innovate your competitor?

 


Any Scrum team knows that the target is to get stories to done within the sprint.  Teams establish a Definition of Done (DoD), which includes all of the items the team needs to complete to get a story to done.

Often, teams that are new to Scrum will follow many of the ceremonies but still operate within a traditional waterfall context, meaning they won’t release to production until everything is done.  Today I will share a painful story about why getting stories to “Done” is not enough, and why Scrum teams need to release code to production more frequently than just at the end of the project.


The Background

I was working with a team as part of a large company to upgrade some of their systems. The company historically used a waterfall process, but was starting to transition to scrum.  Initially the team was told that since this was an upgrade it would be run as a waterfall project, but as we dove in we saw how it would fit very well into an agile methodology.

Below were some key characteristics for the project:

  • There were 2 key drivers for the upgrade: End of support for the version of the operating system and concerns over performance during heavy usage periods
  • There was a “hard” delivery date needed as a result of the 2 key drivers above, which was ~1 year out
  • The project included the following: an OS upgrade, a database upgrade, splitting a single server for the application and database into 2 servers, building BCP and HA into both the application and database servers, replacing a third-party software package with in-house software, and replacing the current job scheduler with another (among other items)
  • Additional scope items were added throughout the project and deemed “necessary”

The Plan

So even though we were told this project was not a good fit for Scrum, the team decided they would use Scrum anyway.  Below was the basic approach:

  • There were 13 different applications on these servers.  The team used each application as an Epic.  Since the team did not have automated testing in place, it was not feasible to completely regression test every single system.  Instead, they evaluated the testing strategy and created stories around each major testing flow that they would perform.
  • We got together with the SMEs and used Planning Poker to estimate the size of each story (testing flow) against our baseline.  After estimating each story, we compared the total rolled-up points for each epic (application) to see whether they were relatively comparable, which they generally were.
  • We then worked with the team to estimate their velocity, at which point we created a release plan.  Since we were given the end date, we knew that we could fit thirteen 3-week sprints (a quick back-of-the-envelope calculation along the lines of the sketch below).
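To make that arithmetic concrete, here is a minimal sketch in Python of the kind of calculation behind such a release plan. The dates, the buffer, and the velocity figure are illustrative assumptions, not the project’s actual numbers.

    from datetime import date

    # Illustrative assumptions only -- not the project's real dates or velocity.
    start = date(2015, 1, 5)           # first sprint start (assumed)
    hard_deadline = date(2016, 1, 4)   # the "hard" delivery date, roughly one year out (assumed)
    sprint_length_days = 21            # 3-week sprints
    reserved_days = 90                 # assumed buffer for UAT, the release weekend and holidays

    available_days = (hard_deadline - start).days - reserved_days
    num_sprints = available_days // sprint_length_days
    print(num_sprints)                 # -> 13 whole sprints fit before the deadline

    estimated_velocity = 25            # story points per sprint (assumed)
    planned_capacity = num_sprints * estimated_velocity
    print(planned_capacity)            # compare against the rolled-up points across all epics

The point of the calculation is simply that the release plan falls out of the calendar and the team’s measured velocity, rather than wishful thinking.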

I recommended that instead of doing one large production release, we break the project up into multiple smaller production releases along the way.  We received the buy-in to work in sprints, but we were told that we could not break up the releases and would release everything to production at once.

And this was where I failed as a ScrumMaster. When I analyzed the effort, I came to the following conclusions:

  • We were going to be standing up a new production server alongside the existing production server.  Our goal was to move all of the applications over to the new server and eventually retire the old one.
  • There was quite a bit of risk with this project.  The code base was developed on the assumption that the application and database would be on the same server.  We were changing a lot (including application and database versions).  Additionally, these systems were critical to the business and the clients.
  • We had an entirely new system that we could release to, run in parallel against and not affect our current production system.

So I proposed that we prioritize the backlog and pull a system that was low risk from an impact perspective but shared as many characteristics as possible with the other systems (such as programming languages).  Once we finished all of the stories for that application, we would release it to the new production system and stop using the current production system.  This would accomplish several things:

  • It would allow us to test our release process.  Since there were a lot of new things with this system, it would be great to actually do a release and get quick feedback on anything that we had to adjust.
  • The risk of impacting our current production system was low, given that this was a completely new system.  If the release did not go well, we would gain valuable knowledge and could simply restore the previous version of the application.
  • If the application that we released had issues, we could learn from that and apply changes to the other systems.  For example, if the way we were trying to connect from the new application server to the new database was not allowed in production (but was allowed in our test environment), one of the few ways we could catch it was to actually put it in that production environment.
  • If the release went well, then we would begin reducing the load on our existing production system.  If you remember, one of the drivers for this project was the fear that the current production system could not keep up during peak times.  Moving applications off one at a time would reduce that risk and take pressure off the “hard” date.

Even after laying out these items (and several others), the organization just was not ready to break from “the way they had always done things”.  They decided to mitigate some of the risk by doing the single big-bang release over a 3-day weekend, ensuring that no one on the team would sleep or get to enjoy the weekend.

The Problems

“In software, if something is painful, do it more often.”

The main fear I could identify was that the company was not structured to do frequent releases.  Release cycles were long, the release process (to production and between the various test environments) was manual, and the business expected a long UAT (User Acceptance Testing) phase at the end instead of several smaller, incremental UATs throughout.


Problem 1: Maintenance, Merging & Rework

  • You can imagine that 13 different applications contain a lot of code.  They also most likely require changes for reasons outside of our effort.  This meant that as our project got further along, we had to constantly merge in code from other projects.  Not only did this take time away from us, but we also had the added complexity of changing code to work on the new servers and operating systems.  The code we were merging in was designed for the old servers/systems, so these were complicated merges.
  • Since we were operating in sprints, we focused on developing and testing in small cycles, with the goal of getting each story (testing flow) to done.  That code then sat in version control, and after performing a merge we needed to do additional testing, which again took time away.

Problem 2: Not Knowing What We Don’t Know

  • One of the powers of getting to “done” is that there should be fewer surprises.  When we estimate that we are 80% done with something, it is just an estimate.  Especially if we have not completed testing or other items, we don’t know if there are hidden issues that might mean we are actually only 40% done.
  • Even if we complete all of our testing, until we have gone through all of our steps and are using a system in production, we won’t know if it is truly working.  There could be many reasons for this, including our test environments not being identical to our production environment.

Problem 3: Release Planning

  • Since the company did not have release automation in place, releases and everything related to them were done manually.  This meant that as time went on, the “release plan” continued to grow and had to be constantly maintained.  One can imagine the number of mistakes in that plan, many of which were not caught because of infrequent releases.  Even a minimal script, like the sketch below, would have made the plan repeatable.
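For illustration, here is a minimal sketch of what scripting even a handful of those manual release steps could look like.  The step names and their contents are placeholders I have made up, not the client’s actual checklist; each one just prints what a real step would do.

    import sys

    # Placeholder steps that only print what a real release step would do.
    def stop_services():    print("stopping application services..."); return True
    def backup_database():  print("backing up the database..."); return True
    def deploy_build():     print("copying the new build to the new server..."); return True
    def run_smoke_tests():  print("running smoke tests..."); return True
    def start_services():   print("starting application services..."); return True

    RELEASE_STEPS = [stop_services, backup_database, deploy_build,
                     run_smoke_tests, start_services]

    def run_release():
        for step in RELEASE_STEPS:
            if not step():
                # A scripted plan fails loudly and repeatably, instead of relying on a
                # hand-maintained document that drifts between infrequent releases.
                sys.exit(f"Release step failed: {step.__name__} -- invoke the back-out plan")
        print("release complete")

    if __name__ == "__main__":
        run_release()

Even this much buys repeatability: every release runs the same steps in the same order, and a failure stops the process at a known point instead of surfacing days later.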

Problem 4: Back Out Planning

  • An important aspect of any release plan is the ability to back out the changes and revert to the pre-release state.  Although the team had planned for this, everyone “knew” that this release just had to work, so there was not much focus on making sure the system could be reverted.


The Reality

So after various bumps in the road, many of which we were able to react to and overcome as a result of using Scrum, the release weekend was finally upon the team.  They stocked up on Red Bull, coffee and Mountain Dew and planted themselves together in a “command center” (a large training room).  It had been a long road to get to this point.  The team expected to have some issues but, based on the testing and several dry-run releases to a production-like environment, expected to be able to get the release out.  Below are a few of the key things that occurred:

  • As expected, the team ended up working all 3 days that weekend to try to get the release out.  The first night some of the team left because they were dependent on other team members’ work, but the second day was an all-nighter for the team.
  • At the end of the second day there were still several issues with the system.  The team decided to continue pushing forward with the release.
  • Team members had a few hours of sleep, then continued working the third day until ~2 am.
  • The first morning “live” on the new system, I ended up having to pull all of the team together.  We took over a large training room so that we could work together and efficiently address the growing list of defects.
  • The team put in on average 80 hours the first week of release (after the long 3 day release weekend).  This included pulling in several resources that were not on the project to assist.
  • The second and third week saw less time spent, but the team was still putting in ~60 hours each week.
  • It took several months and many maintenance efforts in order to get the system stable and to close out the majority of defects.
  • In total, there were 400+ production defects from this release, many of which were critical and had to be resolved very quickly by a sleep-deprived team.
  • Thanks to a great team, the business and the clients saw minimal impact (financially and in missed SLAs), which was simply amazing given the number of highly critical defects.
  • Before the release, it looked like the project was going to come in under budget; it ended up going 30%+ over budget after all of the extra time spent following the release.


The Lessons

There were many lessons to learn from this effort.  Once the critical bugs were resolved, management began trying to figure out just what went wrong.  Of course, there were many failure points that contributed, and each person/group blamed other groups to protect themselves.  A few key lessons included the following:

  • Just about any project can operate using Scrum.  The riskier the project, the more an incremental and iterative approach makes sense.
  • Getting code to meet the Definition of Done (DoD) is one thing, but it is difficult to truly know if everything is really “Done” until the software is running in production.
  • Think outside the box.  Just because your organization has not done it before does not mean you shouldn’t try.  Releasing the new applications to production on the new server one at a time is a great example of this.
  • Releasing often reduces risk.  Not only does it ensure that you have a solid release process (and, hopefully, show the need for automation), but it also provides the feedback to truly let you know whether things are working.
  • It is essential to spend time developing a strategy for backing out code if there are issues.  The smaller the release, the easier the back out plan is.
  • Frequent releases (even if the code ships in a turned-off state) prevent the ongoing maintenance of having to merge code from other branches/streams; see the feature-toggle sketch after this list.
  • A “hard” date often doesn’t have to be a hard date.  In this case, we could have reduced the number of applications running on the previous system to hedge our risk during peak times.
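On the point about shipping code in a turned-off state, here is a minimal feature-toggle sketch.  The flag name, file name, and connection helper are hypothetical; the idea is simply that new code paths can be merged and released dark, then switched on per application once they have been verified on the new server.

    import json

    def load_flags(path="feature_flags.json"):
        """Read toggle states from a small config file shipped with each release."""
        try:
            with open(path) as f:
                return json.load(f)   # e.g. {"use_new_db_server": true}
        except FileNotFoundError:
            return {}                 # default: every new code path stays off

    FLAGS = load_flags()

    def connect(host):
        # Stand-in for a real database connection; illustration only.
        return f"connection to {host}"

    def get_connection():
        # New code ships dark and is switched on per application once verified.
        if FLAGS.get("use_new_db_server", False):
            return connect("new-db-host")    # the upgraded, split-out database server
        return connect("legacy-db-host")     # unchanged behavior while the flag is off

Releasing this way means merges happen continuously in small pieces rather than as one painful big-bang integration at the end, and turning a flag off is a far smaller back-out than reverting a whole release.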

After the project was finished and we were holding a retrospective, one manager said it best… “Even though we had a lot of issues with this effort, I truly believe operating with Scrum saved us.  If we had treated this as a waterfall project, we would have never hit our date and I fear there would have been a much larger client and financial impact.”


Being new here to the Blue Agility family, and to the Marketing Department, I’ve been itching to write our first blog post of 2014. With the Polar Vortex hitting us once again on the East Coast, I thought, what better time than this to get some writing in? Rest assured, The Vortex hasn’t slowed us down one bit (a few cancelled flights but nothing a few collaboration tools can’t remedy). With February beginning, and line of sight to all of the activities and events 2014 has in store for us, things are heating up in the Blue Agility world.

Oh, allow me to introduce myself… My name is Ashley Bailey, responsible for Marketing and Communications at Blue Agility. That is a bit of a mouthful, so please, feel free to call me Ashley. After about a year away from the tech industry, and specifically the Agile space (except for my sneaky follows on Twitter, Facebook, blogs on TechCrunch via Flipboard, and Google Alerts to stay connected), I am geeking out with enthusiasm to be ‘back.’

In 2013 I left Boulder, CO (I have spent most of my life along the Front Range in Colorado), made my way to Bar Harbor, ME for about 7 months, then on to South Florida, and just shy of a month before the New Year hit, I landed in Philadelphia, PA. Yeah, yeah, just in time for this Vortex business… But I’ll tell you what, I couldn’t be happier to be a short drive away from many of my colleagues, and Philadelphia is indeed a beautiful city.

In an effort to get a feel for how 2013 went and what excitement the coming year holds, I asked the leadership team to provide some insight by answering three questions:

  1. In your opinion, what was the best thing in regard to the Agile Industry in 2013?
  2. What do you think was the best thing in regard to Blue Agility in 2013?
  3. What is one thing that you are looking forward to related to Blue Agility or the industry in general in 2014?

As you can imagine, I received quite a few responses, as 2013 was loaded with highs for both the industry and Blue Agility, so I will attempt to summarize.

Agile Industry: 2013 Highlights

  1. Emergence of the Scaled Agile Framework™ (SAFe) as a scalable agile framework
  2. Increased acceptance amongst software delivery practitioners and businesses that new methods and frameworks, such as SAFe and DevOps, can be successfully adopted to deliver high-value software to their customers, more efficiently and effectively

Blue Agility: 2013 Highlights

  1. bluejazz goes LIVE (June 2013)
  2. bluejazz is awarded Best of Show at IBM Innovate 2013
  3. Blue Agility participates in the inaugural SAFe Program Consultant course through the Scaled Agile Academy
  4. Blue Agility becomes an SPC Trainer provider, adding SPC certification to our offering
  5. Blue Agility hires Mike Bonamassa as president
  6. Blue Mercury Consulting rebrands and refocuses as Blue Agility
  7. Blue Agility announces focus on DevOps and SAFe
  8. Agile 2013 conference and Dean Leffingwell at our booth

 

Agile 2013: Left to Right: Pratik Bengali, Dean Leffingwell, Ken France, Eyal Abukasis

 

Excited for 2014

  1. Attendance at IBM 2014 Partner World Leadership Conference (in less than 2 weeks!)
  2. Attendance at IBM Innovate 2014
  3. Attendance at Agile 2014
  4. New hires joining the Blue Agility Family (psst…check out open positions)
  5. Furthering our partnership with Scaled Agile, as a Gold Partner
  6. Furthering our long-standing partnership with IBM, as a Premier Partner
  7. Next release of bluejazz

 

bluejazz, our free, role-based, online mentor

 

As for me? As an outsider looking in on the industry for much of 2013, I think the ‘best’ was the momentum of organizations embracing DevOps concepts. From my perspective, DevOps, where people, process and tools converge, is the foundation any organization needs to continuously improve its existing products while also continuing to bring new, innovative products to market. DevOps gives organizations the breathing time, space and energy to properly gather and vet the ideas/requirements for their portfolio from the many sources of input (including their own strategic direction) and ensure they are building the right product.

And for the best in regard to Blue Agility in 2013… When I read the press release (via Flipboard on my handy iPad) last fall that Blue Mercury had decided to refocus and rebrand as Blue Agility, I nodded my head up and down (coffee in hand) and said “Yes.” Rebranding as Blue Agility to better represent the commitment to enabling enterprise agility, in conjunction with Scaled Agile, Inc. and Development and Operations (DevOps) tools, and forming a strategic partnership with Scaled Agile as a Gold Partner is a 2013 Blue Agility highlight in my book.

For 2014, I’m not sure I can choose just one. For one, the conferences stacked up this year are kick-ass. With IBM’s PartnerWorld Leadership Conference just around the corner, followed by Agile and Beyond, I can’t think of a better way to kick off the year. We then have Innovate 2014 (where bluejazz won Best of Show last year) and, of course, Agile 2014, not to mention all of the other conferences, Scrum Gatherings and local Meetups in between, each providing unique opportunities, jam-packed with compelling agendas and brilliant thought leaders.

2014 Conferences

 

In addition to conferences, I’d have to say that I am super excited to have front row seats watching organizations embrace change and take the steps to become an empowered enterprise by making an Agile transformation, leveraging the Scaled Agile Framework™ (SAFe) and DevOps solutions.

It’s going to be a good year, folks. Be part of the Blue Agility Community and join our discussions.

What do you think the best of 2013 was and what are you excited for in 2014? Drop us a line and let us know.