Archive for the ‘Uncategorized’ Category

Scrum and the Theory of Constraints – part 1

January 13, 2015

One of the tenets of scrum is that the team has all the people and resources it needs to complete its stories within the designated timebox. You won’t start a sprint unless everything needed to complete it is available. In the Theory of Constraints this is called the “complete kit”.

In the real world (especially in large organizations) the needed resources are often constrained – people get pulled onto other tasks, skills are not available when needed or planned, machines aren’t available or are broken, etc. This means that the “complete kit” assumption of scrum doesn’t hold.

Standard scrum methodology doesn’t address this issue at all, which is one of the reasons it is difficult to apply agile in organizations whose teams support multiple products/services with conflicting needs.

The way to handle this is at sprint zero (and to keep it updated as part of the backlog refinement for every sprint):

1. The team must define the “complete kit” for each user story selected from the backlog (e.g. availability of environments, business experts).

2. It is the responsibility of the scrum master to subordinate user story assignment to the available resources, and to plan ahead by assigning user stories to future sprints based on anticipated resource availability.

This is a critical responsibility: if it is not handled correctly, the project will be “starved”. In part 2 we will describe how the scrum master can do this.
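
As a minimal sketch of what this complete-kit bookkeeping could look like in practice (the story names, resource labels and the simple subset check are all illustrative assumptions, not part of standard scrum tooling):

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """A backlog item together with its 'complete kit' of required resources."""
    name: str
    complete_kit: set[str]  # e.g. {"staging environment", "billing expert"}

def assignable(stories: list[UserStory], available: set[str]) -> list[UserStory]:
    """Return only the stories whose complete kit is fully available this sprint.

    Stories with missing resources are deferred to a future sprint instead of
    being started and then starved mid-sprint.
    """
    return [s for s in stories if s.complete_kit <= available]

# Example: only the first story has its complete kit and can enter the sprint.
backlog = [
    UserStory("prepaid SMS packages", {"staging environment", "billing expert"}),
    UserStory("3D map imagery", {"staging environment", "GIS expert"}),
]
print([s.name for s in assignable(backlog, {"staging environment", "billing expert"})])
```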

Test Driven Development (TDD) vs. Business Value

December 10, 2014

Test Driven Development (TDD) is broken – or at least is implemented very badly by most organizations; see our previous post for a great explanation of why it doesn’t work. What TDD needs to become is business-oriented testing that can be used as the “definition of done” described in scrum.

Our approach is similar to the one taken in the Sprint Planning Meeting, where the Product Owner and the team negotiate which stories the team will tackle during that sprint. The difference is that rather than just estimating effort, we believe the first task is to use business value as the main vector in deciding what should be achieved during a sprint. Together the product owner and developers create a chart for each proposed user story. In that chart the product owner uses the y-axis to indicate the business value and the developers use the x-axis to indicate the associated effort. Stories are then classified into one of four categories, as shown in the figure below – Oyster, Pearl, Quick Win and White Elephant.

[Figure: the ease/value matrix – business value (y-axis) vs. effort (x-axis), divided into Oyster, Pearl, Quick Win and White Elephant quadrants]

The idea is to find the mix of stories that maximizes business value subject to the time/resource constraints. The methodology is to first focus on Pearls and trash the White Elephants – don’t do them just because they are there. Oysters are usually too difficult to complete in a single sprint and should be subdivided into user stories that can be implemented in a single sprint – i.e. Pearls, Quick Wins or White Elephants. Finally, use whatever time/resources you have left for Quick Wins – do as many as possible.
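
A rough sketch of how this classification and selection could be coded is below – the value/effort cut-offs and the sprint capacity are arbitrary assumptions that a real team would negotiate:

```python
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    value: int   # business value, set by the product owner (y-axis)
    effort: int  # effort estimate, set by the developers (x-axis)

def classify(story: Story, value_cut: int = 5, effort_cut: int = 5) -> str:
    """Place a story in one quadrant of the ease/value matrix."""
    if story.value >= value_cut:
        return "Pearl" if story.effort < effort_cut else "Oyster"
    return "Quick Win" if story.effort < effort_cut else "White Elephant"

def plan_sprint(stories: list[Story], capacity: int) -> list[Story]:
    """Pearls first, then as many Quick Wins as fit.

    Oysters are excluded (they must be subdivided first) and White
    Elephants are trashed outright.
    """
    pearls = [s for s in stories if classify(s) == "Pearl"]
    quick_wins = sorted((s for s in stories if classify(s) == "Quick Win"),
                        key=lambda s: s.effort)
    selected: list[Story] = []
    for s in pearls + quick_wins:
        if s.effort <= capacity:
            selected.append(s)
            capacity -= s.effort
    return selected
```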

An example is the failed Apple Navigator project. Apple tried to incorporate a cool new story – navigation enhanced by real-time 3D imagery. This story turned out to be the oyster that ruined the project. What Apple should have done is decompose the oyster into its components: navigation and imagery. From a business perspective navigation is a pearl, while imagery is a white elephant.

Once the stories are selected, developers should work on the implementation and testers should start working on the business acceptance tests – the tests that define the required business results of the sprint. These should be generated from the sprint’s user stories.

These business acceptance tests are the ones that matter, and they should live as long as the user story does (as regression tests). They are independent of the implementation – and therefore also independent of refactoring or other changes to the system.
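
To make this concrete, here is a sketch of what such a test might look like for a hypothetical prepaid SMS story; the BillingService below is an invented in-memory stand-in for the real system, so the example runs on its own:

```python
# A business acceptance test derived from a user story such as "a customer
# can buy a prepaid SMS package and send messages against it". It exercises
# only the public service interface, so it survives internal refactoring.

class BillingService:
    """Hypothetical in-memory stand-in for the system under test."""
    def __init__(self):
        self.balances = {}

    def create_account(self, name):
        self.balances[name] = 0
        return name

    def buy_package(self, account, sms_count):
        self.balances[account] += sms_count

    def send_sms(self, account, to, body):
        if self.balances[account] <= 0:
            raise RuntimeError("package exhausted")
        self.balances[account] -= 1

    def remaining_sms(self, account):
        return self.balances[account]

def test_prepaid_sms_package():
    billing = BillingService()
    account = billing.create_account("alice")
    billing.buy_package(account, sms_count=100)
    billing.send_sms(account, to="+15550100", body="hello")
    # Assert the business outcome, not the implementation.
    assert billing.remaining_sms(account) == 99

test_prepaid_sms_package()
```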

7 Principles of BuDD – Design by Delivery vs. Delivery by Design

November 29, 2014

Software delivery usually takes a backseat to the development process. Gather requirements, architect the system, implement, test, deliver – that is the standard order of the steps taken by IT. Some agile methodologies like TDD (Test Driven Development) try to change the order to requirements, architecture, test, implement and finally deliver. This is what we call delivery by design – delivery is always the last step.

Design by delivery is different. It can use either of those cycles – but the idea is to create software in increments that are delivered to and verified by actual users. Delivery considerations need to be part of architecture, implementation and testing, since only delivery can ensure that the system is on track from both a functional and a non-functional perspective. This in many ways encapsulates the most important part of scrum methodology: instead of each sprint producing only a user deliverable, it must deliver the software to actual users and provide value to the business.

This is possible if the first two principles of business driven delivery are applied:

  • Measure, manage and focus on the value to the business.
  • Integrate early, integrate often – and always have something to show the business.

7 Principles of BuDD – Integrate early, integrate often

November 25, 2014

The second tenet of business driven delivery (BuDD) is to integrate early and often. You also need to frequently show something to the different stakeholders – e.g. management and customers. The reason for the focus on integration is to make sure that people aren’t off developing components in parallel that just won’t work together, as in this example: http://www.theguardian.com/world/2014/may/21/french-railway-operator-sncf-orders-trains-too-big. This helps make sure you are building things right.

To ensure that you end up delivering the right thing, you need to continually demo the evolving system to the stakeholders. This helps in two ways: you get feedback about what isn’t right, and you get buy-in for the final product.
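
One cheap way to enforce the “building things right” half is a shared contract test that runs on every integration. The sketch below is an invented illustration of the SNCF story linked above – the numbers and names are made up, and with these values the check fails, which is exactly the early warning those teams never got:

```python
# Hypothetical shared contract between two teams building in parallel:
# the rolling-stock team publishes its train width, the infrastructure
# team publishes its narrowest platform clearance, and the integration
# build checks them against each other on every commit.

TRAIN_WIDTH_MM = 3050          # published by the rolling-stock team
PLATFORM_CLEARANCE_MM = 2950   # published by the infrastructure team

def check_contract() -> None:
    if TRAIN_WIDTH_MM > PLATFORM_CLEARANCE_MM:
        raise AssertionError(
            "Train is wider than the narrowest platform: integrate earlier!"
        )

check_contract()  # fails here by design, long before anything is built
```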

Software delivery solvency and subprime technical debt

November 15, 2014

Software delivery solvency is the ability of the IT department to deliver working services that fulfill a business need in a timely fashion. An example of delivery insolvency is the original Obamacare (healthcare.gov) rollout.

A key factor in delivery insolvency is that software naturally deteriorates over time because of initial kludgy shortcuts, changing and misunderstood requirements, misaligned and misunderstood technical architecture, and a changing business and technical environment. The deteriorated code becomes so complex and brittle that any change is prohibitively expensive, rendering the code difficult to maintain and almost unmodifiable. Legacy code is one example of insolvent code, but there are many examples of code becoming insolvent long before it is legacy.

Technical debt describes the cost accrued by code deterioration, and the standard way to repay that debt is refactoring. Refactoring is the act of redesigning code so that it keeps exactly the same functionality and interface but gains an improved, more robust internal structure – one that both meets the current requirements and lays the groundwork going forward. Just as in nature, this cycle of deterioration and renewal is what keeps software viable. For example, gcc (the GNU C compiler, still widely used today) dates to 1987 – almost 30 years old.
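
A toy illustration of that definition – the pricing function below is invented, but it shows the two properties that make a change a refactoring: the name, signature and behaviour stay fixed, and a regression test licenses the internal redesign:

```python
# Before: a kludgy chain of branches that must be edited for every new plan.
def price_before(plan: str) -> float:
    if plan == "basic":
        return 10.0
    elif plan == "pro":
        return 25.0
    elif plan == "enterprise":
        return 99.0
    raise ValueError(plan)

# After: same name, same signature, same behaviour. The data is now
# separated from the logic, so adding a plan no longer touches code paths.
_PRICES = {"basic": 10.0, "pro": 25.0, "enterprise": 99.0}

def price_after(plan: str) -> float:
    try:
        return _PRICES[plan]
    except KeyError:
        raise ValueError(plan) from None

# The regression test that licenses the refactoring: both versions agree.
assert all(price_before(p) == price_after(p) for p in _PRICES)
```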

In finance, some debt is riskier to the lender and more costly to the borrower – usually loans given to people who may have difficulty maintaining the repayment schedule. We believe that the equivalent type of debt is “delivery debt” – essentially problems in the code or architecture (i.e. technical debt) that affect delivery rather than functionality. Because delivery is an afterthought in most organizations (something DevOps is trying to fix), most development organizations are happy to take shortcuts with respect to the “deliverability” of their code. They leave it to Ops to sort things out – and it is not unusual for Ops to have to work around the same bugs for multiple releases. This type of subprime debt causes code to deteriorate and become insolvent much faster than normal debt.

A real-life example of technical debt causing insolvency comes from a telecommunications company that wanted to introduce prepaid SMS packages, but whose billing system architecture assumed that SMS is part of standard billing. It took months to change the architecture and code to allow for a prepaid model.
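
As a hypothetical reconstruction of the missing seam, a pluggable billing policy would have let the prepaid model slot in without months of rework – all names and numbers below are invented for illustration:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Account:
    invoice_amount: float = 0.0
    sms_rate: float = 0.10

class BillingModel(ABC):
    """The seam the original architecture lacked: SMS charging as a policy."""
    @abstractmethod
    def charge_sms(self, account: Account) -> bool: ...

class PostpaidBilling(BillingModel):
    """The old assumption: every SMS lands on the monthly invoice."""
    def charge_sms(self, account: Account) -> bool:
        account.invoice_amount += account.sms_rate
        return True

class PrepaidPackage(BillingModel):
    """The new model plugs in without touching the rest of the billing code."""
    def __init__(self, sms_count: int):
        self.remaining = sms_count

    def charge_sms(self, account: Account) -> bool:
        if self.remaining == 0:
            return False  # package exhausted: reject the message
        self.remaining -= 1
        return True

# The rest of the system calls charge_sms without caring which model is live.
acct = Account()
assert PostpaidBilling().charge_sms(acct)
assert PrepaidPackage(sms_count=1).charge_sms(acct)
```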

Agile and test driven development (TDD) can be useful software development paradigms, but there is no doubt that both can weaken software solvency and encourage the development of subprime software products.


Really Interesting Talk on the Problems with Test Driven Development

November 8, 2014

Excellent talk by Ian Cooper at InfoQ – “TDD: Where Did It All Go Wrong?”

7 Principles of Business Driven Delivery

November 3, 2014

Software development (Dev) has undergone disruptive changes in recent years, broadly labeled Agile. There have also been disruptive changes in delivery (e.g. virtualization, cloud, SaaS), also known as Ops. These changes were expected to empower IT to get closer to customers and deliver functionality faster, and they set the business expectation that IT would no longer be a bottleneck: changes initiated by business leaders would reach customers within weeks rather than years. This was at the peak of the agile and cloud hype curves.

That is the promise of agile, but in many cases the reality is very different. IT needs a holistic view focused on the true value of the end result to the business and its customers. This is a journey much more than a destination. To guide us on our journey we have come up with these 7 principles:

  1. Measure, manage and focus on the value to the business.
  2. Integrate early, integrate often – and always have something to show the business.
  3. Design by delivery vs. delivery by design.
  4. Rigorous automated testing.
  5. Repeatable, scalable, automated release processes.
  6. Continuous delivery, not continuous deployment.
  7. Plan for exception handling, rollback/rollforward and corrective and preventative operations (COPO).

LectureMonkey as local social network for learning – or “passing the peeing exam”

November 12, 2013

Click here to see the post on the LectureMonkey.com blog.

DevOps and AppOps

June 19, 2012

Businesses are betting more and more on their applications; users with always-available platforms (i.e. mobile) mean that applications really do need to work 24×7; virtualization is making the underlying infrastructure elastic and easily available; and of course agile development enables features to be developed faster and in smaller increments.

All these are putting new pressures on IT operations at the application layer. DevOps is one growing trend that is starting to address these issues. DevOps is related to AppOps, but it isn’t AppOps – nor does it replace the need for AppOps. DevOps is the process of streamlining the dev-to-ops lifecycle for applications, whereas AppOps is specifically the operational side of application management. I think we’ll be seeing a lot more companies starting to use the term AppOps to describe their IT operations at the application layer – since there doesn’t seem to be any better term around. AppOps has two separate, but related, components:

  • Application Release Operations – all the operational work needed to make sure production applications and features are deployed in a timely and robust fashion. This goes way beyond release automation, since automation is just a small part of the operations surrounding a release. Release Operations also includes all the remediation and maintenance work needed to make sure imperfect applications and unexpected events don’t cause catastrophic application failure.
  • Application Monitoring Operations – the monitoring needed to make sure that application problems are discovered as quickly as possible.

In many cases it is up to the AppOps folks to notice a problem and then (working with dev) figure out a quick workaround (CAPA) that keeps things running; it is then up to dev to come up with a longer-term fix, which gets deployed in the next release.

There has been very little focus on Application Services CAPA (Corrective Action\Preventative Action) – or maybe a better term would be COPO (Corrective Operations\Preventative Operations). I am not sure why Application Services CAPA gets so little attention – maybe because it is the unglamorous daily work of ensuring everything works correctly, as opposed to getting a new release deployed.
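
A minimal sketch of what such a monitoring-plus-corrective-operation loop might look like – the health endpoint, the restart command and the polling interval are all hypothetical placeholders, not a real product’s API:

```python
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"      # hypothetical endpoint
RESTART_CMD = ["systemctl", "restart", "myapp"]  # hypothetical service name

def healthy() -> bool:
    """Application Monitoring Operations: discover problems quickly."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def corrective_operation() -> None:
    """The COPO workaround: restart to keep things running until dev
    ships the longer-term fix in the next release."""
    subprocess.run(RESTART_CMD, check=False)

while True:
    if not healthy():
        corrective_operation()
    time.sleep(30)  # poll every 30 seconds
```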

If not Server Efficiency, then Why Cloud?

April 21, 2012

It is pretty clear from the data in my previous post that hardware efficiency isn’t the reason for moving to the cloud. In fact, there seems to be more hardware “waste” in the cloud than in data centers. So why is the move to the cloud considered inevitable?

The reason that companies are adopting cloud computing (and have already adopted its precursor – virtualization) is to enhance the speed and agility of their IT departments, not to reduce hardware costs (Speed, Agility, Not Cost Reduction, Drive Cloud by Esther Shein in Network Computing). The same is also apparent if you look at where cloud computing is being adopted first – scalable websites and test\dev.

Scalable websites tend to be built on stateless applications that don’t have rigorous consistency requirements but do need massive scale. These types of applications are a perfect fit for today’s cloud. Add to that the fact that the actual scale needed by these websites is very elastic, and you have a perfect cloud use case. Dev\Test is also a good fit – the ability to quickly build and tear down environments, where agility and elasticity take precedence over performance and hardening.

The challenge for the cloud will be traditional production and transactional applications, where the application layer has a set of requirements that make achieving cloud elasticity much harder. The cloud will only make real inroads into these environments when the agility and elasticity of the stateless cloud can be achieved for traditional applications.

