There have been a number of sessions about hybrid reporting at the House of PMO – all about the challenge PMOs face with a mixed portfolio of projects, some delivered using methods like Scrum and others using more traditional methods like waterfall. You can see a couple of the sessions here and here.
Why is “hybrid reporting” an issue at all?
Reporting is arguably the most common PMO responsibility. After all these years, it should be a smooth process – it is unsettling when it just isn’t. In a hybrid Portfolio, you can end up with approaches to delivery that operate in very different ways.
This variety puts a strain on reporting, as square pegs get hammered into round holes. The set of Portfolio reports becomes fragmented, and there is no reliable single view at the Portfolio level.
- Stakeholders get nervous: what to act on? when? why?
- Delivery managers get angry at being asked to do things that make no sense to them
If you still think that this is a non-issue, please consider this old quote from H.L. Mencken: “For every complex problem, there is an answer that is clear, simple and wrong.”
Some of Your Questions
The following are examples of questions that were logged at a recent House of PMO event on Hybrid Delivery. The reporting aspects of the topic seemed problematic for the PMO people in attendance. As an experiment, I will give quick answers to each question. I have in mind common organisational contexts I have experienced. My hunch is that some of the answers will resonate with you, while in some cases, you will feel that the answer has strange bits mixed in, which you suspect you can safely ignore.
Is it worth reporting on waterfall and agile projects in the same report?
Yes, definitely, because it is a Portfolio-level report, and all change initiatives need to be considered in the round. That includes non-projects, but not BAU (business-as-usual) work.
What three key items or so would you recommend we report on, for both waterfall and agile?
- The trend in Lead time
- Ranked list of interfaces/handovers/dependencies that cause the most delays
- The trend in the level of satisfaction among the sponsors of the change initiatives
What can waterfall learn from agile and vice-versa?
- ‘Waterfall’ can learn from agile reporting’s emphasis on the immediate application of lessons, and the focus on the early delivery of value
- ‘Agile’ can learn from waterfall reporting’s active use of RAID disciplines to manage constraints outside the team’s area of authority
What should we avoid doing?
Forcing one-size-fits-all reporting based on either extreme. Portfolio-level reporting needs to focus on trends for the whole, not on the success of individual change initiatives.
What are the key challenges with hybrid reporting for the report recipients?
Stakeholders will have developed favourite indicators over the years: things they look for in reports to decide whether they need to act, how, and what can safely be ignored. If a hybrid portfolio is relatively new in the organisation, it is quite likely that senior stakeholders will be looking at the wrong things. Uniformity of change initiatives feeds bad habits, such as micro-management, which become too hard to sustain once progress is abstracted to the Portfolio level. There may be a need for some coaching on what to look for in the new reports to influence things at the macro level.
Is there a different way of thinking about this problem/challenge?
Yes, yes, most definitely. That’s what unlocks the other questions. In fact, this whole article is an answer to this question.
What we aim to discuss in this article
The practicalities of hybrid reporting are no more and no less complicated than any change in ways of working. However, the suggestions won’t make sense, and obstacles will seem intractable, unless you can first see the wood without being distracted by the trees – in particular, by one giant tree that blocks out the sun. So, we’ll start there.
The practicalities will be manageable after that first step, although we haven’t got space to cover the ins and outs. Does it matter? No. Because the ins and outs are totally dependent on your context. You will have started the journey, and I’m happy to travel with you if you wish. You’ll be fine either way.
Figure 1: A generic portfolio-level Delivery Pipeline
Part 1: The Unseen
“This is Water”: that’s the title of David Foster Wallace’s address to the graduating class of Kenyon College (a liberal arts college in Ohio) in 2005. The extract I’ve linked to here is under 10 minutes, so you could watch it now, and it may help you look past things that don’t get questioned but need to be. The whole address is only about 23 minutes, so it is worth listening to in full someday, or reading as a very short book. Once you look for it, you will realise that it has struck a chord with thousands of busy people like us. Or, choose not to watch it.
What we need to be prepared to do is accept at least the possibility that projects (of any kind) are not the only way to deliver change in organisations. That it is possible to deliver major change, predictably and safely, using approaches that share almost nothing with projects in any part of their end-to-end lifecycle.
I’m not talking about project sandwiches with a different flavour of filling in the software development phase, the so-called “agile projects”. An oxymoron if I ever heard one. I’m talking about ways of delivering change initiatives that are not projects at all.
This is difficult, I accept, because what is all around, what is given, is not questioned and it becomes unseen, assumed.
To be clear, I’m not saying that “projects are bad” or that they shouldn’t be used. Project Management has wide applicability across a range of contexts, and it is here to stay and evolve.
20th Century Project Management became the accepted and only way of managing change in organisations. It was well suited to the explosion in automation about 70 years ago, as large corporations became multi-national, with their Divisions, Units, Regions and the whole hierarchical MBA schtick. It can work very well, 20th Century Project Management. It’s a wonderful hammer. Suddenly, everything looks like a nail. Until it doesn’t.
Hybrid Portfolios remind you of the natural rich variety of change, but at first, this reminder is nothing but a vague niggle. Easy to dismiss: just use the hammer to get that peg in the hole.
You can’t put your finger on it because you would first have to see the assumed, the unseen: that “project” is one more way, not the unquestionable one true way of all Change. You must put it in its place. Let me take you on a detour that shows project management excellently matched to its change context.
In his ground-breaking 2020 book “Sooner, Safer, Happier”, Jon Smart tells the story of two big projects. In the context of this brief article, it’s a long story, but it may help to make the point a little more concrete (pun intended):
“In 1992 the representatives of China’s National People’s Congress voted to build a hydroelectric dam at the site of the Three Gorges on the Yangtze River. [..] Ten years after the vote [..] the Yangtze flowed again, filling the reservoir.
During the construction, the Chinese government relocated more than 1.3 million people and over a thousand towns and villages were flooded. By the time the dam opened, construction had cost around $24 billion and, at its peak, employed over 26,000 workers. The dam reached as high as 185 meters, and stretched more than two kilometers across the river. With twenty-six turbines, it could generate twenty times the power of the Hoover Dam. It now has thirty-two turbines and is the world’s most powerful dam. [..]
While the Chinese government was building a giant dam across its most powerful river, the British government was also undertaking an ambitious project of its own: it was trying to computerise post offices so that they could improve benefits payments. The seventeen million people who then collected benefits would be given special “swipe cards”. The system would reduce fraud, lower costs, and be more convenient for both government and claimants.
The card was announced in 1996. The IT project was run by the Department of Social Security (DSS) and by Post Office Counters. Pathway, a subsidiary of International Computers Limited (ICL), won the contract to develop and install the technology. By the time the project was cancelled three years later in 1999, Post Office Counters had lost £571 million, ICL wrote off £180 million, and the DSS had laid out about £127 million. Because the system was supposed to have saved £100 million in fraudulent claims, which didn’t happen, the total cost to the taxpayer of the failed project was put at about £1 billion.
The cancellation of the post office benefits card project was followed by the publication of a report later that year that listed twenty-five government IT projects that had ‘resulted in delay, confusion, and inconvenience to the citizen and, in many cases, poor value for money for the taxpayer’. The report goes on to say that ‘for more than two decades, implementing IT systems successfully has proved difficult’ and that ‘problems continue to occur in areas where recommendations have been made in the past’. [..]
That doesn’t necessarily mean that the Chinese government is better at building things than the [UK government]. Building a dam is knowable. There are more than 57,000 large dams worldwide. China is the most dammed country in the world, with more than 23,000 large dams. Dam-building requires expertise. And having built concrete structures to hold back water 22,999 times before, those building it know what to expect, including what problems or challenges might occur. They might not be able to avoid every problem, but they know where and why delays are likely to occur. There are known-unknowns; people know what they don’t know, due to having performed this activity previously many times.
Digitising benefit payments for seventeen million people and all post offices in the UK had not been done 57,000 times before. It had never been done before. [..]
With this work, which has never been done before, people don’t know what they don’t know. There are unknown-unknowns. [They] tried to force a deterministic-way-of-working peg into an emergent-domain-of-work hole, but that does not magically work. [..]
Instead, it is necessary to optimise for early and often learning in a real environment with real customers or consumers. This lowers the risk of delivery, generates value earlier, enables pivoting to maximise value, and locks in progress as you go. The best part is that, unlike pouring concrete, which sets, with knowledge-based products and services, such as software, this way of working is easy to do. Actually, it’s the easiest to do.”
We, as PMO professionals, beat ourselves up about unsuccessful projects. We read and believe that there are errors of execution to fix and training to be had.
Instead, please consider the possibility that, in some cases, the prescription is for the wrong disease. No matter how well we execute, the patient will only improve by sheer luck.
This is the first point; this is how you see the wood for the trees: Project Management is one way of managing change, not the way of managing change. There are others. That’s why we have hybrid delivery portfolios – they have always been there, whether we admit it or not. It’s not simply a case of a new ‘waterfall’ vs. ‘agile’ mismatch; there is a whole zoo of approaches to the Delivery of Change out there. Quite rightly.
I’m very happy to defend this point of view vigorously. However, my only aim here is to get you to suspend disbelief for long enough that you can appreciate the recommendations for hybrid reporting.
The Predictive-to-Adaptive Continuum
- Predictive approaches provide a linear, specific development plan structured around producing a pre-determined set of deliverables with a contracted end result within a specific timeframe
- Adaptive approaches involve breaking a major business outcome into small chunks of value over a flexible timeline (roadmap) to allow the delivery of the most value up front
At a superficial level, recognising this wide continuum allows us to understand the differences between approaches lumped under “traditional” or “waterfall” and approaches lumped under “agile”. After decades of fundamentalism about both by their respective tribes of supporters, the names are stale, and they confuse rather than enlighten.
We can understand the differences and why certain types of approaches work better on certain problems than others by looking at those approaches as a continuum, with many intermediate options placed at various points away from the extremes of totally predictive and fully adaptive.
Their placement in the continuum can be understood by highlighting the risk mitigation strategies that they employ and the different central ideas about how change works that they assume to be true. When you think about it, the way they handle risk is what makes them look and be different.
At a deeper level, we would get a fuller understanding of the predictive/adaptive continuum if we applied sense-making techniques to understand as far as we can the nature of the context in which the change is meant to take place. That would take us on another detour via frameworks like Cynefin. Absolutely worth it and worth doing well and slowly. So, not today.
Part 2: What to Do
On to the second point and the core of this article. The way out of the conundrum of how to report different kinds of change initiatives in a single consistent way is to focus your dashboards and metrics on tracking “work items”. What do I mean by “work items”?
Figure 2: Work Items
Put simply: a work item is a deliverable that brings benefit to a stakeholder. The whole deliverable. In the hands of the stakeholder. Something that can be used to generate some form of benefit for the user. It can also be a contribution that reduces risk, including learning, as long as it can be evidenced. A few examples of work items:
- Predictive examples:
- A sub-deliverable picked from the WBS of a baselined project, such as a Low-Level Design, a signed Supplier Contract, or Version 1 of the Service Model
- These are delivered to internal users, not customers, to enable other siloed teams to add value to the overall project, which delivers customer value only at the end
- Adaptive examples:
- A Minimum Viable Product, a set of features in Private Beta, Release of a new UI
- These qualify if they are released to real customers for feedback
By focusing on the lower level of “work items” rather than the level of authorised change initiatives, we can disregard how the change initiative is being managed. Remember, we are not trying to measure compliance with standards (proxy metrics); we are trying to measure delivery directly.
Broadly speaking, there are two types of dashboards or metric sets, and they report effectively no matter what approach to delivery is used to package and govern the underlying work items. This is because now you are focusing on what really matters to stakeholders and not on the mechanics of delivery, which are very important to those doing the delivery, but, frankly, desperately boring to stakeholders. Unfair? Life is a beach, and then you surf, or something like that.
The two types of dashboards are (a minimal sketch of a Type 2 roll-up follows the list):
- Type 1: Flow of Value
- measure individual work items
- focus on the aggregate behaviour, how smoothly or otherwise they flow through the delivery pipeline, i.e., your delivery process
- these are problem-solving, action-oriented dashboards showing how change flows through your delivery pipeline, where the bottlenecks are, heatmaps of risk to outcomes, lead times, etc.
- Type 2: Status Snapshots
- measure collections of tagged work items
- these collections can be structured like traditional reports and dashboards
- Plans on a Page (PoaPs), etc.
- these fit in with existing governance for stakeholder groups that deal in predictive programmes and projects
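To make Type 2 a little more concrete, here is a minimal sketch in Python of rolling tagged work items up into a per-initiative status snapshot. The initiative names, statuses, and figures are invented for illustration; substitute your own tags and tooling.

```python
from collections import defaultdict

# Illustrative register extract: each work item is tagged with its parent
# initiative and a simple status
work_items = [
    {"id": "WI-001", "initiative": "ERP upgrade",   "status": "blocked"},
    {"id": "WI-002", "initiative": "ERP upgrade",   "status": "done"},
    {"id": "WI-003", "initiative": "Claims portal", "status": "in progress"},
    {"id": "WI-004", "initiative": "Claims portal", "status": "done"},
]

# Roll tagged work items up into a per-initiative snapshot that can be
# slotted into existing governance reports (e.g., a Plan on a Page)
snapshot = defaultdict(lambda: defaultdict(int))
for item in work_items:
    snapshot[item["initiative"]][item["status"]] += 1

for initiative, counts in snapshot.items():
    print(initiative, dict(counts))
```

The point of the roll-up is that the snapshot looks the same to stakeholders regardless of how each underlying initiative is being delivered.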
Remember: at the Portfolio level, a PMO adds value by analysing the performance of the Portfolio and providing decision-support information to senior stakeholders. It’s not about marking the homework of delivery managers. That has only marginal value for senior stakeholders. Obviously, if in your context you need to mark homework, then do it. Just as long as you realise that, in that case, you have a bigger problem than hybrid reporting.
Figure 3: Types of Dashboards for Hybrid Reporting
How to Do It
Start with a good old Project Register kind of spreadsheet, if you have no tooling (a minimal sketch of the structure follows the list below).
- Add columns that let you record time in/out of each major step in your delivery pipeline (a form of value stream), plus columns that let you flag a stuck work item and the reason why it is stuck
- Add rows for the work items; these are WBS major components in projects, features or user stories from product backlogs, capabilities from the scope of improvement initiatives, and so on, depending on the variety of change initiative types present in your Portfolio
- Add tags for the work items, such as
- final outcome (benefits),
- initiative that they are part of (and level within it),
- business change target area
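As an illustration, here is a minimal sketch of such a register in Python, assuming a simple four-step pipeline; the steps, tags, and example rows are all invented, and a real register would live in your spreadsheet or tooling.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative pipeline steps; substitute the major steps of your own
# delivery pipeline (a form of value stream)
PIPELINE_STEPS = ["Backlog", "Design", "Build", "Done"]

@dataclass
class WorkItem:
    item_id: str
    title: str
    # date the item entered each step; None if not reached yet
    entered: dict = field(
        default_factory=lambda: {step: None for step in PIPELINE_STEPS})
    blocked: bool = False
    blocked_reason: str = ""
    # tags: final outcome (benefit), parent initiative, business change area
    outcome: str = ""
    initiative: str = ""
    target_area: str = ""

# Example rows: a WBS sub-deliverable and a product-backlog feature
register = [
    WorkItem("WI-001", "Signed supplier contract",
             entered={"Backlog": date(2024, 1, 8), "Design": date(2024, 2, 1),
                      "Build": None, "Done": None},
             blocked=True, blocked_reason="Awaiting legal review",
             outcome="Cost reduction", initiative="ERP upgrade",
             target_area="Procurement"),
    WorkItem("WI-002", "Private beta of new claims UI",
             entered={"Backlog": date(2024, 1, 15), "Design": date(2024, 1, 22),
                      "Build": date(2024, 2, 5), "Done": date(2024, 3, 1)},
             outcome="Customer satisfaction", initiative="Claims portal",
             target_area="Customer service"),
]
```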
Later:
- Get a free or paid Kanban tool, with built-in metrics, reflecting the steps in your delivery pipeline; if you already have Jira, use Jira, but it’s overkill at this level. The one advantage of using Jira would be that you could grab very granular flow metrics to be aggregated at the Portfolio level. Still overkill…
- Build your own, with tools available, e.g., MS Dynamics, with Power Apps, PowerBI and Power Automate if you are a Microsoft shop; there are equivalents for other shops
- The possibilities are really varied, and they depend totally on your organisational context
Here are some example metrics (a worked calculation sketch follows the list), but remember: context is everything – what works for you probably doesn’t work for someone else.
- Cumulative Flow Diagram: one to automate as much as possible, but it can be started with a simple spreadsheet. One of the richest sources of Portfolio metrics that make a difference to the value added by the delivery of change.
- Work in Progress (WIP): number of work items that have started but not completed. Reducing WIP speeds up delivery (and, in practice, tends to improve quality). See Little’s Law: average WIP = throughput × average lead time, so in a stable system at a given throughput, cutting average WIP from 40 to 20 items halves the average lead time (e.g., from 4 weeks to 2 at 10 items per week)
- Lead time: for every work item, the time from when the item is requested by the customer to when the value is received by the customer. It indicates how predictably your organisation delivers against the promises made in SLAs, class-of-service definitions, or other expectations of delivery agreed with your stakeholders
- The trend in Lead time across the delivery pipeline
- Active time and Waiting time are subsidiary metrics of Lead time. Introduce them early if your context and tooling allow, as they give finer granularity for focusing improvement efforts
- Blocked queue: a tally of work items in progress that are waiting for issues or dependencies to be resolved. At the system level, this gives us a rough sense of how much the system could be optimised. At the more immediate level of Portfolio efficiency, it provides targets to improve the effectiveness of escalations to senior stakeholders
- Ranked list of interfaces/handovers/dependencies that cause the most delays
- Throughput: number of work items delivered in each time period. This is a good example of a metric that can evolve as the delivery organisation matures. The raw count is fine to start with. Later, add some idea of the relative size of work items across the portfolio. The vision is to report the equivalent value delivered, but that requires some sophistication.
- The trend in the level of satisfaction among the sponsors of the change initiatives
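As an illustration, here is a minimal sketch of computing WIP, average Lead time, and Throughput from a register of work items, assuming each item records a request date and a delivered date. The data is invented; the point is that the same handful of lines works whatever delivery approach produced the items.

```python
from datetime import date

# Illustrative register extract: request date and the date value reached
# the customer (None if still in progress)
items = [
    {"id": "WI-001", "start": date(2024, 1, 8),  "end": None},
    {"id": "WI-002", "start": date(2024, 1, 15), "end": date(2024, 3, 1)},
    {"id": "WI-003", "start": date(2024, 2, 1),  "end": date(2024, 2, 20)},
]
as_of = date(2024, 3, 31)

# WIP: started but not completed as of the reporting date
wip = sum(1 for i in items
          if i["start"] <= as_of and (i["end"] is None or i["end"] > as_of))

# Lead time (days) for completed items: request to value received
lead_times = [(i["end"] - i["start"]).days for i in items if i["end"]]
avg_lead_time = sum(lead_times) / len(lead_times)

# Throughput: items completed in the reporting period (March 2024 here)
throughput = sum(1 for i in items
                 if i["end"] and date(2024, 3, 1) <= i["end"] <= as_of)

# Little's Law sanity check: average WIP ≈ throughput rate × average lead time
print(f"WIP={wip}, avg lead time={avg_lead_time:.1f} days, "
      f"throughput={throughput} items/period")
```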
Next Step, Maturing the Metrics
Examples of more powerful Flow metrics include the following (a small calculation sketch follows the list):
- Flow distribution: after classifying work items into types, show the proportion of each type in a given delivery pipeline over time. What counts as a ‘good’ proportion depends on the organisation and what it intends to do over a span of time that reflects its strategy. It’s a different topic, but the proportions should be guided by OKRs, alignment to strategic goals, etc.
- Flow time: the duration that it takes for a work item to go from being accepted (Demand Management) to exiting the delivery pipeline as value to the customer – it includes active and waiting time. (Confusingly, this is called “Lead Time” in some good and widespread frameworks)
- Flow efficiency: to pinpoint delivery pipelines with excessive waiting times – calculated as Flow Efficiency = Total Active Time / Flow Time
- Patterns and anti-patterns, which are not so much a metric but the gradual formalising of insights, perhaps derived from lessons learned and metrics, to feed back to the delivery teams and to senior stakeholders, showing what seems to give benefit and what seems to be mostly cost (waste)
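A small calculation sketch for Flow efficiency and Flow distribution, assuming each work item records its type, active days, and total flow days; all names and figures are illustrative.

```python
from collections import Counter

# Illustrative items with type, total active days, and total flow days
items = [
    {"id": "WI-001", "type": "feature", "active_days": 6, "flow_days": 30},
    {"id": "WI-002", "type": "defect",  "active_days": 2, "flow_days": 4},
    {"id": "WI-003", "type": "risk",    "active_days": 3, "flow_days": 21},
]

# Flow efficiency = total active time / flow time; low values point at
# delivery pipelines with excessive waiting (queues, handovers)
for i in items:
    i["flow_efficiency"] = i["active_days"] / i["flow_days"]
avg_efficiency = sum(i["flow_efficiency"] for i in items) / len(items)
print(f"Average flow efficiency: {avg_efficiency:.0%}")

# Flow distribution: proportion of each work-item type in the pipeline
dist = Counter(i["type"] for i in items)
total = sum(dist.values())
for item_type, count in dist.items():
    print(f"{item_type}: {count / total:.0%}")
```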
There are many more metrics that are agnostic of the approach to development, but the key thing to remember is that to decide what is right and what is needed, you must look at your organisation, its maturity, its challenges, and its goals. The right mix is the one that suits you – there is no such thing as best practice. Well, not outside of fantasy tales anyway, where a magic superpower works everywhere, no matter what.
Risk Management is worth a special mention: it is a discipline that works well across all approaches in the sense of risk to outcomes. Do use it in the mix of metrics and dashboards.
Keep in mind that all approaches to the delivery of change are, in large part, different ways of managing the uncertainty inherent in change. Different types of change have different types of “uncertainty nets”, and therefore different types of risk mitigation suit some better than others. One size does not fit all. A hammer is not the only tool.
Stakeholders – Your Customers, Your Biggest Challenge
Typically, whether a Portfolio is hybrid or not, the purpose of analysing it is the same:
You must enable stakeholders to answer the question: are we doing the right things, and how much better can we do them with what we have?
As you work with your stakeholders to get the greatest possible benefit to them from your reporting, it’s worth keeping a few things in mind:
- If your reports don’t help them make the decisions they need to make, the reports are no good. It doesn’t matter how objectively good they are or that other organisations rave about them
- If they don’t like your dashboards, they are wasted. I do mean “like”, as in an emotional reaction
- Ask: what do your stakeholders/customers need to do? Why? Specifically, what decisions are they making based on Portfolio data?
You may also have to coach them to consider new questions to ask of the data that will support their decision-making. You can give examples based on your knowledge of the pain points at their level.
As with so many things PMO, success in hybrid reporting comes down to how mutually rewarding and productive your relationships with your stakeholders are and, to a lesser extent, to metrics and tooling.
Hello Lain,
Thank you for posting this. It is a rare pleasure to find a lengthy, well thought through piece of explanatory writing on a topic you have obviously reflected on in depth. As opposed to so much of what passes for wisdom on the internet…
There is a lot of food for thought here but I have a couple of questions if I may:
1. Your top three things for reporting – and I realise three is restrictive – don’t mention cost. Most senior stakeholders I have worked with have been keen to know and control cost. Not the most uplifting topic but always important. Was there a particular reason you didn’t include it or was it just that you were picking three from many?
2. Under ‘predictive v adaptive continuum’, have I understood correctly that predictive involves knowing your deliverables up front whereas adaptive is more of a journey of discovery but both involve having a business goal (increased revenue, better customer satisfaction scores, efficiency savings or whatever)? If so, in an adaptive model, do you think reporting on progress towards the goal is what is needed, and would you advocate interim targets such as get 25% of the way there after 6 months or something? This is something we are trialling where I work so I am interested in your experience of whether it works or not. Assuming I interpreted your article correctly.
Thanks,
Phil
Thank you very much for your kind feedback. Regarding your questions, here are my initial thoughts:
1) Top 3 things
It is almost as simple as you have guessed: if I can have only three, then in general (not always), those would be my top three. I would consider others depending on the context.
In the public sector context, cost control becomes more important: it’s not just a matter of private profit & loss where a level of justifiable “gambling” can be a good thing, but also a matter of compliance: you most definitely cannot be seen to have “gambled” with public funds.
That would be a good argument for pushing cost up to the top three. On the other hand, these are my arguments against:
• Cost is the ultimate lagging indicator. Throughput accounting is more progressive, but that is another topic
• Keep in mind the question of levels: Delivery Managers absolutely must concern themselves with tracking and reporting cost, but we are talking about the Portfolio level
• The final version of cost in the Portfolio, the trusted version of record, is usually the domain of Finance (or equivalent body in the organisation). And if the Portfolio Office does track costs and its version diverges from Finance, then the Finance version is the true version. Always. Reporting costs is not something that only the Portfolio Office can do; therefore (being brutal), tracking costs would be mostly classed as waste.
• The reason senior stakeholders want to know and control cost is the fear of variance. Therefore, we must be prepared to give some insight into the reasons for variance. I’m generalising wildly here, apologies, but it seems to me that in most contexts, variance is driven by delays and their causes. When the variance is due to suppliers and pricing, that is usually a matter for Procurement and covered by contracts. Therefore, prioritising the metrics that have to do with time and flow provides the more actionable answers.
When context dictates that cost is most important, at Portfolio level, I would do this instead: bring the Finance Partner on board, or a suitable management accountant, so that they can be an “associate member” of the Portfolio team when reporting. Let them lead with the presentation of cost status and speak the right language to senior stakeholders. Use the metrics that only Portfolio Office produces to expand the discussion around the most beneficial corrective action, if any are needed.
2) Progress towards an (adaptive) goal
This is an excellent question, thanks, and yes: you have interpreted my article correctly.
The answer is not easy to understand if all our reference points are predictive; it may need a whole other article (gasp in horror…), but here are the basic points:
• “When a measure becomes a target, it ceases to be a good measure”. This wording is known as Goodhart’s Law after Charles Goodhart, a British economist and one-time member of the Bank of England’s Monetary Policy Committee. He actually wrote, “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes”, in a 1975 paper on monetary policy. Later writers generalised the concept to other fields. In the Portfolio context, it is a reminder to be wary of goal measures that are optimisation targets because people can manipulate them to meet the target.
• OKRs (Objectives and Key Results) are one way of avoiding the single metric/target associated with Management by Objectives (MBO) and KPIs. When used well, they work by creating a loose relationship between the business goal (such as those in your examples), which would be the Objective in OKR, and the metrics that underpin it – the Key Results in OKR. Key Results are not “milestones”; they are attributes of the goal with an associated quantifiable measure. There’s much more to OKRs, but I’ll leave it at that for now.
• Another way of looking at the “looseness” that is so essential to navigating adaptive environments is to compare one of my favourite tools, the WBS, to the emergent backlog.
• A WBS works by stepwise decomposition of the goal into outputs, deliverables, sub-deliverables, and the work required to build them. At every step, the components comprehensively define the level above. Nothing more and nothing less is required to build that upper level. Therefore, as time passes and the work is done, it is logical to quote a percentage complete based on the work done, which is a true reflection of progress toward any given interim target. But there is an assumption buried in there that had better be true: that the steps to the goal can be predicted perfectly. The shakier the justification for the assumption, the shakier the report of completion.
• An emergent backlog sets expectations only for what can be known at a given point in time; it doesn’t attempt to predict a path to the goal. Each round of delivery is, in effect, an experiment. At the end of that experiment, you have something of value, or you don’t, but either way you now know more and can go another (deeper) round. At some point, you find that you have unexpectedly reached the goal earlier than your initial gut feel indicated. Or you find that so much new work emerges at every round that it is best to abandon the initiative and minimise losses. Or you find that, weaving here and there, you arrive at your goal more or less as expectations indicated. The approach responds to complexity and emergence by being adaptive. Very nice, but you can’t calculate a percentage complete based on work done so far.
The answer to your first sub-question (“in an adaptive model, do you think reporting on progress towards the goal is what is needed”) is: Yes. However, it needs to be clear that this progress cannot be linear; it is not a cumulative quantity but a trend over time.
The answer to your second sub-question (“would you advocate interim targets such as get 25% of the way after 6 months or something”) is: No. When the context demands adaptive approaches, it means that you cannot use predictive tools. Well, you can, and people do, but the numbers may be meaningless. The way out is to use well-crafted OKRs, which can work equally well for predictive and adaptive work.
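As an illustration of reporting adaptive progress as a trend rather than a percent-complete, here is a minimal, hypothetical OKR sketch in Python; the objective, measures, and figures are all invented.

```python
# The Objective is the business goal; each Key Result is a measurable
# attribute of the goal, tracked as a trend over time rather than a
# cumulative percent-complete
okr = {
    "objective": "Make benefit claims effortless for customers",
    "key_results": [
        {"measure": "Median days to process a claim",
         "baseline": 14, "target": 5, "trend": [14, 12, 11, 9]},
        {"measure": "Customer satisfaction (out of 10)",
         "baseline": 6.1, "target": 8.0, "trend": [6.1, 6.4, 6.3, 7.0]},
    ],
}

# Report the direction of travel against the target, not a percent-complete
for kr in okr["key_results"]:
    latest, first = kr["trend"][-1], kr["trend"][0]
    moving_closer = abs(latest - kr["target"]) < abs(first - kr["target"])
    status = "improving" if moving_closer else "flat or worsening"
    print(f'{kr["measure"]}: latest {latest}, target {kr["target"]} ({status})')
```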
I hope that goes a little way to giving you an answer, Phil. I would be happy to continue the conversation.