Do you have a clear idea of what success looks like as a business analyst? It’s not an easy question to answer.
Here are my thoughts for evaluating a business analyst’s success within a project, based on my own BA manifesto:
- Does the project deliver the anticipated value? Does the project meet the objectives of the business case?
- Are the stakeholders aligned around the project concept? If you asked each of them individually about what is to be achieved, would you get the same or at least consistent answers?
- Are the stakeholders satisfied that the scope being delivered is the best possible solution to the problem they are trying to solve?
- Does the implementation team deliver on the requirements without a lot of wasted effort? Did they understand what needed to be accomplished?
- Is the test team able to validate that the final application met all the requirements or do they come across areas of ambiguity that need to be addressed?
- Are there big surprises at the end of the project? Do unexpected requirements come up? Every project will experience a bit of churn toward the end as you flesh out the final details, but missing a big piece of functionality or a critical business process is a sign that the BA failed.
- Is the business happy? Do they find value in what was delivered? (A no answer can have many root causes, but a yes answer is typically the sign of good business analysis work.)
At the end of the day, business analysts add value by bringing clarity to project outcomes and getting the business to own the solution. Just because this seems difficult to measure doesn’t mean we shouldn’t measure it to the best of our abilities.
Learn How to Measure BA Performance
Adriana Beal has addressed this challenging topic in Measuring the Performance of Business Analysts, a practical guide to finding meaningful KPIs that can be measured without unnecessary overhead.
Click here to learn more about Measuring the Performance of Business Analysts
Hi Marcelo,
Great question. I think the answer starts with how the BA group measures itself and how success is evaluated. There can be conflicting expectations among these various groups, but if they have bought in on some initial measures, they may be in a position to provide more objective feedback. At the end of the day, it’s the leader of the BA group who makes the final call, whether that’s the CIO, COO, or CEO.
Laura
I agree with almost all the statements here, but running surveys or quantifying BA results requires knowing WHO will provide the data: PMs? Developers? Business? Customers? Partners? Stakeholders?
I see possible conflicts here that can make this more and more difficult to measure.
I think it might help to go back to the basic question here: how do we measure the success of the BA? The discussion thus far provides categories but no way of measuring success within those categories. If we were to apply a QA process to this, as we do (or should) to our requirements, are these ‘measures’ measurable and/or testable, and in the long term are they repeatable? For years the primary cause cited for project failure has been the requirements, so presumably there has been a way to measure when we as BAs get it wrong. Can we apply the same sort of criteria to measuring how we got it right? I am talking about a reduction in the amount of rework during development; the number of defects found in testing due to missing, incomplete, or ambiguous requirements; and the number of change requests due to similar requirements issues (i.e. not a change of requirements due to an outside issue). Admittedly this is micro-level stuff, and you need to have process and tools in place to ensure that the information can be captured, but it is measurable and can be used to track improvements (hopefully) over time. And this is regardless of the complexity of the project, the unpredictability of the stakeholders, or how unrealistic their expectations are.
Just a thought.
Hi Annie,
Thank you for your comment. It is a good one. I completely understand the perspective of picking some very quantitative measurements such as number of changes and defects found due to missing requirements and my original thinking went down that path. However, as I kept on thinking I found that it would be possible to have a business analyst who excelled at those measures but did not deliver on a successful project. You can have well-analyzed requirements that end up in the delivery of software that absolutely fails to solve a business problem!
So while I agree these are useful statistics, I don’t think they go far enough to measure the BA’s success. I am trying, albeit with a bit of a stumble, to reach beyond these internal metrics of success to how we, as business analysts, actually help our organizations create value.
Hi Craig,
That’s an interesting way to look at it and definitely keeps the BA dually focused on both business AND IT.
Below is a summary I ended up with for the podcast. I focused on project outcomes and then discussed how a successful BA could impact those outcomes. Ironically, the evening after, we also discussed this topic at the Denver IIBA meeting, so I’ll likely be publishing some notes from that meeting as well. I like your idea about pulling it all together and running a poll. I will keep that in mind.
First is alignment, which I mentioned above. But in general, are the business stakeholders aligned around the project concept? If you asked each of them individually about what is to be achieved, would you get the same answer? A BA impacts this measure by facilitating discussions about the project, obtaining multiple perspectives, and helping negotiate the final feature set or requirements.
A second measure is the overall effort expended on the project. For any given project, it’s difficult to assess the BA’s impact on this measure, as there are multiple variables. A BA impacts the effort expended by helping eliminate wasted efforts that are the result of miscommunications, unclear requirements, and the like.
A third measure is return on investment. Of course, the BA cannot be solely responsible for ROI as this is ultimately the responsibility of the business owner. But the BA makes a contribution to ROI by helping find the best possible solutions and ensuring the team is focused on solving the right problems. A key component of this is prioritization and the BA is responsible for helping the business select the right requirements to implement to achieve the best possible ROI. The BA impacts the “return” component of ROI by defining and solving the right problems and impacts the “investment” component of ROI by helping reduce the overall effort expended.
What do you guys think?
Laura
Your list is great. My view is that there are two dimensions to success and many to failure.
What you need to have done to call yourself successful;
1. Satisfy the project sponsor and stakeholders
2. Comply with architectural constraints so that you are building enterprise capability (except when there is an explicit exception to the rules)
The pathway to these two outcomes can vary.
As you guys note above, there is also personal and professional performance beyond project contexts.
Once you get enough contributions to your list maybe you should run your own survey to see what the order of importance is?
Regards
Craig
Maybe accountable is the right word. All team members need to feel accountable for the success of the project. As the BA, I may not be responsible for the QA testing, but to ensure the project is successful I must support the QA analyst appropriately by reviewing test plans and providing support as necessary.
Just because the BA is responsible for the requirements doesn’t mean the other team members aren’t equally accountable for ensuring they are complete. This attitude removes the throwing-things-over-the-fence scenario.
Great conversation, everyone. Let’s keep it going!
Thanks for all the comments. I am already feeling better prepared for tomorrow!
@Kupe Great survey topic. Can’t wait to see the results.
@Jonathan Upon further consideration, I do agree that the BA cannot be solely responsible for this particular outcome. However, a “yes” answer would be a good measurement of BA success. A “no” answer may or may not tie back to a poor BA. On a side note, I think this is what makes it so difficult to evaluate BAs: they deal with so much ambiguity that any given success or failure may or may not really be their fault.
Regarding wasted effort, what I was thinking was about effort on the “wrong” requirements. So if we ask for things one way and then change it in such a way that the work needs to be redone. There is always some of this because you learn through implementation too, but there are aspects of this that a successful BA validates for upfront. Of course, there are other causes here too, such as the requirements not being followed, lazy developers, or poor communication. So again, back to the ambiguity issues.
@Oshun That is a great point. I think many of the “failures” we hear about relate purely to budget and schedule–i.e. late or over-budget projects are easy to call out as failures. It’s much harder to measure the failure of doing exactly what was expected and it still being the wrong thing. And when this happens, the root causes are much deeper than the responsibilities of most BAs out there, unless they are operating at an enterprise level.
@Kupe 2 I think you nailed why this was a tough question for me. Answering it involves identifying what the BA is responsible for and how to measure it. And what a BA is responsible for varies from organization to organization. But I’m not sure I would hold the entire project team accountable for the analysis effort, at least not equally. The BA “owns” this effort, but if you are the BA and you are dealing with a weak team (from the business and/or IT sides), it might be difficult to do a great job because you just don’t get the input necessary. On the other hand, if you have a great team, you’d better help them analyze to their best. So maybe there is a notion of varying levels of ownership and accountability here amongst teams…
I think we may need to think about this differently. In my opinion, the entire project team is equally responsible for the success of a project. Drilling down, the entire team is responsible for the analysis effort of the project.
What we should measure is the analysis process for projects, not the person with the title business analyst. What we are after is excellent analysis which should lead to project successes.
So when JB says, “The tough part about this as a measure of BA success is, how would we know/measure the effect that the analyst had on whether or not stakeholders bought in?” in relation to Laura’s point 3, we are not as concerned about whether Joe BA ensured this, but whether the processes in place worked or did not work.
Are you digging me?!
Hi Laura –
I think J. Babcock hit on a key theme in stating that ‘interpersonal and political items’ factor into measurements of success. Although this is a separate topic from measuring the success of a BA on a specific project, how projects themselves are determined to be a success or failure (in real life; obviously in theory we have a clear definition) is a bit of a mystery, to me anyway.
I know we have industry surveys from reputable research firms (Chaos??) indicating that X% of projects are failures. But naming a project a failure seems to be taboo. If Project Sponsor X indicated that his project was a failure, at whom else would he be able to point the blame? I may be way off base here, and I am definitely straying outside the scope of this discussion, so I’ll hush now, except to say that I have not come to any conclusions here.
I liked your list, Laura, and it’s a great topic! A couple of minor points:
RE: “Are the stakeholders satisfied that the scope being delivered is the best possible solution to the problem they are trying to solve?”
I don’t know that the BA can ensure that all stakeholders think a solution is the “best possible”, but we can certainly try to ensure that it is a satisfactory and mutually beneficial solution given the known set of constraints. The tough part about this as a measure of BA success is, how would we know/measure the effect that the analyst had on whether or not stakeholders bought in? There are other factors – not the least of which being interpersonal and political – that weigh-in here.
RE:”Does the implementation team deliver on the requirements without a lot of wasted effort?”
You’d have to be very precise on the metrics for “wasted effort.” There are lots of potential causes for wasted implementation team effort, of which the BA is only one. I think you might have covered this area well enough with points 5 & 6 about ambiguous and missing requirements.
Oops! Lunch break is over! Gotta cut this short, but I have some other ideas for potential metrics, too. I’ll give them some more thought and will check back in with them later.
Laura,
Sounds like you have nailed the key measurements. Part of the angst we have about measurements is that many of them are subjective, like whether the business is happy. As you said, the “no” answers need more analysis before you can find the root cause of the unhappiness.
In this vein, we just launched a survey to ask folks in the industry how they are measuring their BA practice. If interested, you can fill out the survey here: http://www.surveymonkey.com/s.aspx?sm=NdSxgO51WlIdcAe48SxN4Q_3d_3d.
Good luck with your interview!