Archive for the ‘Success Criteria’ Category

Success in IT projects: A matter of definition? (Thomas & Fernández, in press)

Thursday, October 9th, 2008


Thomas, Graeme; Fernández, Walter: Success in IT projects – A matter of definition?; in: International Journal of Project Management, in press (2008).
http://dx.doi.org/10.1016/j.ijproman.2008.06.003

A lot of IT projects are not successful. One of the problems seems to be that success is a tricky concept. The GAO, for instance, considers (IT) projects challenged once they are about to exceed their budget and/or schedule by 10%. Measured against that yardstick, most projects fail.
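The GAO criterion is just a threshold rule; a minimal toy sketch in Python (my own reading of it, with invented numbers) could look like this:

    # Toy sketch (my reading of a GAO-style criterion, not an official
    # method): a project counts as challenged once budget or schedule
    # overrun exceeds the 10% threshold.
    def is_challenged(planned, actual, threshold=0.10):
        """Flag an overrun of more than `threshold` relative to the plan."""
        return (actual - planned) / planned > threshold

    # 115k actuals against a 100k plan is a 15% overrun -> challenged;
    # 13 months against a 12-month plan is only 8.3% -> not (yet) challenged.
    print(is_challenged(planned=100_000, actual=115_000))  # True
    print(is_challenged(planned=12, actual=13))            # False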

Several best-practice studies have pointed out that success criteria are best agreed upon upfront, that they should be measured consistently (which includes a proper baseline), and that the measurement results must actually be used.

Thomas & Fernández collect examples for IT project success criteria in three broad categories: project management, technical, and business criteria.

Project Management Criteria

  • On-time
  • On-budget
  • Sponsor satisfaction
  • Steering group satisfaction
  • Project team satisfaction
  • Customer/user satisfaction
  • Stakeholder satisfaction

Technical Criteria

  • Customer satisfaction
  • Stakeholder satisfaction
  • System implementation
  • Meeting requirements
  • System quality
  • System use

Business Criteria

  • Business continuity
  • Meeting business objectives
  • Delivery of benefits

A multicriteria satisfaction analysis approach in the assessment of operational programmes (Ipsilandis et al., 2008)

Monday, September 22nd, 2008


Ipsilandis, Pandelis G.; Samaras, George; Mplanas, Nikolaos: A multicriteria satisfaction analysis approach in the assessment of operational programmes; in: International Journal of Project Management, Vol. 26 (2008), No. 6, pp. 601-611.
http://dx.doi.org/10.1016/j.ijproman.2007.09.003

Satisfaction measurement was one of my big things for a long time, back when I was still working in market research. I still believe in the managerial power of satisfaction measurement, although you might not want to run it on a rolling eight-week cycle. Well, that’s another story, and one of those projects where a lot of data is gathered for no specific decision-making purpose, so the data sees only limited use.

Anyway, Ipsilandis et al. design a tool to measure project/programme satisfaction for European Union programmes. First they give a short overview (for the uninitiated) of the EU’s chain of action. At the top of that chain sit the national/European policies, which become operational programmes (by agreement between the EU and national bodies). Programmes consist of several main lines of action called axes, which are also understood as strategic priorities. The axes are further subdivided into measures, which are groups of similar projects or sub-programmes. The measures themselves contain the single projects, where the real action takes place and outputs, results, and impacts are achieved. [I always thought that just having a single programme management body sitting on top of projects can lead to questionable overhead.]

Ipsilandis et al. further identify the main stakeholders along this chain from policies to projects. The five stakeholders are: policy-making bodies, the programme management authority, financial beneficiaries, project organisations, and immediate beneficiaries. The authors go on to identify the objectives for each of these stakeholder groups. Then Ipsilandis et al. propose a MUSA (MUlticriteria Satisfaction Analysis) framework in which they measure satisfaction (on a five-point scale, where 1 = totally unsatisfied and 5 = very satisfied) on the following items:

  • Project results
    • Clarity of objectives
    • Contribution to overall goals
    • Vision
    • Exploitation of results
    • Meeting budget
  • Project management authority operations
    • Submission of proposals
    • Selection and approval process
    • Implementation support
    • MIS support
    • Timely payments
    • Funding ~ Scope
    • Funding ~ Budget
  • Project Office support
    • Management support
    • Admin/tech support
    • Accounting dept. support
    • MIS support
  • Project Team
    • Tech/admin competence
    • Subproject leader
    • Staff contribution
    • Outsourcing/consultants
    • Diffusion of results

The authors then run through a sample report, which contains the typical representations of satisfaction scores, but with three noteworthy ideas: (1) the satisfaction function, (2) a performance × importance matrix, and (3) a demanding × effectiveness matrix. The satisfaction function is simply the cumulative distribution function of the satisfaction scores.
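The paper describes this function in prose only, so here is a minimal Python sketch of my own (function name and sample data invented) of how such a distribution function could be computed from raw five-point answers:

    # Minimal sketch (not the authors' code): the satisfaction function as
    # the empirical cumulative distribution of five-point satisfaction scores.
    from collections import Counter

    def satisfaction_function(scores):
        """Cumulative share of answers at or below each score from 1 to 5."""
        counts = Counter(scores)
        n = len(scores)
        cumulative, result = 0, {}
        for score in range(1, 6):
            cumulative += counts.get(score, 0)
            result[score] = cumulative / n
        return result

    # A mostly satisfied sample rises late and thus stays below the "neutral"
    # diagonal running from 0% at score 1 to 100% at score 5.
    print(satisfaction_function([3, 4, 4, 5, 5, 5, 2, 4]))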
[I still do not understand why the line between 0% at score 1 and 100% at score 5 should represent neutrality. Such a line assumes a uniform distribution of scores, where I think a normal distribution is more often the case; the authors themselves acknowledge this when they establish beta-weights via regression analysis, for which normality is a prerequisite.]

Furthermore, Ipsilandis et al. establish relative beta-weights for each item and calculate the average satisfaction index accordingly (satisfaction is indexed from 0% to 100%). Cutting off at the centroid of each axis, they span a 2×2 matrix of importance (beta-weight) vs. performance (satisfaction index). The authors call these diagrams “action diagrams”; a sketch of the quadrant logic follows after the next remark.
[Centroid of the axis is just a cool way of referring to the mean.]
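A hypothetical sketch of that quadrant logic: the split at the mean of each axis follows the paper’s description, while the quadrant labels and the example items and figures are my own additions.

    # Hypothetical action-diagram classifier: items are split at the mean
    # ("centroid") of each axis into an importance vs. performance 2x2 matrix.
    # Quadrant labels and example figures are invented for illustration.
    def action_diagram(items):
        """items maps name -> (beta_weight, satisfaction_index in %)."""
        mean_beta = sum(b for b, _ in items.values()) / len(items)
        mean_sat = sum(s for _, s in items.values()) / len(items)
        quadrants = {}
        for name, (beta, sat) in items.items():
            if beta >= mean_beta and sat < mean_sat:
                quadrants[name] = "act here: important but weak"
            elif beta >= mean_beta:
                quadrants[name] = "leverage: important and strong"
            elif sat >= mean_sat:
                quadrants[name] = "possible overkill: strong but unimportant"
            else:
                quadrants[name] = "low priority"
        return quadrants

    print(action_diagram({
        "timely payments": (0.35, 40.0),
        "MIS support": (0.10, 85.0),
        "staff contribution": (0.30, 90.0),
        "implementation support": (0.25, 55.0),
    }))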

The third set of diagrams, the so-called “improvement diagrams”, are demanding vs. effectiveness matrices. The demanding dimension is once again defined by the beta-weights, the rationale being that the same improvement yields a larger satisfaction gain on items with a higher beta-weight. The effectiveness dimension is the weighted dissatisfaction index: simply put, beta-weight × (100% − satisfaction index). The reasoning is to identify the actions that promise a large marginal contribution to overall satisfaction for comparatively little effort.
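The formula is simple enough to show directly; only the beta-weight × (100% − satisfaction) definition is from the paper, the numbers are made up:

    # Weighted dissatisfaction ("effectiveness") index as defined above:
    # effectiveness = beta_weight * (100% - satisfaction_index).
    def weighted_dissatisfaction(beta, satisfaction_pct):
        return beta * (100.0 - satisfaction_pct)

    # An important, weak item leaves far more to gain than a minor, strong one:
    print(weighted_dissatisfaction(0.35, 40.0))  # 21.0
    print(weighted_dissatisfaction(0.10, 85.0))  # 1.5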
[I still don’t understand why this diagram is needed, since the same message is conveyed in the ‚action diagrams‘. Anyway, it is a different way of showing it; same same, but different.
What I previously tried to fiddle around with are log-transformations, e.g. the logit, to model satisfaction indices and their development in a non-linear fashion instead of just weighting and normalising them. Such a procedure would put more weight on very low and very high values, following the reasoning that fixing something completely broken is a big deal, whereas perfecting the almost perfect (think choosing the right lipstick for Scarlett Johansson) is not such a wise way to spend your time and money (fans of Ms. Johansson might disagree).]
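A toy sketch of that logit idea (entirely my own illustration, nothing from the paper): equal point-gains count for more at the extremes than in the middle.

    # Toy illustration of the logit transform: mapping a satisfaction index
    # p in (0, 1) through log(p / (1 - p)) stretches the extremes, so fixing
    # the very broken weighs more than polishing the nearly perfect.
    import math

    def logit(p, eps=1e-6):
        p = min(max(p, eps), 1.0 - eps)  # clamp away from 0 and 1
        return math.log(p / (1.0 - p))

    # The same five-point gain is worth more at the bottom than in the middle:
    print(logit(0.10) - logit(0.05))  # ~0.75 logits
    print(logit(0.55) - logit(0.50))  # ~0.20 logits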

Best Project Management and Systems Engineering Practices in the Preacquisition Phase for Federal Intelligence and Defense Agencies (Meier, 2008)

Tuesday, August 12th, 2008


Meier, Steven R.: Best Project Management and Systems Engineering Practices in the Preacquisition Phase for Federal Intelligence and Defense Agencies; in: Project Management Journal, Vol. 39 (2008), No. 1, pp. 59-71.

Scope creep! Uncontrolled growth in programmes, especially in public acquisitions, is nothing new. [I strongly suspect that we only look down on public projects because private companies are much better at hiding their failures.] Meier analyses the root causes of scope creep in intelligence and defense projects and proposes countermeasures.

The root causes for creeping scope are

  • overzealous advocacy
  • immature technology
  • lack of corporate technology road maps
  • requirements instability
  • ineffective acquisition strategies, i.e. no incentives to stick to the budget
  • unrealistic baselines and a high reliance on contractor baselines
  • inadequate systems engineering, e.g. no concept of operations, system requirements document, statement of work, request for proposal, or contract data requirements list
  • workforce issues, e.g. high staff turnover, no PMO

Meier’s remedies for this predicament are quite obvious. Have a devil’s advocate or a third-party review to get rid of the optimism bias. Wait until technology maturity is achieved, or factor in higher contingencies. Set investment priorities. Put incentives into the contracts. Estimate your own costs prior to the RfP. Follow systems engineering standards, e.g. INCOSE’s. Manage your workforce.

Effective Project Sponsorship – An Evaluation of the Role of the Executive Sponsor in Complex Infrastructure Projects by Senior Managers (Helm & Remington, 2005)

Monday, August 11th, 2008


Helm, Jane; Remington, Kaye: Effective Project Sponsorship – An Evaluation of the Role of the Executive Sponsor in Complex Infrastructure Projects by Senior Managers; in: Project Management Journal, Vol. 36 (2005), No. 3, pp. 51-61.

Helm & Remington used a Grounded Theory approach to explore the role of project sponsors through semi-structured in-depth interviews. They identified nine success factors:

  1. Seniority
  2. Political knowledge & savvy
  3. Connect project and organisation
  4. Battle for the project
  5. Motivate team
  6. Partner with project team
  7. Communication skills
  8. Compatibility with project team
  9. Provide objectivity and challenge project

Project management: cost, time and quality, two best guesses and a phenomenon, its time to accept other success criteria (Atkinson, R. 1999)

Monday, July 7th, 2008


Atkinson, Roger: Project management – cost, time and quality, two best guesses and a phenomenon, its time to accept other success criteria; in: International Journal of Project Management, Vol. 17 (1999), No. 6, pp. 337-342.

Once I spent some hours discussing with colleagues what the magic triangle might be. PMI says it is Time-Cost-Scope, with Quality being a product of these three. My colleague argued it should be Time-Cost-Quality, since quality is defined as meeting or exceeding the customer’s expectations, which includes the customer getting what they asked for, a.k.a. the scope.

Similarly, Atkinson argues that this triangle only asks whether the project is ‚doing it right‘, which automatically focuses on the delivery system and leaves huge gaps in the ‚getting it right‘ part unanswered. This leads, as many IT project examples show, to a nice but unusable/unwanted/unaccepted piece of software. In order to get it right by doing it right, Atkinson proposes the ‚Square Route‘ of success criteria: (1) the Time-Cost-Quality triangle, (2) the information system itself, (3) organisational benefits, and (4) stakeholder/community benefits.