Archive for the ‘Decision Making’ Category

Research Featured in Harvard Business Review

Thursday, July 26th, 2012

After two years of researching ICT projects, the on-going research has been picked up by the Harvard Business Review and features on the cover of their September 2011 issue.

“Why your IT projects may be riskier than you think?”

By now I have collected a database of nearly 1,500 IT projects. In short, we argue that the numbers in the hotly debated Standish Report are wrong, but their critics don’t get it quite right either. We found that while IT projects perform reasonably well on average, the risk distribution has very fat tails in which a lot of Black Swan events hide. 1 in 6 IT projects turned into a Black Swan – an event that can take down companies and cost executives their jobs.
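A quick Monte Carlo sketch can illustrate the fat-tail argument: a portfolio whose typical project overruns only modestly can still hide devastating outliers when a minority of projects draws from a heavy-tailed distribution. The distributions and parameters below are invented for illustration and are not the study’s actual data.

```python
import random

random.seed(42)

# Illustrative numbers only -- not the study's data. Most projects overrun
# modestly (normal distribution), but roughly 1 in 6 draws its overrun from
# a heavy-tailed Pareto distribution instead.
def simulate_overrun_pct():
    if random.random() < 1 / 6:
        return random.paretovariate(2.0) * 50   # fat-tailed draw, in % of budget
    return random.gauss(10, 15)                 # typical project: ~10% overrun

overruns = [simulate_overrun_pct() for _ in range(1500)]
mean = sum(overruns) / len(overruns)
extreme = sum(1 for o in overruns if o > 200)   # overruns beyond 200% of budget

print(f"mean overrun: {mean:.0f}%")
print(f"projects overrunning by more than 200%: {extreme}")
```

The mean stays unremarkable while a visible minority of projects blows past 200% – the signature of a fat-tailed risk distribution.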

Enjoy the read!

More background reading on the HBR article can be found in this working paper.

Integrating the change program with the parent organization (Lehtonen & Martinsuo, 2009)

Tuesday, April 28th, 2009


Lehtonen, Päivi; Martinsuo, Miia: Integrating the change program with the parent organization; in: International Journal of Project Management, Vol. 27 (2009), No. 2, pp. 154-165.


Lehtonen & Martinsuo analyse the boundary spanning activities of change programmes.  They find five different types of organisational integration – internal integration (1a) within the programme and (1b) within the organisation; external integration (2a) of the organisation and (2b) of the programme; and (3) integration between programme and parent organisation.

Furthermore, they identify mechanisms of integration on these various levels.  These mechanisms are:

  • Structure & control – steering groups, responsibility of line managers
  • Goal & content link – the programme is part of a larger strategic change initiative
  • People links – cross-functional core team, part-time team members who stay in local departments
  • Scheduling & planning links – planning, project management, budgeting, reporting
  • Isolation – abandoning the standard corporate steering group, splitting between HQ and branch roll-out


Among the most common are four types of boundary spanning activities – (1) information scout, (2) ambassador, (3) boundary shaping, and (4) isolation.  Firstly, information scouting is done via workshops, interviews, questionnaires, data requests &c.  Secondly, the project ambassador presents the programme in internal forums, focuses on quick wins and showcases them, publishes about the project in HR magazines &c.  Thirdly, boundary shaping is done through negotiating scope and resources and defining responsibilities.  Fourthly, isolation typically takes place through withholding information, establishing a separate work/team/programme culture, and planning internally – basically by gate keeping and blocking.

The influence of checklists and roles on software practitioner risk perception and decision-making (Keil et al., 2008)

Friday, April 24th, 2009


Keil, Mark; Li, Lei; Mathiassen, Lars;  Zheng, Guangzhi: The influence of checklists and roles on software practitioner risk perception and decision-making; in: Journal of Systems and Software, Vol. 81 (2008), No. 6, pp. 908-919. 

In this paper the authors present the results of 128 role plays conducted with software practitioners.  These role plays analysed the influence of checklists on risk perception and decision-making.  The authors also controlled for the role of the participant – whether he or she was an insider (project manager) or an outsider (consultant).  They found that the role had no effect on the risks identified.

Keil et al. created a risk checklist based on the software risk model first conceptualised by Wallace et al. This model distinguishes six different risk categories – (1) team, (2) organisational environment, (3) requirements, (4) planning and control, (5) user, and (6) complexity.
In their role plays the authors found that checklists have a significant influence on the number of risks identified. However, the number of risks does not influence decision-making.  Decision-making is influenced by whether or not the participants have identified certain key risks.  Risk checklists can therefore influence the salience of risks, i.e., whether they are perceived or not, but they do not influence decision-making.

The resource allocation syndrome (Engwall & Jerbant, 2003)

Wednesday, April 22nd, 2009


Engwall, Mats; Jerbant, Anna: The resource allocation syndrome – the prime challenge of multi-project management?; in: International Journal of Project Management, Vol. 21 (2003), No. 6, pp. 403-409.

Engwall & Jerbant analyse the nature of organisations whose operations are mostly carried out as simultaneous or successive projects. By studying a couple of qualitative cases, the authors try to answer why the resource allocation syndrome is the number one issue of multi-project management and which underlying mechanisms drive this phenomenon.

The resource allocation syndrome is at the heart of operational problems in multi-project management; it is called a syndrome because multi-project management is mainly obsessed with the front-end allocation of resources.  This shows in the main characteristics: projects have interdependencies and typically lack resources; management is concerned with priority setting and resource re-allocation; competition arises between projects; and management focuses on short-term problem solving.

The root causes of this syndrome can be found on both the demand and the supply side.  On the demand side, the two root causes identified are, firstly, the effect of failing projects on the schedule – the authors observed that project delay causes after-the-fact prioritisation and thus makes management re-active and rather unhelpful – and, secondly, over-commitment, which cripples multi-project management.

On the supply side, the problems are caused by management accounting systems – in this case the inability to properly record all resources and projects – and by opportunistic management behaviour, especially grabbing and booking good people before they are needed, just to have them on the project.

Understanding the Nature and Extent of IS Project Escalation (Keil & Mann, 1997)

Tuesday, January 13th, 2009


Keil, Mark; Mann, Joan: Understanding the Nature and Extent of IS Project Escalation – Results from a Survey of IS Audit and Control Professionals; in: Proceedings of The Thirtieth Annual Hawaii International Conference on System Sciences, 1997.

"Many runaway IS projects represent what can be described as continued commitment to a failing course of action, or "escalation" as it is called in management literature."  Keil & Mann argue that escalation of projects can be explained with four different factors – (1) project factors, (2) social factors, (3) psychological factors, and (4) organisational factors.

  1. Project Factors – cost & benefits, duration of the project
  2. Social Factors – rivalry between projects, need for external justification, social norms
  3. Organisational Factors – structural and political environment of the project
  4. Psychological Factors – manager’s previous experience, sunk-cost effects, self-justification

In 1995 Keil & Mann conducted a survey among IS auditors. Among their most interesting findings are:

  • 38.3% of all SW development projects show some level of escalation (original question: ‘In your judgement, how many projects are escalated?’).
  • When asked ‘From your last 5 projects, how many escalated?’, 19% of the auditors said none, 28% said 1, 20% said 2, 16% said 3, 10% said 4, and 8% said all 5.
  • Escalation of schedule: 38% of escalated projects ran 1–12 months over, 36% 13–24 months; the maximum in the sample was 21 years.
  • The average budget overrun of projects was 20%; when projects escalate, the average budget overrun is 158%.
  • 82% of all escalated projects run over their budget, whilst only 48% of all non-escalated projects do.
  • The success rate of escalated projects is devastating – of all escalated projects, 23% were completed successfully, 18% were abandoned, 5% were completed but never implemented, 23% were partially completed, 18% were unsuccessfully completed, and 8% were completed and then withdrawn.

Furthermore, Keil & Mann test for the reasons behind escalation behaviour, based on their four-factor concept.  They found the main reasons for project escalation were:

  • Underestimation of time to completion
  • Lack of monitoring
  • Underestimation of resources
  • Underestimation of scope
  • Lack of control
  • Changing specifications
  • Inadequate planning

Institutional reform (Stone, 2008)

Friday, January 9th, 2009


Stone, Alastair: Institutional reform – A decision-making process view; in: Research in Transportation Economics, Vol. 22 (2008), No. 1, pp. 164-178.

Stone analyses the fundamentals of the decision-making process (DMP) and proposes a model which he then applies to the example of urban land passenger transport.   This summary focuses on the analysis of the DMP.  Firstly, Stone draws a model which combines DMP costs with product scale and assumes a positive correlation between the two – the larger the product, the higher the DMP costs. He outlines the entities involved, the analytical mechanisms, and the analytical techniques; e.g., for large-scale products the entity is usually a large collective, an expert analyst is the mechanism of choice for analysis, and a product-specific discounted cash flow is the technique for the analysis.  Secondly, Stone shows that over time the number of options in the DMP decreases, whilst the average value of each option increases.

More interestingly, he looks at different decisions and identifies four levels of societal decision processes:

  1. Embeddedness = informal institutions, customs and traditions, norms and religion (happens every 100 to 1,000 years; is explained by social theory; and its purpose is non-calculative and spontaneous)
  2. Institutional environment = formal rules of the game (happens every 10 to 100 years; is explained by economics of property rights; and the purpose of it is to get the institutional environment right)
  3. Governance = play of the game (happens every 1 to 10 years; is explained by transaction cost economics; and the purpose of it is to get the governance structures right)
  4. Resource allocation (happens continuously; is explained by neo-classical economics; and has the purpose of getting the marginal conditions right)

Higher levels impose constraints on lower-level decisions, whilst the lower levels return feedback to the top. Finally, he outlines the DMP itself:
input variables (utilities, motivation, rights & power, and information) go into the choice mechanism (analysis & judgement) and result in a resource demand.

Uncertainty Sensitivity Planning (Davis, 2003)

Thursday, January 8th, 2009


Davis, Paul K.: Uncertainty Sensitivity Planning; in: Johnson, Stuart; Libicki, Martin; Treverton, Gregory F. (Eds.): New Challenges – New Tools for Defense Decision Making, 2003, pp. 131-155; ISBN 0-8330-3289-5.

Who is better at planning for very complex environments than the military?  On projects we set up war rooms, we draw mind maps which look like tactical attack plans, and sometimes we use very militaristic language.  So what is more obvious than a short Internet search on planning and the military?

Davis describes a new planning method – Uncertainty Sensitivity Planning.  Traditional planning assumes a no-surprises future environment – much like the planning we usually do.  The next step is to identify shocks and branches, thus creating four different strategies:

  1. Core Strategy = Develop a strategy for no-surprises future
  2. Contingent Sub-Strategies = Develop contingent sub-strategies for all branches of the project
  3. Hedge Strategy = Develop capabilities to help with shocks
  4. Environmental Shaping Strategy = Develop strategy to improve odds of desirable futures

Uncertainty Sensitivity Planning combines capabilities-based planning with an environmental shaping strategy and actions.
Capabilities-based planning plans along modular capabilities, i.e., building blocks which are usable in many different ways.  On top of that, an assembly capability to combine the building blocks needs to be planned for.   The goal of planning is to create flexibility, adaptiveness, and robustness – not optimisation.  Thus multiple measurements of effectiveness exist.
During planning there needs to be an explicit role for judgement and qualitative assessment.  The economics of choice are explicitly accounted for.
Lastly, planning requirements are reflected in high-level choices, which are based on capabilities-based analysis.

Application of Multicriteria Decision Analysis in Environmental Decision Making (Kiker et al., 2005)

Thursday, January 8th, 2009


Kiker, Gregory A.; Bridges, Todd S.; Varghese, Arun; Seager, Thomas P.; Linkov, Igor: Application of Multicriteria Decision Analysis in Environmental Decision Making; in: Integrated Environmental Assessment and Management, Vol. 1 (2005), No. 2, pp. 95-108.

Kiker et al. review Multi-Criteria Decision Analysis (MCDA).  The authors define MCDA as decisions, typically group decisions, involving multiple criteria with different monetary and non-monetary values.  The MCDA process follows two steps – (1) construct a decision matrix, and (2) synthesise by ranking the alternatives by different means.

What are solutions/methods to apply MCDA in practice?

  • MAUT/MAVT
    multi-attribute utility theory & multi-attribute value theory
    = each score is given a utility, then utilities are weighted and summed up to choose an alternative
  • AHP
    analytical hierarchy process
    = pairwise comparison of all criteria to determine their importance
  • Outranking
    = pairwise comparison of all scenarios
  • Fuzzy
  • Mental Modelling
  • Linear Programming
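As a sketch of the AHP idea from the list above, the snippet below derives criteria weights from a hypothetical pairwise comparison matrix (the criteria names and Saaty-scale judgements are invented); the row geometric mean is a common approximation of the principal eigenvector:

```python
import math

# Hypothetical pairwise comparison matrix on a Saaty-style scale:
# entry [i][j] states how much more important criterion i is than j.
criteria = ["cost", "risk", "benefit"]
comparisons = [
    [1,     3,     5],
    [1 / 3, 1,     2],
    [1 / 5, 1 / 2, 1],
]

# Approximate the principal eigenvector by the row geometric mean,
# then normalise so the weights sum to 1.
geo_means = [math.prod(row) ** (1 / len(row)) for row in comparisons]
total = sum(geo_means)
weights = [g / total for g in geo_means]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

With these judgements ‘cost’ dominates; a full AHP would also check the consistency ratio of the matrix before trusting the weights.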

A Principal Exposition of Jean-Louis Le Moigne’s Systemic Theory (Eriksson, 1997)

Tuesday, January 6th, 2009


Eriksson, Darek: A Principal Exposition of Jean-Louis Le Moigne’s Systemic Theory; in: “Cybernetics and Human Knowing”, Vol. 4 (1997), No. 2-3.

When thinking about complexity and systems, one sooner or later comes across Le Moigne.  The point of departure is the dilemma of simplification vs. intelligence.  Systems therefore have to be distinguished as either complicated – that is, reducible – or complex – showing surprising behaviour.

Complicated vs. Complex
This distinction follows the same lines as closed vs. open systems, and mono- vs. multi-criteria optimisation.  Closed/mono-criteria/complicated systems can be optimised by using algorithms, simplifying the system, and evaluating the solution by its efficacy.  On the other hand, open/multi-criteria/complex systems can only be satisficed by using heuristics, breaking the system down into modules, and evaluating the solution by its effectiveness.

In the case of complex systems, simplification only increases the complexity of the problem and will not yield a solution.  Instead of simplification, intelligence is needed to understand and explain the system; in other words, it needs to be modelled.  As Einstein already put it – defining the problem is solving it.
Secondly, to model a complex system is to model a process of actions and outcomes. The process definition consists of three transfer functions – (1) temporal, (2) morphologic, and (3) spatial transfer.  In order to make the step from modelling complicated systems to modelling complex systems, some paradigms need to change:

  • Subject –> Process
  • Elements –> Actors
  • Set –> Systems
  • Analysis –> Intelligence
  • Separation –> Conjunction
  • Structure –> Organisation
  • Optimisation –> Suitability
  • Control –> Intelligence
  • Efficacy –> Effectiveness
  • Application –> Projection
  • Evidence –> Relevance
  • Causal explanation –> Understanding

The model itself follows a black-box approach.  For each black box, its function, its ends (objectives), its environment, and its transformations need to be modelled.  Furthermore, the modelling itself understands and explains a system on nine different levels.  A phenomenon is

  1. Identifiable
  2. Active
  3. Regulated
  4. Itself
  5. Behaviour
  6. Stores
  7. Coordinates
  8. Imagines
  9. Finalised

Framed! (Singh, 2006)

Tuesday, January 6th, 2009


Singh, Hari: Framed!; HRD Press, 2006, ISBN: 0874258731

I stumbled upon this book somewhere in the tubes.  I admit that I was drawn to the combination of a fictional narrative with some scientific subtext.  Unfortunately for this book, I put the bar to pass at Tom DeMarco’s Deadline.  On the one hand, Singh delivers what seems to be his own lecture on decision-making as the alter ego of Professor Armstrong; on the other hand, the fictional two-level story of Larry, the first-person narrator, and the crime mystery around Laura’s suicide-turned-murder does not really deliver.  Leaving aside the superficial references to Chicago, which I found rather off-putting, I think a bit more research and getting off the beaten track could have done much good here.  Lastly, I don’t much fancy the narrative-framework-driven style so commonly found in American self-help books – and so brilliantly mocked in Little Miss Sunshine.

Anyhow, let’s focus on the content.  Singh calls his structure for better decision-making FACTNET:

  • Framing/ conceptualising the issue creatively
  • Anchoring/ relying on reference points
  • Cause & effect
  • Tastes for risk preference & role of chance
  • Negotiation & importance of trust
  • Evaluating decisions by a process
  • Tracking relevant feedback

Frame – identify the problem clearly, be candid about your ignorance, question presumptions, consider a wide set of alternatives
Anchoring – anchor your evaluations with external reference points and avoid groupthink
Cause & effect – recognise patterns and cause-effect relationships, remember regression to the mean, be aware of biases such as the halo effect
Tastes for risk & role of chance – be aware of compensation behaviour, satisficing behaviour, cognitive dissonance, signalling of risks, the gambler’s fallacy, and the availability bias – all deceptions which negatively impact decision-making
Negotiation & trust – just two words: prisoner’s dilemma
Evaluating decisions by a process – revisit decisions, conduct sensitivity analyses
Tracking relevant feedback – continuously get feedback & feed-forward, be aware of overlooked feedback and the treatment of effects, split up good news and bundle bad news, think about sunk costs, man & machine, and engage in self-examination

Three methods for decision-making are presented in the book – (1) the balance sheet method with applied weighting, (2) WARS = weighting attributes and scores, and (3) scenario strategies.
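The WARS method is essentially a weighted decision matrix; here is a minimal sketch, with attribute names, weights, and scores invented for illustration:

```python
# Invented example: score two options on weighted attributes and rank them.
weights = {"cost": 0.5, "speed": 0.3, "quality": 0.2}

alternatives = {
    "option A": {"cost": 7, "speed": 9, "quality": 6},
    "option B": {"cost": 9, "speed": 5, "quality": 8},
}

def wars_score(scores):
    # Weighted sum of the attribute scores.
    return sum(weights[attr] * scores[attr] for attr in weights)

for name, scores in sorted(alternatives.items(),
                           key=lambda kv: wars_score(kv[1]), reverse=True):
    print(f"{name}: {wars_score(scores):.1f}")
```

Re-running the ranking after nudging a weight is the simplest form of the sensitivity analysis Singh recommends.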

Lastly, Singh reminded me again of the old motto “Non sequitur!” – making me aware of all the logical fallacies that occur when something sounds reasonable but ‘does not really follow’.

Decision Making Within Distributed Project Teams (Bourgault et al., 2008)

Monday, November 3rd, 2008


Bourgault, Mario; Drouin, Nathalie; Hamel, Émilie: Decision Making Within Distributed Project Teams – An Exploration of Formalization and Autonomy as Determinants of Success; in: Project Management Journal, Vol. 39 (2008), Supplement, pp. S97–S110.
DOI: 10.1002/pmj.20063

Bourgault et al. analyse group decision making in virtual teams. Their article is based on the principles of limited rationality, i.e. deciding is choosing from different alternatives, and responsible choice, i.e. deciding is anticipating outcomes of the decision.

Existing literature discusses the effects of virtualising teams controversially. Some authors argue that virtual teams lack social pressure and thus show a smaller likelihood of escalation-of-commitment behaviour, whilst making more objective and faster decisions. Other authors find no difference in working style between virtual and non-virtual teams. Generally, the literature explains that decision errors are mostly attributable to breakdowns in rationality, which are caused by power and group dynamics. Social pressure in groups also hampers efficiency. In any team with distributed knowledge, the leader must coordinate and channel the information flow.

Bourgault et al. conceptualise that formalisation and autonomy impact the quality of decision-making, which then influences teamwork effectiveness. All this is moderated by the geographic dispersion of the team.
They argue that formalisation, which structures and controls the decision-making activities, helps distributed teams to share information. Autonomy is a source of conflict, for example with higher management, due to a lack of understanding and trust; ultimately it weakens a project’s decision-making because it diverts horizontal information flow within the team into vertical information flow between project and management.
Quality of the decision-making process – the authors argue that groups have more information resources and can therefore make better decisions, but this comes at an increased cost of decision-making. Geographically distributed teams lack signals and have difficulties sharing information. Thus high-quality teamwork benefits from more dispersed knowledge, but low-quality teamwork suffers from a lack of hands-on leadership.
Teamwork effectiveness – this construct has mostly been measured using satisfaction measurements and student samples. Other measures are the degree of task completion, goal achievement, and self-efficacy (intent to stay on the team, ability to cope, perceived individual performance, perceived team performance, satisfaction with the team). Bourgault et al. measure teamwork effectiveness by asking for the perceived performance on task completion, goal achievement, information sharing, conflict resolution, problem solving, and creating a preferable and sustainable environment.

The authors’ quantitative analysis shows that in moderately dispersed teams all direct and indirect effects can be substantiated, with the exception of autonomy influencing the quality of decision-making. Similarly, in highly dispersed teams all direct and indirect effects, except the direct influence of formalisation on teamwork effectiveness, could be proven.

Bourgault et al. conclude with three recommendations for practice – (1) distribution of a team contributes to high-quality decisions, although it seems to come at a high cost; (2) autonomous teams achieve better decisions – “despite the fear of an out-of-sight-out-of-control syndrome”; (3) formalisation adds value to teamwork, especially the more distributed the team is.

Tailored task forces: Temporary organizations and modularity (Waard & Kramer, 2008)

Monday, October 20th, 2008


Waard, Erik J. de; Kramer, Eric-Hans: Tailored task forces – Temporary organizations and modularity; in: International Journal of Project Management, Vol. 26 (2008), No. 5, pp. 537-546.

As a colleague once put it: Complex projects should be organised like terrorist organisations – Autonomous cells of highly motivated individuals.

Waard & Kramer do not analyse projects but their fast-paced and short-lived cousin – the task force. The task force is THE blueprint for a temporary organisation. The authors found that the more modularised the parent company is, the easier it is to set up a task force/temporary organisation. Waard & Kramer also found that temporary organisations are more stable if set up by modular parent companies. They explain this with copying readily available organisational design principles and using well-exercised behaviours to manage these units.

The more interesting second part of the article describes how a company can best set-up task forces. Waard & Kramer draw their analogy from Modular Design.

„Building a complex system from smaller subsystems that are independently designed yet function together“

The core of modular design is to establish visible design rules and hidden design parameters. The authors describe that rules need to be in place for (1) architecture, (2) interfaces, and (3) standards. The remaining design decisions are left in the hands of the task force, which is run like a black box.
In this case, architecture defines which modules are part of the system and what each module’s functionality is. The interface definition lays out how these modules interact and communicate. Lastly, the standards define how modules are tested and how their performance is measured.
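In code, the split between visible design rules and hidden design parameters maps nicely onto interfaces: the parent fixes the architecture, the interface, and the test standard, while each task force keeps its implementation private. A minimal sketch – all names are invented, not from the paper:

```python
from abc import ABC, abstractmethod

# Visible design rule: the interface every task-force module must expose.
class StatusReporting(ABC):
    @abstractmethod
    def status(self) -> str:
        """Report the module's current state to the parent organisation."""

# Hidden design parameters: how the task force does its work stays internal.
class FieldTaskForce(StatusReporting):
    def status(self) -> str:
        return "on track"

# Visible standard: how every module is tested, regardless of its internals.
def meets_standard(module: StatusReporting) -> bool:
    return isinstance(module.status(), str)

print(meets_standard(FieldTaskForce()))  # True
```

The parent only ever touches the interface and the standard; the task force can rewrite its internals at will without breaking the system.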

An experimental investigation of factors influencing perceived control over a failing IT project (Jani, 2008)

Monday, October 20th, 2008


Jani, Arpan: An experimental investigation of factors influencing perceived control over a failing IT project; in: International Journal of Project Management, Vol. 26 (2008), No. 7, pp. 726-732.

Jani wants to analyse why failing projects are not terminated – a spiralling development also called escalation of commitment (I posted about a case discussion of the escalation of commitment on the TAURUS project).  Jani performed a computer-simulated experiment to show the antecedents of a continuation decision.

He roots the effect of escalating commitment in self-justification theory, prospect theory, agency theory, and also in sunk-cost and project-completion effects.

Self-justification motivates behaviour that justifies one’s attitudes, actions, beliefs, and emotions. It is an effect of cognitive dissonance and an effective cognitive strategy to reduce that dissonance. An example of this behaviour is continuing a bad course of action because stopping it would call the previous decision into question (= escalation of commitment).

Another example is bribery. People bribed with a large amount of money tend not to change their attitudes – which were unfavourable to begin with, otherwise there would have been no reason to bribe them in the first place. But Festinger & Carlsmith reported that bribery with a very small amount of money made people wonder why they accepted the bribe although it was so small, thus thinking that there must be something to it and changing their attitude altogether. ‘Since I did it, and only got 1 dollar’ creates a very strong dissonance. Here is a nice summary of their classic experiments, and here is one of their original articles.

Jani argues that all these theoretical effects fall into two factors – (1) self-serving bias and (2) past experience. These influence the judgement on his two antecedents – (1) project risk factors (endogenous and exogenous) and (2) task-specific self-efficacy. The latter is measured as a two-step factor, high vs. low, and describes how you perceive your capability to influence events that impact you (there is a great discussion of this topic by Bandura).

The two factors of project risk and task-specific self-efficacy then influence the perceived control over the project, which in turn influences the continuation decision. Jani is able to show that task-specific self-efficacy moderates perceived project control. In fact, he manipulated the project risks to simulate a failing project; at no time did participants have control over the outcome of their decisions. Still, participants with higher self-efficacy judged their perceived control significantly higher than participants with lower self-efficacy. This effect exists for endogenous and exogenous risk factors alike.

The bottom line of this experiment is quite puzzling: a good project manager with a vast track record of completing past projects successfully tends to underestimate the risks impacting the project. Jani recommends that even with great past experience in delivering projects, third parties should always review project risks. He also cautions against over-relying on this advice, since his experiment did not prove that joint evaluation corrects this bias effectively.

Lee, Margaret E.: E-ethical leadership for virtual project teams; in: International Journal of Project Management, in press (2008).

I quickly want to touch on this article, since the only interesting idea which struck me was that Lee draws a line from Kant via Utilitarianism to the notion of duty. She then concludes that it is our Kantian, utilitarian duty to involve stakeholders.

Project management approaches for dynamic environments (Collyer, 2009)

Thursday, October 9th, 2008

Collyer, Simon: Project management approaches for dynamic environments; in: International Journal of Project Management, Vol. 27 (2009), No. 4, pp. 355-364.

There it is again: complexity, this time under the name of dynamic project environments. I admit that link is a bit of a stretch. Complexity has been described as situations where inputs generate surprising outputs. Collyer, on the other hand, focuses on special project management strategies to succeed in changing environments. The author’s example is the IT project, which inherently bears a very special dynamic.

He discusses eight different approaches to cope with dynamics:

(1) Environment manipulation, which is the attempt to transform a dynamic environment into a static environment. Examples commonly employed are design freezes, extending a system’s life time, and leapfrogging or delaying new technology deployment.

(2) Planning for dynamic environments. Collyer draws a framework in which he classifies projects on two dimensions: firstly, whether their methods are well defined or not, and secondly, whether their goals are well defined or not. For example, he classifies the system development project as ill-defined on both dimensions. This is a point you could argue about, because some people claim that IT projects usually have well-defined methodologies but lack clear goals. Collyer suggests scaling down planning: plan milestones according to project lifecycle stages, and detail when you get there. He recommends spending more time on RACI matrices than on detailed plans.

(3) Scope control, which is quite the obvious thing to try to achieve – Collyer recommends always cutting the project stages along the scope and making the smallest possible scope the first release.

(4) Controlled experimentation. The author suggests that experimentation supports sense-making in a dynamic environment. Typical examples of experimentation are prototyping (Collyer recommends always developing more than one prototype), feasibility studies, and proofs of concept.

(5) Lifecycle strategies. Although bearing similarities to the scope control approach, this strategy deals with applying RUP and agile development methods to accelerate the adaptability of the project in changing environments.

(6) Management control. As discussed earlier in this post, every project uses a mix of different control techniques. Collyer suggests deviating from the classical project management approach of controlling behaviour by supervision, in favour of using more input control – for example training, to ensure only the best resources are selected. Besides input control, Collyer recommends focusing on output control as well, making output measurable and rewarding performance. Collyer also discusses a second control framework, which distinguishes control by the underlying management principle: diagnostic control (= formal feedback), control of beliefs (= mission, values), control of interactions (= having strategic, data-based discussions), and boundary control (= defining codes of conduct).

Lastly, the author discusses two more approaches to succeeding in dynamic environments: (7) categorisation and adaptation of standards and (8) leadership style.

Multicriteria cash-flow modeling and project value-multiples for two-stage project valuation (Jiménez & Pascual, 2008)

Dienstag, Oktober 7th, 2008


Jiménez, Luis González; Pascual, Luis Blanco: Multicriteria cash-flow modeling and project value-multiples for two-stage project valuation; in: International Journal of Project Management, Vol. 26 (2008), No. 2, pp. 185-194.

I am not an expert in financial engineering, though I have built my fair share of business cases and models for all sorts of projects and endeavours. I always thought of myself as not too bad at estimating and modelling impacts and costs, but I never had a deep knowledge of valuation tools and techniques. A colleague once claimed that every business case has to work on paper with a pocket calculator in your hands – otherwise it is way too complicated. Anyhow, I do understand the importance of a proper NPV calculation; even if you do fancy shmancy real options valuation as in this article, the NPV is one of the key inputs.

Jiménez & Pascual identify three common approaches to project valuation: NPV, real options, and payback period calculations. Their article focuses on the NPV calculation. They argue that an NPV calculation consists of multiple cash flow components, and each of these has different underlying assumptions as to its risk, value, and return.

The authors start with the general formula for an NPV calculation:
NPV = V0 = −I0 + ∑i Qi · ∏k=1..i e^(−rk) = −I0 + ∑i Qi · e^(−∑k=1..i rk)
This formula also yields the internal rate of return (IRR), found by solving for the rate at which V0 = 0, and the profitability index (PI), defined as PI = V0/I0. Furthermore, Jiménez & Pascual outline two different approaches to modelling the expected net cash flow Qi: either as a sum of cash flow components, Qi = ∑j qj,i, or via period growth rates, gj,k = ln(qj,k/qj,k−1).
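The discounting can be sketched in a few lines of Python (a minimal illustration of the formula above; function and variable names are mine):

```python
import math

# Sketch of the NPV formula: V0 = -I0 + sum_i Qi * exp(-sum_{k<=i} rk),
# with period-specific continuous discount rates r_k.
def npv(i0, cash_flows, rates):
    """i0: initial investment; cash_flows: Q_1..Q_n; rates: r_1..r_n."""
    v0 = -i0
    cum_rate = 0.0
    for q, r in zip(cash_flows, rates):
        cum_rate += r              # accumulate the discount rates up to period i
        v0 += q * math.exp(-cum_rate)
    return v0

print(round(npv(1000, [400, 400, 400], [0.05, 0.05, 0.05]), 2))  # 86.71
```

The profitability index then follows as PI = V0/I0, per the article’s definition.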

The next question is how to model future values of the cash flow without re-adjusting your assumptions for each and every period. The authors suggest four different methods [the article features a full-length explanation and a numerical example for each of these]:

  • Cash flow = previous cash flow + independent variable
  • Cash flow = previous cash flow + function of the cash flow
  • Cash flow = function of a stock magnitude
  • Cash flow = change in a stock magnitude
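The four projection rules can be sketched as simple recurrences (function names and numbers are my own illustration, not the authors’):

```python
# Sketch of the four projection rules; names and values are illustrative only.
def add_constant(cf, delta):                   # cash flow + independent variable
    return cf + delta

def add_function_of_cf(cf, growth):            # cash flow + function of the cash flow
    return cf + growth * cf

def from_stock(stock, yield_rate):             # function of a stock magnitude
    return yield_rate * stock

def from_stock_change(stock_now, stock_prev):  # change in a stock magnitude
    return stock_now - stock_prev

# Project three periods with the second rule at 10% growth per period:
cf, flows = 100.0, []
for _ in range(3):
    cf = add_function_of_cf(cf, 0.10)
    flows.append(round(cf, 2))
print(flows)  # [110.0, 121.0, 133.1]
```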

Finally, the authors add three different scenarios under which the model is tested, and they also show the managerial implications of the outcome of each scenario:

  • Ratios, such as operating costs and expenses (OPCE) to turnover (T/O), labour costs to T/O, depreciation to T/O, net fixed assets to T/O, working capital (W/C) to T/O
  • Valuation multiples, such as sales or EBITDA multiples
  • Financing structure, such as short-term and long-term debt, after-tax interest

Resource allocation under uncertainty in a multi-project matrix environment: Is organizational conflict inevitable? (Laslo & Goldberg, in press)

Mittwoch, September 24th, 2008

Resource allocation under uncertainty in a multi-project matrix environment: Is organizational conflict inevitable?

Laslo, Zohar; Goldberg, Albert I.: Resource allocation under uncertainty in a multi-project matrix environment: Is organizational conflict inevitable?; in: International Journal of Project Management, Article in Press (2008).

Laslo & Goldberg investigate the conflicts associated with resource allocation. They find that the matrix structure has become the organisational form of choice when it comes to efficiently managing a multi-project firm. Unfortunately, the matrix structure has been criticised (see the PMBOK Guide) for inherently sparking conflicts between line and project managers over resources. In a multi-project firm, the authors conclude, this drawback is worsened further by the resource allocation fights between projects.

In order to model the problem, Laslo & Goldberg analyse the three most common resource allocation policies: (1) Profit & Cost Centres (PC), (2) Comprehensive Allocation Planning (CA), and (3) Directed Priorities (DP). Profit & cost centres are synonymous with maximum project autonomy. In comprehensive allocation planning, each individual project is only part of an organisation-wide optimisation of resource allocation. Thirdly, under the policy of directed priorities, projects whose objectives are closer to the organisation’s strategic goals receive additional resources/funding.

As often claimed, three types of conflict are inherent in the matrix structure when managers fight over the right policy:
(1) DP vs. PC – internal projects favour DP, whilst sponsored projects and functional units favour PC
(2) PC vs. CA – functional units favour PC, whilst sponsored and internal projects favour CA
(3) DP vs. CA – internal projects and functional units favour DP, whilst sponsored projects favour CA

Using Forrester’s theory of system dynamics, the authors model these conflicts as a resource flow. The model contains feedback loops to capture the information flow into planning, and several what-if simulations are run.
In order to simulate with the model, three organisational objectives are defined:
a) Minimise delay losses (conflict DP vs. CA)
b) Minimise direct labour costs (favouring CA policy)
c) Minimise functional unit total costs (favouring PC)

As the authors put it: „Findings from the simulation suggest that not all conflict is realistic. For some project objectives, higher organizational performance can be achieved when managers learn that they have no basic differences in real interests and they can agree upon a resource allocation policy.“ That said, the results also show that alliances between functional managers, internal venture project managers, and sponsored project managers are unstable. Once resource scarcity comes into the equation, e.g., when resources cannot be obtained externally, conflicts arise for sure.
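A toy allocation model (entirely my own construction, far simpler than the authors’ system-dynamics simulation) illustrates why the policies pull apart once a resource is scarce:

```python
# Toy model: two projects compete for a scarce resource pool; unmet demand
# causes delay losses at project-specific penalty rates. All numbers invented.
def delay_loss(demands, penalties, allocations):
    return sum(p * max(d - a, 0) for d, p, a in zip(demands, penalties, allocations))

demands, penalties, pool = [60, 60], [5, 1], 80  # project 1 delays are 5x costlier

# PC-style policy: each autonomous project simply gets an equal share.
pc = [pool // 2, pool // 2]

# CA-style policy: a central planner serves the costliest project first.
order = sorted(range(2), key=lambda i: -penalties[i])
ca, remaining = [0, 0], pool
for i in order:
    ca[i] = min(demands[i], remaining)
    remaining -= ca[i]

print(delay_loss(demands, penalties, pc))  # 120  (5*20 + 1*20)
print(delay_loss(demands, penalties, ca))  # 40   (5*0  + 1*40)
```

With slack resources both policies coincide; the conflict only materialises under scarcity, which matches the simulation finding quoted above.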

Enterprise information system project selection with regard to BOCR (Liang & Li, in press)

Dienstag, September 23rd, 2008


Liang, Chao; Li, Qing: Enterprise information system project selection with regard to BOCR; in: International Journal of Project Management, Article in Press, Corrected Proof

Lots of consultants earn their money with selecting the right IT system. I have seen the most bizarre total-cost-of-ownership (TCO) calculations used to get there and witnessed the political madness that comes with buying-centre decisions in never-ending rounds of assessment workshops.

Liang & Li claim that „a comprehensive and systematic assessment is necessary for executives to select the most suitable project from many alternatives.“ Furthermore, they claim that „This paper first proposes a decision method for project selection.“ To that end, the authors apply an analytic hierarchy/network process (AHP/ANP) to this decision-making predicament. They suggest breaking down the decision using their multi-criteria BOCR framework, with the dimensions of benefits (B), opportunities (O), costs (C), and risks (R).

In the case of a manufacturing execution system (MES), described in the article, the benefits consist of time gained, costs saved, service improvements, capacity increases, and quality improvements. The opportunities are an increased market share, a fast ROI and payback period, and the ability for agile manufacturing. The risks associated with this MES are budget overruns, time delays, and several technological risks, e.g., reliability, flexibility, ease of use. Lastly, Liang & Li break down the costs into software, implementation, training, maintenance, upgrades, and costs for existing systems.
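A standard way to aggregate BOCR priorities in the ANP literature is the multiplicative ratio BO/CR, or a weighted additive form; whether Liang & Li use exactly these formulas is not stated here, and the scores below are invented:

```python
# BOCR aggregation sketch; all weights and scores are invented illustration values.
def bocr_multiplicative(b, o, c, r):
    # Multiplicative form: benefits * opportunities over costs * risks.
    return (b * o) / (c * r)

def bocr_additive(b, o, c, r, wb=0.25, wo=0.25, wc=0.25, wr=0.25):
    # Weighted additive-negative form: costs and risks count against the alternative.
    return wb * b + wo * o - wc * c - wr * r

# Two hypothetical enterprise-system alternatives with normalised priority scores:
alternatives = {
    "A": dict(b=0.6, o=0.5, c=0.4, r=0.3),
    "B": dict(b=0.5, o=0.4, c=0.2, r=0.2),
}
for name, s in alternatives.items():
    print(name, round(bocr_multiplicative(**s), 2))  # A 2.5, B 5.0
```

Here the cheaper, lower-risk alternative B wins despite lower benefits, which is exactly the kind of trade-off the BOCR breakdown is meant to surface.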

Information Systems implementation failure – Insights from PRISM (Pan et al., 2008)

Donnerstag, September 18th, 2008

 Information Systems implementation failure – Insights from PRISM

Pan, Gary; Hackney, Ray; Pan, Shan L.: Information Systems implementation failure – Insights from PRISM; in: International Journal of Information Management, Vol. 28 (2008), pp. 259-269.

The authors apply an „antecedents — critical events — outcomes“-framework to analyse the implementation process. Pan et al. postulate a recursive process interaction model which repeats throughout the critical events of the project. The process model flows circularly: Project Organisation –(innovates)→ IS –(serves)→ Supporters –(support)→ Project Organisation, and so on.

Furthermore, the authors identify critical events (positive and negative) to analyse the system failure of the whole project. Generalising from the chain of negative events, Pan et al. find that recursive interactions lead to a drift of the project. Subsequently, that drift leads to a sequence of decision mistakes which accelerate into project failure.

Strategic management accounting and sense-making in a multinational company (Tillmann & Goddard, 2008)

Mittwoch, September 17th, 2008

 Strategic management accounting and sense-making in a multinational company

Tillmann, Katja; Goddard, Andrew: Strategic management accounting and sense-making in a multinational company; in: Management Accounting Research, Vol. 19 (2008), No. 1, pp. 80-102.

Tillmann & Goddard analyse how strategic management accounting is used and perceived in a large German multinational corporation. This is interesting insofar as they explore how managers work and get decisions made. The authors follow an open systems paradigm, which conceptualises the organisation as a set of ambiguous processes, structures, and environments within which the manager operates. Furthermore, Tillmann & Goddard identify four major managerial activities: (1) scanning, (2) sense-making, (3) sense-giving, and (4) decision-making.

Sense-making is of key interest to the authors. Sense-making can be understood as constructing and re-constructing meaning, or simply as understanding the situation and getting the picture. Understanding is inherently linked to interpretations of real-world events. In order to make sense of events, simplification strategies are employed, such as translating, modelling, synthesising, and conceptualising/frameworking.

Moreover, the authors propose a three-step process model of sense-making:

  • Input – internal context, multiplicity of aspects, and external context which are individually internalised as information sets and ‚a feel for the game‘
  • Sense-Making – structuring & harmonising, compromising & balancing, and bridging & contextualising
  • Output – sense communication and decision-making

Flexibility at Different Stages in the Life Cycle of Projects: An Empirical Illustration of the “Freedom to Maneuver“ (Olsson & Magnussen, 2007)

Dienstag, August 12th, 2008

Flexibility and Funding in Projects

Olsson, Nils O. E.; Magnussen, Ole M.: Flexibility at Different Stages in the Life Cycle of Projects: An Empirical Illustration of the “Freedom to Maneuver“; in: Project Management Journal, Vol. 38 (2007), No. 4, pp. 25-32.

The conceptual model that uncertainty and degrees of freedom decrease over the life cycle of a project whilst actual costs increase is nothing new. What is new is the empirical proof. Olsson & Magnussen are the first to measure these degrees of freedom: they use the governmentally required reduction lists as a measure of the degrees of freedom in public projects.

Moreover, they recommend a funding system which gives the project manager control over the basic budget and the expected additional costs (e.g. the value of the risk register). On top of this funding come the reserves or contingencies, which typically amount to about 8% of the total budget and are managed by the agencies. Then comes the reduction list, which usually amounts to 5.9% of the budget at the beginning of the project and shrinks to 0.8% by half time. The authors argue that such a funding system has an 85% probability of being kept.
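As a back-of-the-envelope illustration of this funding structure (the percentages are the ones reported above; the base budget of 100 is invented):

```python
# Illustrative funding breakdown; the base budget is invented,
# the percentages are those reported in the post.
base_budget = 100.0                    # basic budget + expected additional costs (PM-controlled)
contingency = 0.08 * base_budget       # ~8% reserves, managed by the agencies
reduction_start = 0.059 * base_budget  # reduction list at project start (5.9%)
reduction_half = 0.008 * base_budget   # ...shrinking to 0.8% by half time

total_frame_start = base_budget + contingency + reduction_start
print(round(total_frame_start, 1))  # 113.9
```

The reduction list sits on top as a pre-agreed cut option, so the total funding frame can shrink back towards the base budget without renegotiation.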