Author Archive

Success Factors in IT Projects (Fortune & White 2006 and Nasir & Sahibuddin 2011)

Thursday, July 26th, 2012

Fortune, J. & White, D., 2006. Framing of project critical success factors by a systems model. International Journal of Project Management, 24(1), pp.53-65. Available at: http://linkinghub.elsevier.com/retrieve/pii/S0263786305000876.

Nasir, N.H.M. & Sahibuddin, S., 2011. Critical success factors for software projects: A comparative study. Scientific Research and Essays, 6(10), pp.2174-2186.

I have come across many pieces of work in progress that try to select the success factors of IT projects. These two notable articles publish fairly comprehensive literature reviews spanning the plethora of studies on this topic. Fortune & White reviewed 63 articles in total, Nasir & Sahibuddin 43. The first review also covers hard-to-find conference papers and case studies that have not made it onto the web yet, so the details of some of these studies are hard to verify.

Here is a Google spreadsheet with the key comparison of the success factors in both reviews, ordered by the amount of evidence the authors found to support each factor.

Link to the spreadsheet

Standish Chaos Report 2009

Tuesday, April 28th, 2009

[Chart: Standish CHAOS 2009 resolution figures]

The Standish Group just published their new findings from the 2009 CHAOS report on how well projects are doing.  This year's figures show that 32% of all projects are successful, 44% challenged, and 24% failed.  As the website says, these are startling results – especially since they still do not hold up to scientific standards, and replications of this survey (e.g. Sauer, Gemino & Reich) find exactly the opposite picture.

Integrating the change program with the parent organization (Lehtonen & Martinsuo, 2009)

Tuesday, April 28th, 2009


Lehtonen, Päivi; Martinsuo, Miia: Integrating the change program with the parent organization; in: International Journal of Project Management, Vol. 27 (2009), No. 2, pp. 154-165.
http://dx.doi.org/10.1016/j.ijproman.2008.09.002

 

Lehtonen & Martinsuo analyse the boundary spanning activities of change programmes.  They find five different types of organisational integration – internal integration 1a) in the programme, 1b) in the organisation; external integration 2a) in the organisation, 2b) in the programme, and 3) between programme and parent organisation.

Furthermore, they identify mechanisms of integration at these various levels.  These mechanisms are:

Mechanism of integration – examples
  • Structure & Control – steering groups, responsibility of line managers
  • Goal & content link – programme is part of a larger strategic change initiative
  • People links – cross-functional core team, part-time team members who stay in their local departments
  • Scheduling & Planning links – planning, project management, budgeting, reporting
  • Isolation – abandoning the standard corporate steering group, splitting between HQ and branch roll-out

 

The four most common types of boundary spanning activities are – (1) information scouting, (2) ambassador, (3) boundary shaping, and (4) isolation.  Firstly, information scouting is done via workshops, interviews, questionnaires, data requests &c.  Secondly, the project ambassador presents the programme in internal forums, focuses on quick wins and showcases them, publishes about the project in HR magazines &c.  Thirdly, boundary shaping is done by negotiating scope and resources and by defining responsibilities.  Fourthly, isolation typically takes place through withholding information, establishing a separate work/team/programme culture, and planning internally – basically by gatekeeping and blocking.

Web 2.0 – what is it really about?

Monday, April 27th, 2009


This is beautiful, and done by some very clever colleagues of mine.  It is thought provoking, and since a lot of people out there run wild for '2.0' postfixes to whatever they do, this is the ultimate checklist.  It is in essence what Jeff Jarvis wrote in What Would Google Do?, although only few people like the book and I have not even started reading it.

Orthodoxies —> New freedoms (Example)
  • Roles of companies and customers are distinct —> Customers are an integral part of the operations (customers as designers, customers as clerks)
  • Companies' size gives them an edge over individuals —> Access to better information and cheaper communications reduces the advantage of size (newspapers vs. blogs)
  • Competitive advantage derives from control over a unique asset —> Orchestration trumps ownership (Linux, Wikipedia)
  • Hierarchies are the best organising framework —> Reduced cost of information and communication enables adaptive, loosely coupled organisations (open source)
  • Business processes are batch-driven —> Continuous information flow drives operations to resemble continuous processes (services)
  • The best people trust their gut —> Data ubiquity reduces subjectivity (Google)
  • You pay for what you get —> Consumers get valuable services for free ("free is a better price than cheap") (music, Google Apps)
  • Fat tails, short tails —> Long tails can be served and offer attractive margins (Amazon)

Failure at the Speed of Light (Hickerson, 2006)

Saturday, April 25th, 2009

[Hot stuff! Not because it is fresh or hot off the press, but because, surprisingly, "Failed Projects" is the most-read category on this blog this year with more than 700 readers; the runner-up is "Critical Success Factors", and it is catching up quickly.]


Hickerson, Thomas B.: Failure at the Speed of Light – Project Escalation and De-escalation in the Software Industry, Master of Arts in Law and Diplomacy Thesis, Tufts University, Medford, Massachusetts, 2006.
http://fletcher.tufts.edu/research/2006/Hickerson.pdf

 

Hickerson analyses three case studies – (1) the California Department of Social Services' implementation of the California Automated Child Support System (CACSS), (2) Denver International Airport's baggage handling system, and (3) the FBI's Virtual Case File.

Firstly, CACSS was planned and started in 1992; the tender went to Lockheed Martin for $75m with a go-live in December 1995.  The whole project was cancelled in 1997 after direct expenses of $100m had been incurred.  A new version of the system started development in 2000 and finally went live in June 2008 (see this news snippet) and it is online here.

The second case is the Denver International Airport baggage handling system.  This case is infamous to the extent that it made it onto Wikipedia; the full GAO report can be found here.  In short: BAE won the tender for $193m.  The system development had so many problems that, in order to open the airport, an alternative manual handling system was installed, which came at a price tag of only $51m; at the time of opening the airport (Feb 1995, 2 years behind schedule anyway) the delays caused by the baggage handling system were estimated at $360m.  Before United axed the system it cost about $1m per month in maintenance.

The third case is the FBI's Virtual Case File.  The project started in 2001 and quickly reached the typical 90% complete mark.  After 4 years the solution was replaced by commercial off-the-shelf software, and another 3 months later the project was cancelled altogether after $607m had been spent.

 

Drawing on Social Justification Theory, Agency Theory, and Approach Avoidance Theory, Hickerson argues that 'internal inadequacies in dealing with external threats' were the main reason for the failures.  In the case of DIA, project factors contributed to the failure, such as the investment character of the project, long-term pay-offs, the large size of the pay-offs, resources being seen as plentiful, and setbacks being seen as secondary.

Apart from project factors, psychological factors contributed to the failure of these cases, e.g., personal responsibility, ego importance, prior success and reinforcement, and the irreversibility of prior expenditures.  Thirdly, social factors were responsible, such as responsibility for failure, norms for consistency, hero effects, public identification with the course of action, and job insecurity.  Fourthly, structural factors played a role, e.g., political support, institutionalisation, and bureaucracy.

 

What were the tipping points that tipped the projects into escalation?

  • Change in top management support
  • External shocks to the organisation
  • Change in project champion
  • Organisational tolerance for failure
  • Presence of publicly stated resources
  • Alternate use of funds
  • Awareness of problems
  • Visibility of costs
  • Clarity of failure & success criteria
  • Organisational procedures of decision-making
  • Regular evaluation of projects
  • Separation of responsibility for approving and evaluating projects

What needs to be done to prevent escalation?

  1. Strict timeline
  2. Clear acceptance criteria
  3. Daily meetings between CIO and Managers
  4. Adherence to baseline requirements

The influence of checklists and roles on software practitioner risk perception and decision-making (Keil et al., 2008)

Friday, April 24th, 2009


Keil, Mark; Li, Lei; Mathiassen, Lars;  Zheng, Guangzhi: The influence of checklists and roles on software practitioner risk perception and decision-making; in: Journal of Systems and Software, Vol. 81 (2008), No. 6, pp. 908-919.
http://dx.doi.org/10.1016/j.jss.2007.07.035 

In this paper the authors present the results of 128 role plays they conducted with software practitioners.  These role plays analysed the influence of checklists on risk perception and decision-making.  The authors also controlled for the role of the participant, i.e., whether he/she was an insider (project manager) or an outsider (consultant).  They found that the role had no effect on the risks identified.

Keil et al. created a risk checklist based on the software risk model first conceptualised by Wallace et al. This model distinguishes six risk dimensions – (1) Team, (2) Organisational environment, (3) Requirements, (4) Planning and control, (5) User, and (6) Complexity.
In their role plays the authors found that checklists have a significant influence on the number of risks identified. However, the number of risks does not influence decision-making.  Decision-making is influenced by whether the participants have identified certain key risks or not.  Therefore risk checklists can influence the salience of risks, i.e., whether they are perceived or not, but they do not influence decision-making.

The resource allocation syndrome (Engwall & Jerbant, 2003)

Wednesday, April 22nd, 2009


Engwall, Mats; Jerbant, Anna: The resource allocation syndrome – the prime challenge of multi-project management?; in: International Journal of Project Management, Vol. 21 (2003), No. 6, pp. 403-409.
http://dx.doi.org/10.1016/S0263-7863(02)00113-8

Engwall & Jerbant analyse the nature of organisations whose operations are mostly carried out as simultaneous or successive projects. By studying a couple of qualitative cases, the authors try to answer why the resource allocation syndrome is the number one issue of multi-project management and which underlying mechanisms are behind this phenomenon.

The resource allocation syndrome lies at the heart of operational problems in multi-project management; it is called a syndrome because multi-project management is mainly obsessed with the front-end allocation of resources.  This shows in the main characteristics: projects have interdependencies and typically lack resources; management is concerned with priority setting and resource re-allocation; competition arises between the projects; and management focuses on short-term problem solving.

The root causes of this syndrome can be found on both the demand and the supply side.  On the demand side, the two root causes identified are, firstly, the effect of failing projects on the schedule – the authors observed that project delays cause after-the-fact prioritisation and thus make management reactive and rather unhelpful – and, secondly, over-commitment, which cripples multi-project management.

On the supply side the problems are caused by management accounting systems – in this case the inability to properly record all resources and projects – and by opportunistic management behaviour, especially grabbing and booking good people before they are needed, just to have them on the project.

Human Effort Dynamics and Schedule Risk Analysis (Barseghyan, 2009)

Tuesday, April 21st, 2009


Barseghyan, Pavel: Human Effort Dynamics and Schedule Risk Analysis; in:  PM World Today, Vol. 11(2009), No. 3.
http://www.pmforum.org/library/papers/2009/PDFs/mar/Human-Effort-Dynamics-and-Schedule-Risk-Analysis.pdf

Barseghyan has researched the human dynamics of project work extensively.  He has formulated a system of mathematics quite similar to Boyle's law and the other gas laws, and establishes a simple set of formulas to schedule the work of software developers.

T = Time, E = Effort, P = Productivity, S = Size, and D = Difficulty
Then W = E * P = T * P = S * D, and thus T = S*D/P

But – and now it gets tricky – S, D, and P are correlated!

The author has collected enough data to show the typical curves for Difficulty –> Duration and Difficulty –> Productivity.  To schedule and synchronise two tasks, the D/P ratio has to be the same for both tasks – see the sketch below.
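Here is a minimal sketch (my own illustration with made-up numbers, not Barseghyan's) of how the duration formula and the synchronisation condition play together:

```python
# Sketch of the basic relationship from the post (illustrative values only):
# W = E * P = T * P = S * D, hence T = S * D / P.

def duration(size: float, difficulty: float, productivity: float) -> float:
    """Expected task duration T = S * D / P."""
    return size * difficulty / productivity

# Two hypothetical tasks: to run them in sync (equal duration for equal size),
# their D/P ratios have to match.
task_a = {"S": 100.0, "D": 1.2, "P": 3.0}   # made-up numbers
task_b = {"S": 100.0, "D": 2.0, "P": 5.0}

t_a = duration(task_a["S"], task_a["D"], task_a["P"])
t_b = duration(task_b["S"], task_b["D"], task_b["P"])

print(f"T_a = {t_a:.1f}, D/P = {task_a['D'] / task_a['P']:.2f}")
print(f"T_b = {t_b:.1f}, D/P = {task_b['D'] / task_b['P']:.2f}")
# Both tasks have D/P = 0.40 and equal size, so their durations come out equal (40.0).
```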

 

Barseghyan then continues to explore the relationship between Difficulty and Duration in detail.  He argues that the common notion of bell-shaped distributions is flawed because of the non-linear relationship between Difficulty and Duration [note that his curves have a linear segment followed by an exponential part].  If the Difficulty bell curve is transformed into the Difficulty –> Duration probability function using that non-linear transformation, it loses its normality and results in a fat-tailed distribution.  Therefore, Barseghyan argues, the notion of using bell-shaped curves in planning is wrong.
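To see why a non-linear Difficulty –> Duration mapping destroys normality, here is a small Monte-Carlo sketch; the piecewise linear/exponential transform below is my own stand-in for the curves in the paper, not Barseghyan's actual formula.

```python
import numpy as np

rng = np.random.default_rng(42)

# Difficulty is assumed bell-shaped (normal, truncated at 0).
difficulty = np.clip(rng.normal(loc=5.0, scale=1.5, size=100_000), 0.0, None)

def diff_to_duration(d: np.ndarray) -> np.ndarray:
    """Stand-in transform: linear for easy tasks, exponential beyond a threshold."""
    threshold = 6.0
    return np.where(d <= threshold, d, threshold + 2.0 * np.expm1(d - threshold))

duration = diff_to_duration(difficulty)

# The transformed distribution is right-skewed with a fat right tail:
print("difficulty: mean %.2f, 99.9th pct %.2f" % (difficulty.mean(), np.percentile(difficulty, 99.9)))
print("duration:   mean %.2f, 99.9th pct %.2f" % (duration.mean(), np.percentile(duration, 99.9)))
# The 99.9th percentile of duration sits far further from its mean than for difficulty,
# which is why symmetric bell-curve planning underestimates schedule risk.
```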

Please Vote – Projects: Living People or Black Swans?

Sunday, April 19th, 2009

Everyone, please vote which approach you think is right.  Let me outline that for you.

Yesterday I read The Black Swan by Nassim Nicholas Taleb.  It is a great book.  In short, it is about our (as in humankind's) inability to predict rare events.  He details a lot of psychological reasons, e.g., tunnelling and the narrative fallacy, for us not being able to predict these Black Swans, and he also shows what we can do about it.  Great book – highly recommended.  Anyway, yesterday I was reading page 159 (for those who have a copy handy), and there he makes the hypothetical argument that we think about project deadlines as if they were probabilistically the same as our life expectancy – partly because that's how we evolved.  So what do you think – is that really true?  But let's understand the distinction in detail first.

1) Living People

Life expectancy figures are at the very centre of an actuary's daily life, at least for the ones who insure health and life &c.  When you meet one at a party you'll understand how exciting this topic can be.  What life expectancy figures do is look at the age-at-death distribution within a population; deaths are ordered neatly by age, and then the actuaries compute the probability of dying before your next birthday.  That also gives you an expected age – roughly the age by which half of your fellow birthday boys and girls will be dead.  Strictly speaking, the expected age is an average.

If you look up the tables – and they chart quite nicely as well (cf. the graph below) – you'll see that in 2004 in the US the expected age for a newborn was 77.8 years.  Some of them (a saddening 680) die before they reach their first birthday.  So if you make it there, you can expect to live for another 77.4 years, which puts your expected death shortly after your 78th birthday.  When you turn 30 you can expect to live for 49.3 more years (or 79.3 in total), when you reach 50 you can expect another 30.9 years (80.9 in total), and so on.

[Chart: expected remaining years of life by age]

Source: CDC Life table for the total population:  United States, 2004
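If you want to replicate this kind of conditional life expectancy yourself, here is a rough sketch of the computation on a discretised age-at-death distribution; the toy numbers below are made up and much coarser than the CDC table.

```python
import numpy as np

# Toy age-at-death distribution (NOT the CDC data): ages and probability of dying at that age.
ages = np.arange(0, 101)
weights = np.exp(-0.5 * ((ages - 78) / 12.0) ** 2)   # most mass around the late 70s
weights[:5] += 0.02                                   # a little infant mortality
p_death = weights / weights.sum()

def expected_remaining_life(current_age: int) -> float:
    """E[age at death | still alive at current_age] - current_age."""
    alive = ages >= current_age
    conditional = p_death[alive] / p_death[alive].sum()
    return float((ages[alive] * conditional).sum() - current_age)

for a in (0, 30, 50, 70):
    print(f"age {a:3d}: expect roughly {expected_remaining_life(a):.1f} more years")
# As in the CDC table, the expected age at death creeps up the older you already are,
# but the expected *remaining* years shrink with every birthday you survive.
```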

What if project delays were like a living population?

In the book Taleb argues that we typically think of project delays as having the same probabilistic properties as life expectancy curves.  That is, on average all projects are delayed by 3 weeks, and when a project is already delayed by 2 weeks it will be delayed by an additional 1.5 weeks, and so on.  Since I don't have nice raw data and my own data pool is not yet big enough for an analysis like that, once again I plundered the cost overrun figures from the Standish Group report.  Computed and charted, it looks like this:

[Chart: expected additional cost overrun by current overrun – life-expectancy-style computation on the Standish figures]

What does it say?  Well, when the project is still immaculate with 0% cost overrun, you would expect it to overrun its budget by +98%.  If your project already shows a budget overrun of +100%, it will need another +63% (so +163% in total); if it has overrun its budget by +300%, you would put aside another +27% (+327% in total); and so on.

2) The Black Swan

So what are these Black Swans all about?  An excerpt from the McKinsey Quarterly (No. 1, 2009) where the author summarises his central thesis beautifully (cf. the full article here):

Before Europeans discovered Australia, we had no reason to believe that swans could be any other color but white. But they discovered Australia, saw black swans, and revised their beliefs. My idea in The Black Swan is to make people think of the unknown and of the potency of the unknown, particularly a certain class of events that you can’t imagine but can cost you a lot: rare but high-impact events.

So my black swan doesn’t have feathers. My black swan is an event with three properties. Number one, its probability is low, based on past knowledge. Two, although its probability is low, when it happens it has a massive impact. And three, people don’t see it coming before the fact, but after the fact, everybody saw it coming. So it’s prospectively unpredictable but retrospectively predictable.

Now that we’re in this financial crisis, for example, everybody saw it coming. But did they own bank stocks? Yes, they did. In other words, they say that they saw it coming because they had some thoughts in the shower about this possibility—not because they truly took measures to protect themselves from it.

Now, a black swan can be a negative event like a banking crisis. It also can be positive: inventing new technology, making new discoveries, meeting your mate, writing a best seller, or developing a cure for cancer, baldness, or bad breath. In The Black Swan, I say that in the historical and socioeconomic domain, black swans are everything. If you ignore black swans, you’ve got nothing. And I showed that the computer, the Internet, and the laser—three recent technological black swans—came out of nowhere. We didn’t know what they were, and when we had them right before our eyes we didn’t know what to do with them. The Internet was not built as something to help people communicate in chat rooms; it was a military application and it evolved.

So these things have a life of their own. You cannot predict a black swan. We also have some psychological blindness to black swans. We don’t understand them, because, genetically, we did not evolve in an environment where there were a lot of black swans. It’s not part of our intuition.

In The Black Swan he argues on page 159 that when we make predictions of project schedules, we tend to make them without looking for external events.  In his example of a publishing deadline it may be the sick grandmother, sudden financial troubles which force the author to take on a night-shift job, or a terrorist attack that troubles your mind for some months.  These things happen, yet we never acknowledge them in the first place.  So he argues that a project that is late by 3 months should be expected to be late by another 5 weeks; if it then is not ready after 5 months, you would expect another 6 weeks; and at a year's delay you would rather expect it to be delayed another 5 years than expect it to be ready within the next 2 weeks.  He argues that in reality the marginal expected project delay increases, not decreases.

So, if we go ahead and compute the same Standish Figures with these probabilistic assumptions then we get the following picture:

[Chart: expected additional cost overrun by current overrun – Black-Swan-style computation on the Standish figures]

So what do these numbers tell us?  Well, if the project is on budget, we had better expect a +98% budget overrun.  If, however, it is +100% over budget, you had better expect +226% more (that is a +326% total budget overrun); and when you get there at +326% you would expect +441% additional costs, adding up to a whopping +767%.  You get the idea.

So if we chart that as an "expected total budget at completion, given a cost overrun of x" diagram, the curves look like this:

[Chart: expected total budget at completion by current cost overrun, for both assumptions]
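For the curious, here is a toy sketch contrasting the two probabilistic pictures – a thin-tailed and a fat-tailed overrun distribution.  The distributions and parameters are my own stand-ins, not the Standish data and not Taleb's numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy models of relative cost overrun (NOT the Standish data):
thin_tailed = np.abs(rng.normal(0.0, 1.0, 1_000_000))   # the 'living people' picture
fat_tailed = rng.pareto(1.5, 1_000_000)                  # the 'Black Swan' picture (Pareto, alpha = 1.5)

def expected_additional(sample: np.ndarray, already: float) -> float:
    """E[additional overrun | overrun is already >= 'already']."""
    tail = sample[sample >= already]
    return float(tail.mean() - already)

for x in (0.0, 1.0, 2.0, 4.0):
    print(f"overrun >= {x:.0f}: thin-tail expects +{expected_additional(thin_tailed, x):.2f} more, "
          f"fat-tail expects +{expected_additional(fat_tailed, x):.2f} more")
# Thin-tailed: the expected *additional* overrun shrinks the further along you already are.
# Fat-tailed: it grows - exactly Taleb's point about project delays.
```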

 

3) Your turn – the vote

What do you think is true for Projects – do cost overruns of projects show probabilistic features of living people or are they Black Swans?


Thanks. 

The Definite List of Project Management Methodologies

Friday, January 16th, 2009

This is a gem.
Craig Brown from the 'Better Projects' blog (here) created a presentation on Jurgen Appelo's Definite List of Project Management Methodologies.  Jurgen first published his list on his blog over at noop.nl and has now moved it into a Google Knol here.  Craig turned it into a great tongue-in-cheek presentation.  I very much enjoyed it, so here it is:

Software Project Methods


Understanding software project risk – a cluster analysis (Wallace et al., 2004)

Wednesday, January 14th, 2009


Wallace, Linda; Keil, Mark; Rai, Arun: Understanding software project risk – a cluster analysis; in: Information & Management, Vol. 42 (2004), pp. 115–125.

Wallace et al. conducted a survey among 507 software project managers worldwide.  They tested a vast set of risks and grouped the projects into three clusters: high-, medium-, and low-risk projects.

The authors assumed 6 dimensions of software project risks –

  1. Team risk – turnover of staff, ramp-up time; lack of knowledge, cooperation, and motivation
  2. Organisational environment risk – politics, stability of organisation, management support
  3. Requirement risk – changes in requirements, incorrect and unclear requirements, and ambiguity
  4. Planning and control risk – unrealistic budgets, schedules; lack of visible milestones
  5. User risk – lack of user involvement, resistance by users
  6. Complexity risk – new technology, automating complex processes, tight coupling

Wallace et al. showed two interesting findings.  Firstly, overall project risk is strongly related to project performance – the higher the risk, the lower the performance!  Secondly, they found that even low-risk projects carry a high complexity risk.

Understanding the Nature and Extent of IS Project Escalation (Keil & Mann, 1997)

Tuesday, January 13th, 2009


Keil, Mark; Mann, Joan: Understanding the Nature and Extent of IS Project Escalation – Results from a Survey of IS Audit and Control Professionals; in: Proceedings of The Thirtieth Annual Hawaii International Conference on System Sciences, 1997.

"Many runaway IS projects represent what can be described as continued commitment to a failing course of action, or "escalation" as it is called in management literature."  Keil & Mann argue that escalation of projects can be explained with four different factors – (1) project factors, (2) social factors, (3) psychological factors, and (4) organisational factors.

  1. Project Factors – cost & benefits, duration of the project
  2. Social Factors – rivalry between projects, need for external justification, social norms
  3. Organisational Factors – structural and political environment of the project
  4. Psychological Factors – managers' previous experience, sunk-cost effects, self-justification

In 1995 Keil & Mann conducted a survey among IS auditors; their most interesting findings include:

  • 38.3% of all SW development projects show some level of escalation (Original question: ‚In your judgement, how many projects are escalated?‘).
  • When asked ‚From your last 5 projects how many escalated?‘ 19% of the auditors said none, 28% said 1, 20% said 2, 16% said 3, 10% said 4 and 8% said all 5.
  • Schedule escalation: 38% of projects escalated by 1-12 months, 36% by 13-24 months; the maximum in the sample was 21 years.
  • The average budget overrun of projects was 20%; when projects escalate, the average budget overrun is 158%.
  • 82% of all escalated projects run over their budget, whilst only 48% of all non-escalated projects do.
  • The success rate of escalated projects is devastating – of all escalated projects, 23% were completed successfully, 18% were abandoned, 5% were completed but never implemented, 23% were partially completed, 18% were unsuccessfully completed, and 8% were completed and then withdrawn.

Furthermore, Keil & Mann test the reasons for escalation behaviour, based on their four-factor concept.  They found the main reasons for project escalation were:

  • Underestimation of time to completion
  • Lack of monitoring
  • Underestimation of resources
  • Underestimation of scope
  • Lack of control
  • Changing specifications
  • Inadequate planning

Why Software Projects Escalate – An Empirical Analysis and Test of Four Theoretical Models (Keil et al., 2000)

Monday, January 12th, 2009


Keil, Mark; Mann, Joan; Rai, Arun: Why Software Projects Escalate – An Empirical Analysis and Test of Four Theoretical Models; in: MIS Quarterly, Vol. 24 (2000), No. 4, pp. 631-664.

The authors describe that "Software projects can often spiral out of control to become 'runaway systems'…".  Keil et al. define escalation behaviour as continually adding resources to a project; thus escalated projects typically overrun their schedule and budget.  The authors describe the case of the Statewide Automated Child Support System (SACSS) of California's Dept. of Social Services.  The project was started in 1992 with a projected budget of USD 75.5m and a go-live date in 1995.  The project escalated, cost an estimated USD 345m, and was finally terminated in 1997 without any deliverables in place.

Based on a large-scale survey of IS auditors, Keil et al. found that 30-40% of all software development projects show some degree of escalation.  The authors then analyse four different theoretical models that may explain escalation – (1) Self-Justification Theory, (2) Prospect Theory, (3) Agency Theory, and (4) Approach Avoidance Theory.

Self-Justification Theory – SJT is grounded in Festinger's cognitive dissonance; most simply, self-justification can be characterised as retrospectively rationalising behaviour that is found to violate internal or external beliefs, attitudes, or norms.  As the Wikipedia article on SJT describes, self-justification typically manifests in two forms: internal SJT and external SJT.  Internal SJT strategies are changing the violated attitude and downplaying or denying the negative consequences, whilst external SJT strategies are all sorts of external excuses, from bad luck to a lack of competencies.  Keil et al. argue that two effects are relevant for escalation behaviour – social and psychological self-justification.  Whilst psychological self-justification is a strategy to overcome dissonance, social pressures increase the need for self-justification, e.g., saving face.

Prospect Theory – Kahneman & Tversky's Prospect Theory and their Cumulative Prospect Theory describe decision-making under uncertainty and risk.  Keil et al. argue that Prospect Theory explains escalation behaviour because the theory postulates that, when choosing between two adverse options, decision-makers become risk-seeking.  Commonly this is also referred to as the 'sunk cost effect'.
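A compact way to see the risk-seeking-in-losses point is Kahneman & Tversky's S-shaped value function.  The sketch below uses their commonly cited parameter estimates (α = β = 0.88, λ = 2.25) and an invented gamble; neither is part of Keil et al.'s paper.

```python
def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88, lam: float = 2.25) -> float:
    """Kahneman & Tversky value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Choice between a sure loss of 50 and a 50/50 gamble of losing 100 or nothing:
sure_loss = prospect_value(-50)
gamble = 0.5 * prospect_value(-100) + 0.5 * prospect_value(0)

print(f"value of the sure loss: {sure_loss:.1f}")
print(f"value of the gamble:    {gamble:.1f}")
# The gamble is valued less negatively than the sure loss, so the decision-maker gambles on -
# the same logic that keeps a troubled project running instead of writing it off.
```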

Agency Theory – Jensen & Meckling's concept of principal-agent relationships covers many examples of principals delegating decision competencies or the execution of tasks to agents.  A principal-agent relationship turns sour (aka the principal-agent problem) when goal incongruence and information asymmetry create a constellation of imperfect contracting and monitoring.  In general terms, the agents maximise their self-interest at the expense of the principal.  The simplest real-world example: the software integrator whom you hired to do most of your work never wants your project to finish.

Approach Avoidance Theory – Rubin & Brockner's concept of approach avoidance describes every approach vs. avoidance decision as being driven by the iconic little angel vs. little devil.  In the case of escalated projects, these forces either encourage persistence or abandonment of the project.  Three factors explain why projects are not terminated – (1) the size of the reward for goal attainment, (2) withdrawal costs, and (3) goal proximity.  Keil et al. argue that especially the third factor, goal proximity, creates a completion effect, which is explained by the need for task closure.  They argue that this is a better conceptualisation of escalation symptoms than the sunk cost effect, aka throwing good money after bad.  They describe that completion effects pull an individual towards the goal, whilst sunk cost effects push an individual onwards from behind.  A beautiful real-life example is the 90%-completion syndrome:

This syndrome refers to the tendency for estimates of work completed to increase steadily until a plateau of 90% is reached. Thereafter, programmer estimates of the fraction of work completed increase very slowly. In some cases, inaccurate estimation leads to situations in which software projects are reported to be 90% complete for half of the entire duration of the project, an obvious impossibility (Brooks 1975).

Keil et al. test six constructs taken from these four theories that have previously been connected to escalation behaviour – psychological self-justification, social self-justification, sunk cost effect, goal incongruence, information asymmetry, and completion effect.
With a series of pairwise logistic regression models comparing the groups of escalated vs. non-escalated projects, all six constructs and therefore all four theories find empirical support.  However, the best classifier for escalation vs. non-escalation is the completion effect.
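For the statistically inclined, the analysis boils down to a series of single-predictor logistic regressions of escalated vs. non-escalated on each construct.  Here is a rough sketch with scikit-learn on randomly generated stand-in data; the survey data itself is of course not public, and the three construct names are just the ones I picked out.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400  # stand-in sample size, not the study's

# Fake construct scores; 'completion_effect' is deliberately built to separate the groups best.
escalated = rng.integers(0, 2, n)
constructs = {
    "psychological_self_justification": escalated * 0.5 + rng.normal(0, 1, n),
    "sunk_cost_effect":                 escalated * 0.6 + rng.normal(0, 1, n),
    "completion_effect":                escalated * 1.5 + rng.normal(0, 1, n),
}

for name, x in constructs.items():
    model = LogisticRegression().fit(x.reshape(-1, 1), escalated)
    acc = model.score(x.reshape(-1, 1), escalated)
    print(f"{name:34s} coefficient {model.coef_[0][0]:+.2f}, in-sample accuracy {acc:.2f}")
# Each construct comes out with a positive coefficient; the best single classifier in this toy
# data is the completion effect, mirroring the paper's headline result.
```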

Institutional reform (Stone, 2008)

Friday, January 9th, 2009


Stone, Alastair: Institutional reform – A decision-making process view; in: Research in Transportation Economics, Vol. 22 (2008), No. 1, pp. 164-178.

Stone analyses the fundamentals of the DMP (decision-making process) and proposes a model, which he then applies to the example of urban land passenger transport.   This summary focuses on the analysis of the DMP.  Firstly, Stone draws a model which combines DMP costs with product scale and assumes a positive correlation between the two: the larger the product, the higher the DMP costs. He outlines the entity involved, the analytical mechanisms, and the analytical techniques; e.g., for large-scale products the entity is usually a large collective, an expert analyst is the mechanism of choice for analysis, and a product-specific discounted cash flow is the technique for the analysis.  Secondly, Stone shows that over time in the DMP the number of options decreases, whilst the average value of each option increases.

More interestingly, he looks at different decisions and identifies four levels of societal decision processes:

  1. Embeddedness = informal institutions, customs and traditions, norms and religion (happens every 100 to 1,000 years; is explained by social theory; and its purpose is non-calculative and spontaneous)
  2. Institutional environment = formal rules of the game (happens every 10 to 100 years; is explained by economics of property rights; and the purpose of it is to get the institutional environment right)
  3. Governance = play of the game (happens every 1 to 10 years; is explained by transaction cost economics; and the purpose of it is to get the governance structures right)
  4. Resource allocations (happen continuously; is explained by neo-classical economics; and have the purpose of getting the marginal conditions right)

Higher levels impose constraints on lower-level decisions, whilst these return feedback to the top. Finally, he outlines the DMP:
Input Variables (utilities, motivation, rights & power, and information) go into the Choice Mechanism (analysis & judgement) and result in a Resource Demand.

Decision-Making for New Technology – A Multi-Actor, Multi-Objective Method (Cunningham & Lei, 2007)

Friday, January 9th, 2009


Cunningham, Scott W.; van der Lei, Telli E.: Decision-Making for New Technology: A Multi-Actor, Multi-Objective Method; in: Management of Engineering and Technology, 2007, No. 5-9, pp. 1176-1185.

Cunningham & Lei present a method that does not aggregate individuals' preferences but instead considers strategic and economic factors in multi-criteria decision analysis (MCDA).

They explicitly model exogenous and intrinsic values into their criteria. The exogenous values are based on MCDA (value at risk & trade-offs), and game theory (strategies & values).  The intrinsic values are based on cooperative game theory (negotiations), and preferences (revealed preferences from historic choices & value elicitation).

Using weighted linear value functions, they model the system of a decision-maker deciding on new technology.  Then the authors expand their system model to include alliances and outside options.  Their results are somewhat unexpected: the system does not agree on an equilibrium price (aka exchange rate), because individual companies in the alliance profit from raising prices locally within the network.  Thus they ask: "Given scarce resources, where must alliances trades be made to maximally enhance profitability?"

Uncertainty Sensitivity Planning (Davis, 2003)

Thursday, January 8th, 2009


Davis, Paul K.: Uncertainty Sensitivity Planning; in: Johnson, Stuart; Libicki, Martin; Treverton, Gregory F. (Eds.): New Challenges – New Tools for Defense Decision Making, 2003, pp. 131-155; ISBN 0-8330-3289-5.

Who is better at planning for very complex environments than the military?  On projects we set up war rooms, we draw mind maps that look like tactical attack plans, and sometimes we use very militaristic language.  So what's more obvious than a short Internet search on planning and the military?

Davis describes a new planning method – Uncertainty Sensitivity Planning.  Traditional planning characterises a no-surprises future environment – much like the planning we usually do.  The next step is to identify shocks and branches, thus creating four different strategies:

  1. Core Strategy = Develop a strategy for no-surprises future
  2. Contingent Sub-Strategies = Develop contingent sub-strategies for all branches of the project
  3. Hedge Strategy = Develop capabilities to help with shocks
  4. Environmental Shaping Strategy = Develop strategy to improve odds of desirable futures

Uncertainty Sensitivity Planning combines capabilities-based planning with an environmental shaping strategy and actions. 
Capabilities-based planning plans along modular capabilities, i.e., building blocks which are usable in many different ways.  On top of that, an assembly capability to combine the building blocks needs to be planned for.   The goal of planning is to create flexibility, adaptiveness, and robustness – it is not optimisation.  Thus multiple measures of effectiveness exist. 
During planning there needs to be an explicit role for judgement and qualitative assessments.  The economics of choice are explicitly accounted for. 
Lastly, planning requirements are reflected in high-level choices, which are based on capabilities-based analysis.

Application of Multicriteria Decision Analysis in Environmental Decision Making (Kiker et al., 2005)

Thursday, January 8th, 2009


Kiker, Gregory A.; Bridges, Todd S.; Varghese, Arun; Seager, Thomas P.; Linkov, Igor: Application of Multicriteria Decision Analysis in Environmental Decision Making; in: Integrated Environmental Assessment and Management, Vol. 1 (2005), No. 2, pp. 95-108.

Kiker et al. review Multi-Criteria Decision Analysis (MCDA).  The authors define MCDA as decisions, typically group decisions, with multiple criteria that carry different monetary and non-monetary values.  The MCDA process follows two steps – (1) construct a decision matrix, (2) synthesise by ranking the alternatives by different means.

What are solutions/methods to apply MCDA in practice?

  • MAUT & MAVT
    multi-attribute utility theory & multi-attribute value theory
    = each score is given a utility, then utilities are weighted and summed up to choose an alternative (see the sketch after this list)
  • AHP
    analytical hierarchy process
    = pairwise comparison of all criteria to determine their importance
  • Outranking
    = pairwise comparison of all scenarios
  • Fuzzy
  • Mental Modelling
  • Linear Programming

Hierarchy of inquiring systems in meta-modelling (Gigch & Pipino, 1986)

Wednesday, January 7th, 2009


This concludes our little journey into constructivism, complex system thinking, and the big question: "What do we really really really know?"

Inputs —> Philosophy of Science —> Outputs
Evidence, epistemological questions —> Epistemology —> Paradigm
Evidence, scientific problems —> Science —> Theories & Models
Evidence, managerial problems —> Practice —> Solution to problems

System of Systems (Flood & Jackson, 1991) & Decision-making process (Simon, 1976)

Wednesday, January 7th, 2009

Not really a summary of two articles, but rather a summary of two constructivists' concepts.

Firstly, Flood and Jackson propose a System of Systems and point out the modelling approaches suitable for these specific systems:

  • Simple / Unitary – Operational Research, Systems Analysis, Systems Engineering, System Dynamics
  • Simple / Pluralist – Social Systems Design, Strategic Assumption Surfacing and Testing
  • Simple / Coercive – Critical Systems Heuristics
  • Complex / Unitary – Viable Systems Model, General Systems Theory, Socio-Technical Systems Thinking, Contingency Theory
  • Complex / Pluralist – Interactive Planning, Soft Systems Methodology
  • Complex / Coercive – (no approach listed)

Secondly, because at some point in time I just had to write it down again, Simon’s constructivist process of decision-making, originally published in 1979:

Intelligence (Is vs. Ought situation) —> Design (Problem Solving) —> Choice —> Implementation —> Evaluation

With the extension of decision loops, as proposed by Le Moigne, if no choice can be made – revisiting the Design, the Intelligence, or even the Initial step:

  • Re-Design – the How
  • Re-Finalisation – the What
  • Re-Justification – the Why

Decision Making Within Distributed Project Teams (Bourgault et al., 2008)

Monday, November 3rd, 2008


Bourgault, Mario; Drouin, Nathalie; Hamel, Émilie: Decision Making Within Distributed Project Teams – An Exploration of Formalization and Autonomy as Determinants of Success; in: Project Management Journal, Vol. 39 (2008), Supplement, pp. S97–S110.
DOI: 10.1002/pmj.20063

Bourgault et al. analyse group decision making in virtual teams. Their article is based on the principles of limited rationality, i.e. deciding is choosing from different alternatives, and responsible choice, i.e. deciding is anticipating outcomes of the decision.

Existing literature discusses the effects of virtualising teams controversially. Some authors argue that virtual teams lack social pressure and thus show a smaller likelihood of escalation-of-commitment behaviour, whilst making more objective and faster decisions. Other authors find no difference in working style between virtual and non-virtual teams. Generally, the literature explains that decision errors are mostly attributed to breakdowns in rationality, which are caused by power and group dynamics. Social pressure in groups also hampers efficiency. In any team with distributed knowledge the leader must coordinate and channel the information flow.

Bourgault et al. conceptualise that Formalisation and Autonomy impact the quality of decision-making, which then influences teamwork effectiveness. All this is moderated by the geographic dispersion of the team.
They argue that formalisation, which structures and controls the decision-making activities, helps distributed teams to share information. Autonomy is a source of conflict, for example with higher management due to a lack of understanding and trust; ultimately it weakens a project's decision-making because it diverts horizontal information flow within the team into vertical information flow between project and management.
Quality of the decision-making process – the authors argue that groups have more information resources and therefore can make better decisions, but this comes at an increased cost of decision-making. Geographically distributed teams lack signals and have difficulties in sharing information. Thus high-quality teamwork benefits from more dispersed knowledge, but low-quality teamwork suffers from a lack of hands-on leadership.
Teamwork effectiveness – this construct has mostly been measured using satisfaction measurements and student samples. Other measures are the degree of task completion, goal achievement, and self-efficacy (intent to stay on the team, ability to cope, perceived individual performance, perceived team performance, satisfaction with the team). Bourgault et al. measure teamwork effectiveness by asking for the perceived performance on task completion, goal achievement, information sharing, conflict resolution, problem solving, and creating a preferable and sustainable environment.

The authors' quantitative analysis shows that in moderately dispersed teams all direct and indirect effects can be substantiated, with the exception of autonomy influencing the quality of decision-making. Similarly, in highly dispersed teams all direct and indirect effects could be supported, except for the direct influence of formalisation on teamwork effectiveness.

Bourgault et al. conclude with three recommendations for practice – (1) distributing a team contributes to high-quality decisions, although it seems to come at a high cost; (2) autonomous teams achieve better decisions – "despite the fear of an out-of-sight, out-of-control syndrome"; (3) formalisation adds value to teamwork, especially the more distributed the team is.