Uncertainty Sensitivity Planning (Davis, 2003)

January 8th, 2009


Davis, Paul K.: Uncertainty Sensitivity Planning; in: Johnson, Stuart; Libicki, Martin; Treverton, Gregory F. (Eds.): New Challenges – New Tools for Defense Decision Making, 2003, pp. 131-155; ISBN 0-8330-3289-5.

Who is better at planning for very complex environments than the military?  On projects we set up war rooms, we draw mind maps that look like tactical attack plans, and sometimes we use very militaristic language.  So what could be more obvious than a short Internet search on planning and the military?

Davis describes a new planning method – Uncertainty Sensitivity Planning.  Traditional planning characterises a no-surprises future environment – much like the planning we usually do.  The next step is to identify shocks and branches, thus creating four different strategies:

  1. Core Strategy = Develop a strategy for the no-surprises future
  2. Contingent Sub-Strategies = Develop contingent sub-strategies for all branches of the project
  3. Hedge Strategy = Develop capabilities to help with shocks
  4. Environmental Shaping Strategy = Develop strategy to improve odds of desirable futures

Uncertainty Sensitivity Planning combines capabilities-based planning with an environmental shaping strategy and actions.
Capabilities-based planning plans along modular capabilities, i.e., building blocks which are usable in many different ways.  On top of that, an assembly capability to combine the building blocks needs to be planned for.  The goal of planning is to create flexibility, adaptiveness, and robustness – it is not optimisation.  Thus multiple measures of effectiveness exist.
During planning there needs to be an explicit role for judgements and qualitative assessments.  Economics of choice are explicitly accounted for. 
Lastly, planning requirements are reflected in high-level choices, which are based on capability based analysis.

Application of Multicriteria Decision Analysis in Environmental Decision Making (Kiker et al., 2005)

January 8th, 2009


Kiker, Gregory A.; Bridges, Todd S.; Varghese, Arun; Seager, Thomas P.; Linkov, Igor: Application of Multicriteria Decision Analysis in Environmental Decision Making; in: Integrated Environmental Assessment and Management, Vol. 1 (2005), No. 2, pp. 95-108.

Kiker et al. review Multi-Criteria Decision Analysis (MCDA).  The authors define MCDA as decisions, typically group decisions, with multiple criteria carrying different monetary and non-monetary values.  The MCDA process follows two steps – (1) construct a decision matrix, (2) synthesise by ranking alternatives by different means.

What are solutions/methods to apply MCDA in practice?

  • MAUT/MAVT
    multi-attribute utility theory & multi-attribute value theory
    = each score is given a utility; utilities are weighted and summed up to choose an alternative
  • AHP
    analytical hierarchy process
    = pairwise comparison of all criteria to determine their importance
  • Outranking
    = pairwise comparison of all alternatives
  • Fuzzy
  • Mental Modelling
  • Linear Programming
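To make the first of these concrete, here is a minimal weighted-sum sketch in the spirit of multi-attribute utility theory (the alternatives, criteria, utilities, and weights are invented purely for illustration):

```python
# Minimal weighted-sum MAUT sketch: each alternative gets a utility per
# criterion; weighted utilities are summed and the highest total wins.

def maut_rank(alternatives, weights):
    """alternatives: {name: {criterion: utility in [0, 1]}}; weights: {criterion: weight}."""
    totals = {
        name: sum(weights[c] * u for c, u in scores.items())
        for name, scores in alternatives.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical decision matrix with normalised utilities:
alternatives = {
    "site A": {"cost": 0.8, "risk": 0.4, "impact": 0.9},
    "site B": {"cost": 0.6, "risk": 0.9, "impact": 0.5},
}
weights = {"cost": 0.5, "risk": 0.3, "impact": 0.2}
ranking = maut_rank(alternatives, weights)
print(ranking[0][0])  # top-ranked alternative
```

The same decision matrix could feed the other methods – AHP would derive the weights from pairwise criteria comparisons instead of fixing them up front.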

Hierarchy of inquiring systems in meta-modelling (Gigch & Pipino, 1986)

January 7th, 2009


This concludes our little journey into constructivism, complex system thinking, and the big question: "What do we really really really know?"

Inputs —> Philosophy of Science —> Outputs

  • Evidence, epistemological questions —> Epistemology —> Paradigms
  • Evidence, scientific problems —> Science —> Theories & Models
  • Evidence, managerial problems —> Practice —> Solutions to problems

System of Systems (Flood & Jackson, 1991) Decision making process (Simon, 1976)

January 7th, 2009

Not really a summary of two articles, but rather a summary of two constructivists‘ concepts.

Firstly, Flood and Jackson propose a System of Systems and point out the modelling approaches suitable for these specific systems:

  • Simple–Unitary: Operations Research, Systems Analysis, Systems Engineering, System Dynamics
  • Simple–Pluralist: Social Systems Design, Strategic Assumption Surfacing and Testing
  • Simple–Coercive: Critical Systems Heuristics
  • Complex–Unitary: Viable Systems Model, General Systems Theory, Socio-Technical Systems Thinking, Contingency Theory
  • Complex–Pluralist: Interactive Planning, Soft Systems Methodology
  • Complex–Coercive: (no method given)

Secondly, because at some point in time I just had to write it down again, Simon’s constructivist process of decision-making, originally published in 1979:

Intelligence (Is vs. Ought situation) —> Design (Problem Solving) —> Choice —> Implementation —> Evaluation

With the extension of decision loops if no choice can be made as proposed by Le Moigne, revisiting the Design, Intelligence, or even the Initial step:

  • Re-Design – the How
  • Re-Finalisation – the What
  • Re-Justification – the Why

A Principal Exposition of Jean-Louis Le Moigne’s Systemic Theory (Eriksson, 1997)

January 6th, 2009


Eriksson, Darek: A Principal Exposition of Jean-Louis Le Moigne’s Systemic Theory; in: Cybernetics and Human Knowing, Vol. 4 (1997), No. 2-3.

When thinking about complexity and systems one sooner or later comes across Le Moigne.  The point of departure is the dilemma of simplification vs. intelligence.  Systems therefore have to be distinguished as either complicated – that is, reducible – or complex – showing surprising behaviour.

Complicated vs. Complex
This distinction follows the same lines as closed vs. open systems, and mono- vs. multi-criteria optimisation.  Closed/mono-criteria/complicated systems can be optimised by using algorithms, simplifying the system, and evaluating the solution by its efficacy.  On the other hand, open/multi-criteria/complex systems can only be satisficed by using heuristics, breaking the system down into modules, and evaluating the solution by its effectiveness.

In the case of complex systems, simplification only increases the complexity of the problem and will not yield a solution.  Instead of simplification, intelligence is needed to understand and explain the system – in other words, it needs to be modelled.  As Einstein already put it – defining the problem is solving it.
Secondly, to model a complex system is to model a process of actions and outcomes.  The process definition consists of three transfer functions – (1) temporal, (2) morphological, and (3) spatial transfer.  In order to make the step from modelling complicated systems to modelling complex systems, some paradigms need to change:

  • Subject –> Process
  • Elements –> Actors
  • Set –> Systems
  • Analysis –> Intelligence
  • Separation –> Conjunction
  • Structure –> Organisation
  • Optimisation –> Suitability
  • Control –> Intelligence
  • Efficacy –> Effectiveness
  • Application –> Projection
  • Evidence –> Relevance
  • Causal explanation –> Understanding

The model itself follows a black-box approach.  For each black box, its function, its ends (objectives), its environment, and its transformations need to be modelled.  Furthermore, the modelling itself understands and explains a system on nine different levels.  A phenomenon is

  1. Identifiable
  2. Active
  3. Regulated
  4. Itself
  5. Behaviour
  6. Stores
  7. Coordinates
  8. Imagines
  9. Finalised

Framed! (Singh, 2006)

January 6th, 2009


Singh, Hari: Framed!; HRD Press, 2006, ISBN: 0874258731

I stumbled upon this book somewhere in the tubes.  I do admit that the idea of combining a fictional narrative with some scientific subtext appealed to me.  Unfortunately for this book, I set the bar to pass at Tom DeMarco’s Deadline.  On the one hand Singh delivers what seems to be his own lecture on decision-making as the alter ego of Professor Armstrong; on the other hand the fictional two-level story of Larry, the first-person narrator, and the crime mystery around Laura’s suicide-turned-murder does not really deliver.  Leaving aside the superficial references to Chicago, which I found rather off-putting, I think a bit more research and getting off the beaten track could have done much good here.  Lastly, I don’t much fancy the narrative-framework-driven style so commonly found in American self-help books – and so brilliantly mocked in Little Miss Sunshine.

Anyhow, let’s focus on the content.  Singh calls his structure for better decision-making FACTNET:

  • Framing/ conceptualising the issue creatively
  • Anchoring/ relying on reference points
  • Cause & effect
  • Tastes for risk preference & role of chance
  • Negotiation & importance of trust
  • Evaluating decisions by a process
  • Tracking relevant feedback

Frame – Identify the problem clearly, be candid about your ignorance, question presumptions, consider a wide set of alternatives
Anchoring – Anchor your evaluations with external reference points and avoid groupthink
Cause & effect – Recognise patterns and cause-effect-relationships, try to regress to the mean, be aware of biases such as the halo effect
Tastes for risk & Role of chance – be aware of compensation behaviour, satisficing behaviour, cognitive dissonance, signaling of risks, gambler’s fallacy, availability bias – all deceptions which negatively impact decision-making
Negotiation & Trust – just two words: Prisoner’s dilemma
Evaluating decisions by a process – Revisit decisions, conduct sensitivity analyses
Tracking relevant feedback – Continuously get feedback & feed-forward, be aware of overlooked feedback, treatment of effects, split up good news and bundle bad news, think about sunk costs, man & machine, and engage in self-examination

Three methods for decision-making are presented in the book – (1) balance sheet methods with applied weighting, (2) WARS = weighting attributes and scores, and (3) scenario strategies.

Lastly, Singh reminded me again of the old motto „Non Sequitur!“ – making me aware of all the logic fallacies that occur if something sounds reasonable but ‚does not really follow‘.

10 Mistakes that Cause SOA to Fail (Kavis, 2008)

January 6th, 2009


Kavis, Mike: 10 Mistakes that Cause SOA to Fail; in: CIO Magazine, 01. October 2008.
I usually don’t care much for these industry journals. But since they arrive for free in my mail every other week, I couldn’t help but notice this article, which gives a brief overview of two SOA cases – United’s new ticketing system and Synovus’ financial banking system replacement.

However, the ten mistakes cited are all too familiar:

  1. Fail to explain SOA’s business value – put BPM first, then the IT implementation
  2. Underestimate the impact of organisational change – create change management plans, follow Kotter’s eight-step process, answer for everyone the question ‚What’s in it for me?‘
  3. Lack strong executive sponsorship
  4. Attempt to do SOA on the cheap – invest in middleware, invest in governance, tools, training, consultants, infrastructure, security
  5. Lack SOA skills in staff – train, plan the resources, secure up-front funding
  6. Manage the project poorly
  7. View SOA as a project instead of an architecture – create your matrices and war room; engage in collaboration
  8. Underestimate SOA’s complexity
  9. Fail to implement and adhere to SOA governance
  10. Let the vendor drive the architecture

Comment function temporarily disabled

November 27th, 2008

Sorry everyone, on this wonderful Thanksgiving day this blog has been flooded with spam in the comments section. I don’t know why Thanksgiving should increase the need for medication for back pains, restless leg syndrome, or the old-time classic e.d.

Decision Making Within Distributed Project Teams (Bourgault et al., 2008)

November 3rd, 2008


Bourgault, Mario; Drouin, Nathalie; Hamel, Émilie: Decision Making Within Distributed Project Teams – An Exploration of Formalization and Autonomy as Determinants of Success; in: Project Management Journal, Vol. 39 (2008), Supplement, pp. S97–S110.
DOI: 10.1002/pmj.20063

Bourgault et al. analyse group decision making in virtual teams. Their article is based on the principles of limited rationality, i.e. deciding is choosing from different alternatives, and responsible choice, i.e. deciding is anticipating outcomes of the decision.

Existing literature controversially discusses the effects of virtualising teams. Some authors argue that virtual teams lack social pressure and thus show a smaller likelihood of escalation-of-commitment behaviour, whilst making more objective and faster decisions. Other authors find no difference in working style between virtual and non-virtual teams. Generally, the literature explains that decision errors are mostly attributable to breakdowns in rationality, which are caused by power and group dynamics. Social pressure in groups also prevents efficiency. In any team with distributed knowledge the leader must coordinate and channel the information flow.

Bourgault et al. conceptualise that Formalisation and Autonomy impact the quality of decision-making, which then influences the team work effectiveness. All this is moderated by the geographic dispersion of the team.
They argue that formalisation, which structures and controls the decision-making activities, helps distributed teams to share information. Autonomy is a source of conflict, for example with higher management due to a lack of understanding and trust; ultimately it weakens a project’s decision-making because it diverts horizontal information flow within the team to vertical information flow between project and management.
Quality of the decision-making process – the authors argue that groups have more information resources and can therefore make better decisions, but this comes at an increased cost of decision-making. Geographically distributed teams lack signals and have difficulties sharing information. Thus high-quality teamwork benefits from more dispersed knowledge, but low-quality teamwork suffers from a lack of hands-on leadership.
Teamwork effectiveness – this construct has mostly been measured using satisfaction measurements and student samples. Other measures are the degree of task completion, goal achievement, and self-efficacy (intent to stay on the team, ability to cope, perceived individual performance, perceived team performance, satisfaction with the team). Bourgault et al. measure teamwork effectiveness by asking for the perceived performance on task completion, goal achievement, information sharing, conflict resolution, problem solving, and creating a preferable and sustainable environment.

The authors‘ quantitative analysis shows that in moderately dispersed teams all direct and indirect effects could be substantiated, with the exception of autonomy’s influence on the quality of decision-making. Similarly, in highly dispersed teams all direct and indirect effects could be proven, except the direct influence of formalisation on teamwork effectiveness.

Bourgault et al. conclude with three recommendations for practice – (1) Distribution of a team contributes to high-quality decisions, although it seems to come at a high cost. (2) Autonomous teams achieve better decisions – „despite the fear of an out-of-sight-out-of-control syndrome“. (3) Formalisation adds value to teamwork, especially the more distributed the team is.

Governance Frameworks for Public Project Development and Estimation (Klakegg et al., 2008)

November 3rd, 2008


Klakegg, Ole Jonny; Williams, Terry; Magnussen, Ole Morten; Glasspool, Helene: Governance Frameworks for Public Project Development and Estimation; in: Project Management Journal, Vol. 39 (2008), Supplement, pp. S27–S42.
DOI: 10.1002/pmj.20058

Klakegg et al. compare different public governance frameworks, particularly the UK’s Ministry of Defense, UK’s Office of Government Commerce, and Norway’s framework. The authors find that „the frameworks have to be politically and administratively well anchored. A case study particularly looking into cost and time illustrates how the framework influences the project through scrutiny. The analysis shows the governance frameworks are important in securing transparency and control and clarifies the role of sponsor“ (p. S27).

Their analysis starts with the question „Who are governance-relevant stakeholders?“. The authors show two different general approaches to public governance stakeholders – Shareholder Value Systems and Communitarian Systems. The Shareholder Value System is based on the principle that only shareholders are legitimate stakeholders – a system used in the US, UK, and Canada. The Communitarian System, on the other hand, is based on the idea that all impacted communities and persons are relevant stakeholders – a system typically found in Norway, Germany, and numerous other countries. A secondary line of thought is the difference between Western and Asian stakeholder ideas, where the Asian idea underlines the concept of family and the Western idea underlines the relationship concept.

To pin down the idea of public project governance the authors draw parallels to corporate governance with its chain of management ↔ board ↔ shareholder ↔ stakeholder. The APM defines project governance as the part of corporate governance that is related to projects, with the aim that sustainable alternatives are chosen and delivered efficiently. Thus the authors define a governance framework as an organised structure, authoritative in the organisation, with processes and rules established to ensure the project meets its purpose.

The reviewed governance frameworks show interesting differences – for example in the control basis, reviewer roles, report formats, supporting organisation, and mode of initiation. The principles they are based on range from management of expectations, to establishing hurdles to cross, to making recommendations. Focus of the reviews can be the business case, outputs, inputs, or used methods.

Protecting Software Development Projects against Underestimation (Miranda & Abran, 2008)

November 3rd, 2008


Miranda, Eduardo; Abran, Alain: Protecting Software Development Projects Against Underestimation; in: Project Management Journal, Vol. 39, No. 3, 75–85.
DOI: 10.1002/pmj.20067

In this article Miranda & Abran argue „that project contingencies should be based on the amount it will take to recover from the underestimation, and not on the amount that would have been required had the project been adequately planned from the beginning, and that these funds should be administered at the portfolio level.“

Thus they propose delay funds instead of contingencies. The amount of such a fund depends on the magnitude of recovery needed (u) and the time of recovery (t).  Both t and u are described using a PERT-like model with a triangular probability distribution, based on best-case, most-likely, and worst-case estimates.
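As a rough sketch of that estimation step (the function and numbers below are my own illustration, not the authors’ exact model): the mean of a triangular distribution is simply the average of its three parameters.

```python
# Triangular-distribution sketch for a delay-fund estimate (illustrative).
# u = magnitude of recovery needed, t = time of recovery; each is modelled as
# a triangular distribution over (best, most likely, worst) estimates.

def triangular_mean(best, most_likely, worst):
    """The mean of a triangular distribution is the average of its three parameters."""
    return (best + most_likely + worst) / 3.0

# Hypothetical figures: person-months of recovery effort and months to recover.
u = triangular_mean(best=2.0, most_likely=5.0, worst=14.0)   # -> 7.0
t = triangular_mean(best=1.0, most_likely=2.0, worst=6.0)    # -> 3.0
print(u, t)
```

Note how the worst-case estimate pulls the mean well above the most-likely value – which is exactly the skew a delay fund has to cover.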

The authors argue that typically three effects occur in software development that lead to an underestimation of contingencies. These three effects are (1) MAIMS behaviour, (2) use of contingencies, (3) delay.
MAIMS stands for ‚money allocated is money spent‘ – which means that cost overruns usually cannot be offset by cost under-runs somewhere else in the project. The second effect is that contingency is mostly used to add resources to the project in order to keep the schedule. Thus contingencies are not used to correct underestimations of the project, i.e. most of the time the plan remains unchanged until all hope is lost. The third effect is that delay is an important cost driver, but delay is acknowledged as late as possible. This is mostly due to wishful thinking and inaction inertia on the project management side.

Tom DeMarco proposed a simple square root formula to express that staff added to a late project makes it even later. In this paper Miranda & Abran break this idea down into several categories to better estimate these effects.

In their model the project runs through three phases after delay occurred:

  1. Time between the actual occurrence of the delay and the moment the delay is decided upon
  2. Additional resources are being ramped-up
  3. Additional resources are fully productive

During this time the whole contingency needed can be broken down into five categories:

  1. Budgeted effort, which would occur anyway, delay or not = FTE * recovery time as originally planned
  2. Overtime effort, i.e. the overtime worked by the original staff after the delay is decided upon
  3. Additional effort by additional staff, with a ramp-up phase
  4. Overtime contributed by the additional staff
  5. Process losses due to ramp-up, coaching, and communication by the original staff to the additional staff

Their model also includes fatigue effects, which reduce the overtime worked on the project as the overtime-is-needed period lengthens. Finally, the authors give a numerical example.
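A back-of-the-envelope sketch of how those five categories might add up (the numbers and per-category formulas are invented for illustration; the paper’s actual model is more elaborate):

```python
# Illustrative sum of the five contingency categories (hypothetical numbers,
# all in person-months; not the authors' actual formulas).

budgeted      = 4 * 3.0          # 4 FTE * 3 months recovery as originally planned
overtime_orig = 4 * 3.0 * 0.15   # 15% overtime by the original staff after the decision
additional    = 2 * 2.5          # 2 extra staff, productive 2.5 of 3 months (ramp-up)
overtime_add  = 2 * 2.5 * 0.10   # 10% overtime by the additional staff
process_loss  = 0.2 * additional # coaching/communication losses charged to original staff

contingency = budgeted + overtime_orig + additional + overtime_add + process_loss
print(round(contingency, 2))
```

Even with these toy numbers the overhead terms (overtime, ramp-up losses) add noticeably to the plain budgeted effort – which is the point of funding the recovery rather than the original plan.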

Managerial complexity in project-based operations – A grounded model and its implications for practice (Maylor et al., 2008)

November 3rd, 2008


Maylor, Harvey; Vidgen, Richard; Carver, Stephen: Managerial complexity in project-based operations – A grounded model and its implications for practice; in: Journal of Project Management, Vol. 39 (2008), No. S1, pp. S15-S26.
DOI: 10.1002/pmj.20057

Maylor et al. investigate the question – What makes a project complex? More specifically, the question asks for the managerial complexity of projects, which is neither the technical nor the environmental complexity that has been studied in depth in research around function point estimation.

The literature review finds several previous approaches to measuring complexity:

  • Number of physical elements and interdependencies (Baccarini, 1996)
  • Structural uncertainty (number of project elements), uncertainty of goals and objectives (Williams, 1999)
  • Static dimension – assembly-system-array (Shenhar, 2001)
  • Organisational complexity, technical novelty, scale complexity (Maylor, 2003)
  • Observer-dependent, time-dependent, problem-dependent projects (Jaafari, 2003)
  • Organisational x technological complexity (Xia & Lee, 2004)
  • Communication and power relationships, ambiguity, change (Cicmil & Marshall, 2005)

The authors then propose the MODeST model with the dimensions of mission, organisation, delivery, stakeholders, and team. In this qualitative, focus-group-based research, the authors break down the dimensions into:

Mission:
– Objectives
– Scale
– Uncertainty
– Constraints

Organisation:
– Time & Space
– Organisational setting

Delivery:
– Process
– Resources

Stakeholders:
– Stakeholder attributes
– Inter-stakeholder relationships

Team:
– Project staff
– Project manager
– Group

This Complexity Measurements Table shows their full set of questions, with the questions struck out that were not mentioned sufficiently often in the focus group discussions.

Governance and support in the sponsoring of projects and programs (Crawford et al., 2008)

November 3rd, 2008


Crawford, Lynn; Cooke-Davies, Terry; Hobbs, Brian; Labuschagne, Les; Remington, Kaye; Chen, Ping: Governance and support in the sponsoring of projects and programs; in: Project Management Journal, Vol. 39 (2008), No. S1, pp. S43-S55.

Sponsoring of projects and programs is increasingly getting attention in project management research. The authors argue that this is due to two factors – (1) recognition of contextual critical success factors and (2) push for corporate governance.
[I personally think that riding the dead horse Sarbox is questionable to say the least, and I can think of many reasons why corporations want some of their projects controlled tightly.]

This article presents findings from a qualitative survey, in which 108 interviews from 36 projects in 9 organisations were collected. Crawford et al. propose a general model of project sponsorship – as they put it: „The conceptual model has significant potential to provide organizations and sponsors with guidance in understanding and defining the effective contextual conduct of the sponsorship role.“

Their general model consists of two dimensions – Need for Governance and Need for Support. In this model each sponsor can find his/her spot in the matrix by assessing what his/her focus of representation is. Sponsors either represent the needs of the permanent organisation (need for governance) or the needs of the temporary organisation (need for support). In the interviews conducted, typical situations were identified which require a shift in emphasis towards one or the other dimension.

When to emphasise governance?
Among the reasons and examples given in the interviews were: the project is high-risk for the parent organisation, the project performs poorly, markets are changing rapidly, governance or regulation calls for increased oversight, the project team behaved illegally or non-compliantly, the project is mission-critical, or the project’s objective is to re-align the company to a new strategy.

When to emphasise support?
Typical situations given were: the parent organisation fails to provide resources, the project faces resistance in the organisation, different stakeholders impose conflicting objectives on the project, there is a lack of decision-making by the parent organisation, the project team is weak or inexperienced, or the project shows early signs of difficulties.

Among the many open research questions not yet addressed are –
What are the essential attributes to effective sponsoring?
Which influence does one or the other strategy have on project success?
Which competencies are required in a sponsor?
What are the factors contributing to effective sponsorship performance?
What does the role of the sponsor in different contexts of programmes/projects/organisations look like?

The Complexity of Self–Complexity: An Associated Systems Theory Approach (Schleicher & McConnell, 2005)

October 28th, 2008


Schleicher, Deidra J.; McConnell, Allen R.: The Complexity of Self–Complexity: An Associated Systems Theory Approach; in: Social Cognition, Vol. 23 (2005), No. 5, pp. 387-416.
doi: 10.1521/soco.2005.23.5.387

In my search for complexity measurements of intangible projects I came across this approach to measure the most complex thing I could think of – our beautiful mind.

In this article Schleicher & McConnell describe the commonly used trait-sorting exercise to measure self-complexity.  Participants are presented with 25-40 traits or roles on cards.  Then they are asked to group them so that they best describe the aspects of their selves.  For example, a participant might group well-dressed, anxious, and mature as traits describing the student aspect of her self.
To measure self-complexity, the redundancy and relatedness of the groupings need to be assessed using the following formula:

H = log2(n) – ( ∑i ni · log2(ni) ) / n
where n = total number of attributes available for sorting, ni = number of attributes in group/self-aspect i, and i indexes the groups/self-aspects
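A tiny computation of H for a hypothetical sorting (the grouping is invented; this follows the formula exactly as given above):

```python
# Self-complexity statistic H from a trait sorting (illustrative).
from math import log2

def self_complexity(group_sizes, n):
    """group_sizes: list of n_i, the number of attributes in each self-aspect;
    n: total number of attributes available for sorting."""
    return log2(n) - sum(n_i * log2(n_i) for n_i in group_sizes) / n

# Hypothetical sorting: 8 attributes available, grouped into aspects of 4, 2, and 2.
h = self_complexity([4, 2, 2], n=8)
print(h)  # -> 1.5
```

More, smaller, less redundant groups push H up – i.e. a more differentiated self-concept scores as more complex.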

Studies have confirmed that participants with higher self-complexity cope better with stress and show better outcomes regarding well-being, physical illness, and depression.

Schleicher & McConnell propose a two dimensional concept of self-complexity – (1) target-reference: concrete vs. abstract, (2) self-reference: public vs. private self.

The matrix spans two axes – Concrete ← target-reference → Abstract, and Public ← self-reference → Private.  It arranges four systems – the visual, verbal, action, and affective systems – with the cell contents: visual appearance, social categories, personality traits, behavioural observations, evaluations, behavioural responses, orientations, and affective responses.

Balance Sheet Analysis

October 28th, 2008


After the monstrous write-up of the Bredillet article on the MAP method, I just wanted to quickly write this down. I did a quick overview of the usual suspects when it comes to balance sheet analysis. There are four major categories: I) Liquidity Ratios, II) Profitability Ratios, III) Financial Leverage Ratios, IV) Efficiency Ratios.

I) Liquidity

  • Working Capital = current assets – current liabilities
  • Acid Test = (cash + marketable securities + accounts receivable) / current liabilities
  • Current Ratio = current assets / current liabilities
  • Cash Ratio = (cash equivalents + marketable securities) / current liabilities

II) Profitability

  • Net Profit Margin = net income / net sales
  • Return on Assets = net income / ((beginning of period + end of period assets)/2)
  • Operating Income Margin = operating income / net sales
  • Gross Profit Margin = gross profit / net sales
  • Return on Equity = net income / equity
  • Return on Investment = net income / (long-term liabilities + equity)

III) Financial Leverage

  • Capitalisation = long-term debt / (long-term debt + owner’s equity)
  • Interest Coverage Ratio = EBIT / interest expense
  • Long-term debt to net working capital = long-term debt / (current assets – current liabilities)
  • Debt to Equity = total debt / total equity
  • Total debts of assets = total liabilities / total assets

IV) Efficiency

  • Cash Turnover = net sales / cash
  • Sales to Working Capital = net sales / average working capital
  • Total Assets Turnover = net sales / average total assets
  • Fixed Assets Turnover = net sales / fixed assets
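As a quick sketch, these ratios translate directly into code (the balance sheet figures below are invented):

```python
# A few of the ratios above as plain functions (illustrative; field names assumed).

def working_capital(current_assets, current_liabilities):
    return current_assets - current_liabilities

def acid_test(cash, marketable_securities, accounts_receivable, current_liabilities):
    return (cash + marketable_securities + accounts_receivable) / current_liabilities

def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

def debt_to_equity(total_debt, total_equity):
    return total_debt / total_equity

# Hypothetical balance sheet figures (in thousands):
print(working_capital(500, 300))     # 200
print(acid_test(50, 100, 150, 300))  # 1.0
print(current_ratio(500, 300))       # ~1.67
print(debt_to_equity(400, 800))      # 0.5
```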

Learning and acting in project situations through a meta-method (MAP) a case study: Contextual and situational approach for project management governance in management education (Bredillet, 2008)

October 28th, 2008


Bredillet, Christophe N.: Learning and acting in project situations through a meta-method (MAP) a case study – Contextual and situational approach for project management governance in management education; in: International Journal of Project Management, Vol. 26 (2008), No. 3, pp. 238-250.

[This is a relatively complex post that follows – the article goes into epistemology quite deep (What is knowledge? How do we acquire it?) without much explanation given by the author. I tried to put together some explanatory background to make the rationale for the article more accessible. If you are just interested in the curriculum Bredillet proposes for learning project management on the job, skip these parts and jump right to the end of the post.]

In this article Bredillet outlines his meta-method used to teach project management. This method’s goal is to provide a framework, in terms of processes and structure, for learning in situ – on projects, programmes, and the like. Bredillet argues that this method is best at accounting for complex, uncertain, and ambiguous environments.

[Skip this part if you’re only interested in the actual application of the method.]

The author starts by reviewing the three dominant project perspectives. a) The Instrumental Perspective, which defines a project as a temporary endeavour to create something. b) The Cognitive Perspective, which defines projects as the exploitation of constraints and human/monetary capital in order to achieve an outcome. c) The Political Perspective, which defines projects as spatial actions which are temporarily limited, thus interacting with their environment. Bredillet argues that project management education does not reflect these perspectives according to their importance in the real world.

Bredillet argues that project management, knowledge creation and production (epistemology) have to integrate classical scientific aspects (Positivism) as well as fuzzy symbolisms (Constructivism). He says: „that the ‚demiurgic‘ characteristic of project management involves seeing this field as an open space, without ‚having‘ (Have) but rather with a raison d’être (Be), because of the construction of Real by the projects“ (p. 240).
Without any prior indulgence in epistemology (‚What is knowledge?‘ – E. von Glasersfeld, Simon, Le Moigne, etc.) this sentence is rather cryptic. What Bredillet wants to achieve is to unify the positivist and constructivist epistemologies. Positivist epistemology can briefly be summarised as our approach to understand the world quantitatively (= have = materialism, with only few degrees of freedom, e.g., best practices, OR, statistical methods). On the other hand, constructivist epistemology tries to understand the world with a qualitative focus (= be = immateriality, with many degrees of freedom, e.g., learning, knowledge management, change management). Bredillet summarises the positivist epistemology citing Comte – „from Science comes Prevision, from Prevision comes Action“ – and the constructivist epistemology according to Le Moigne’s two hypotheses of reference: phenomenological („an existing and knowledgeable reality may be constructed by its observers who are then its constructors“) and teleological („knowledge is what gets us somewhere and that knowledge is constructed with an aim“).

Bredillet then argues that most research follows the positivist approach, valuing explicit over tacit knowledge and individual knowledge over team/organisational knowledge. To practically span the gap between Constructivism and Positivism, Bredillet suggests acknowledging tacit, explicit, team, and individual knowledge as „distinct forms – inseparable and mutually enabling“ (p. 240).

How to unify Constructivism and Positivism in the Learning of Project Management?
Practically, he explores common concepts from both standpoints, the positivist and the constructivist; for instance, Bredillet describes concepts of organisational learning using the single-loop model (Positivism) vs. the double-loop model and system dynamics theory (Constructivism). Secondly, Bredillet stresses that learning and praxis are integrated, which is what the MAP method is all about:

„The MAP method provides structure and process for analysing, solving and governance of macro, meso, and micro projects. It is founded on the interaction between decision-makers, project team, and various stakeholders.“ (p. 240)

The three theoretical roots of the MAP method are (1) Praxeological epistemology, (2) N-Learning vs. S-Learning, and (3) the Theory of Convention. The novelty of the MAP method is that it

  • Recognises the co-evolution of actor and his/her environment,
  • Enables integrated learning,
  • Aims at generating a convention (rules of decision) to cope with the uncertainty and complexity in projects.

Ad (1): The basic premises of Praxeological epistemology [in Economics], taken from Block (1973), are:

  • Human action can only be undertaken by individual actors
  • Action necessarily requires a desired end and a technological plan
  • Human action necessarily aims at improving the future
  • Human action necessarily involves a choice among competing ends
  • All means are necessarily scarce
  • The actor must rank his alternative ends
  • Choices continually change, both because of changed ends as well as means
  • Labour power and nature logically predate, and were used to form, capital
  • Technological knowledge is a factor of production

Ad (2): At first I didn’t know whether N-Learning in this context stands for nano-learning (constantly feeding mini chunks of learning on the job) or networked learning (learning from each other over the Internet – blogs, wikis, mail, etc.), nor could I find a proper definition of S-Learning – generally it seems to stand for supervised learning, which most commonly takes place when training neural networks, and sometimes on the job.
Later on in the article Bredillet clarifies the lingo: N-Learning = Neoclassical Learning = knowledge is cumulative; and S-Learning = Schumpeterian Learning = creative gales of destruction.

Ad (3): Convention Theory (as explained in this paper) debunks the notion that price is the best coordination mechanism in the economy. It states that there are collective coordination mechanisms, not only bilateral contracts whose contingencies can be foreseen and written down.
Furthermore, Convention Theory rejects the Substantive Rationality of actors, assuming instead radical uncertainty (no one knows the probability of future events) and reflexive reasoning (‚I know that you know, that I know‘). Thus Convention Theory assumes Procedural Rationality of actors – actors judge by rational decision processes & rules and not by the rational outcome of decisions.
These rules or conventions for decision-making are sought by actors in the market. Moreover, the theory states that

  • Through conventions knowledge can be economised (e.g., mimicking the behaviour of other market participants);
  • Conventions are self-organising tools, relying on confidence in the convention;
  • Four types of coordination exist – market, industry, domestic, civic

[Start reading again if you’re just interested in the application of the method.]

In the article Bredillet then continues to discuss the elements of the MAP meta model:

  • Project situations (entrepreneurial = generating a new position, advantage) vs. operations situations (= exploiting existing position, advantage)
  • Organisational ecosystem [as depicted on the right of my drawing]
  • Learning dynamics and praxis, with the three cornerstones of knowledge management, organisational learning, and learning organisation

Thus learning in this complex, dynamic ecosystem with its different foci of learning should have three goals – (1) individual learning, e.g., acquiring the PRINCE2/PMP methodology; (2) team learning, e.g., acquiring team conventions; and (3) organisational learning, e.g., acquiring a new competitive position.

The MAP model itself draws on several project management theories and concepts [the theories are depicted on the left side of my drawing]; the concepts included are

  • Strategic Management
  • Risk Management
  • Programme Management
  • Prospective Analysis
  • Projects vs. Operations
  • Ecosystem project/context
  • Trajectory of projects/lifecycles
  • Knowledge Management – processes & objects; and individual & organisational level
  • Systems thinking, dynamics
  • Organisational design
  • Systems engineering
  • Modelling, object language, systems man model
  • Applied sciences
  • Organisational Learning (single loop vs. double loop, contingency theory, psychology, information theory, systems dynamics)
  • Individual learning – dimensions (knowledge, attitudes, aptitudes) and processes (practical, emotional, cognitive)
  • Group and team learning, communities of practice
  • Leadership, competences, interpersonal aspects
  • Performance management – BSC, intellectual capital, intangible assets, performance assessments, TQM, standardisation

The praxeology of these can be broken down into three steps, each with its own set of tools:

  1. System design – social system design (stakeholder analysis, interactions matrices), technical system design (logical framework, e.g., WBS matrix, and logical system tree)
  2. System analysis – risk analysis (technical/social risk analysis/mapping), scenario analysis (stakeholder variables & zones)
  3. System management – scheduling, organisation & planning, strategic control
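The three praxeological steps and their tool sets can be sketched as a simple lookup table. This is purely my own illustration of the structure above – the step and tool names come from the article, but the data structure and the helper function are hypothetical, not part of the MAP method itself:

```python
# Illustrative sketch only: the three praxeological steps of the MAP
# method and their associated tool sets, modelled as a lookup table.
MAP_PRAXEOLOGY = {
    "system design": {
        "social": ["stakeholder analysis", "interaction matrices"],
        "technical": ["logical framework (WBS matrix)", "logical system tree"],
    },
    "system analysis": {
        "risk": ["technical/social risk analysis and mapping"],
        "scenario": ["stakeholder variables & zones"],
    },
    "system management": {
        "execution": ["scheduling", "organisation & planning", "strategic control"],
    },
}

def tools_for(step: str) -> list[str]:
    """Return the flat list of tools associated with a praxeological step."""
    return [tool for tools in MAP_PRAXEOLOGY[step].values() for tool in tools]
```

For example, `tools_for("system design")` flattens both the social and the technical tool sets of the design step into one list.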

As such, Bredillet describes the MAP method trajectory as

  1. Strategic choice with a) conception, b) formulation
  2. Tactical alternatives with a) alternatives analysis and evaluation, b) decision
  3. Realisation with a) implementation, b) reports and feedback, c) transition into operations, d) post-audit review

In practice the learning takes place in the form of simulations, where real-life complex situations have to be solved using the various concepts, methods, tools, and techniques (quantitative and psycho-sociological) included in the MAP method. To close the reflective learning loop, two meta-reports have to be written at the end – one on the use of methods and team work, and one on how the learning is transferred to the workplace. Bredillet says that with this method his students developed case studies, scenario analyses, corporate strategy evaluations, and tools for strategic control.

Project portfolio management – There’s more to it than what management enacts (Blichfeldt & Eskerod, 2008)

October 23rd, 2008


Blichfeldt, Bodil Stilling; Eskerod, Pernille: Project portfolio management – There’s more to it than what management enacts; in: International Journal of Project Management, Vol. 26 (2008), No. 4, pp. 357-365.

Project Portfolio Management in Theory consists of

  • Initial screening, selection, and prioritisation of proposed projects
  • Concurrent re-prioritisation
  • Allocation and re-allocation of resources

These activities are, in themselves, value-neutral. Blichfeldt & Eskerod analyse the reality of project portfolio management to find out whether it actually does any good to the organisations it is used in.

In reality they find that project portfolio management is merely a battle for resources and that portfolios consist of way too many projects to be practically manageable. They find two distinct categories of projects in a portfolio – (1) enacted projects and (2) hidden projects.

Among the enacted projects are typically the new product developments – the classic projects, trimmed for the successful launch of a new cash cow. But this enacted category also contains the larger renewal projects. These are usually not directly linked to the demand side; their primary aim is to enhance internal activities rather than customer value, and some of them cut across departments. Overall, the large renewal projects are not as well managed as the product development projects – teams lack experience, the projects have low priority, and they lack structure, reviews, etc.

The second category are the hidden projects – usually bottom-up initiatives which departments or even single persons start during their work hours, or in specifically allocated time, to pursue innovative projects of their own interest.

Blichfeldt & Eskerod recommend enacting more projects: managing the larger renewal projects in a more structured way and including the hidden projects in the portfolio – if they drain resources, they must be managed. To avoid destroying the creativity and innovation that usually come from these grass-roots projects, organisations should allocate resources to a pool of loosely controlled resources, from which unenacted projects are allowed to draw with minimal administrative burden.

The PM_BOK Code (Whitty & Schulz, 2006)

October 23rd, 2008


Whitty, S. J.; Schulz, M. F.: The PM_BOK Code; in: The Proceedings of the 20th IPMA World Congress on Project Management, Vol. 1 (2006), pp. 466-472.

The bold claim of this article is that project management is more about appearance than productivity.
Whitty & Schulz argue that our hard-wiring for memes and Western culture have turned project management (in its particular representation, the PMI’s PMBOK) into a travesty.
Western culture is synonymous with the spirit of capitalism combined with the meme of the corporation, which has been dissected many times, most notably by Achbar, Abbott & Bakan.

The authors compare the everyday madness of projects to nothing less than theatre – keeping up appearances. They draw parallels to the theatrical stage – think meeting rooms and offices; costumes – think dark suits or funny t-shirts; scripts – think charts and status reports; props – think PowerPoint; and the audience – think co-workers and managers. Whitty & Schulz argue that the big show we put on every day is meant to make us appear in control and successful.

Project management is the ideal way to represent Western culture – being flexible, ready for change, constantly exploiting new opportunities.
On the flip side, the authors argue, project management kills creativity and democracy. It fractionalises the workforce, thus driving down productivity.

The way out of this predicament is to „reform […] the PMBOK® Guide version of PM in a way that relieves practitioners from performativity, and opens project work up to more creative and democratic processes“ (p. 471).

Making a difference? Evaluating an innovative approach to the project management Centre of Excellence in a UK government department (O’Leary & Williams, 2008)

October 23rd, 2008


O’Leary, Tim; Williams, Terry: Making a difference? Evaluating an innovative approach to the project management Centre of Excellence in a UK government department; in: International Journal of Project Management, Vol. 26 (2008), No. 5, pp. 556-565.

The UK has rolled out an ambitious programme of setting up IT Centres of Excellence in all its departments. The focal point of these Centres of Excellence are the Programme Offices.

The role of these Programme Offices has been defined as reporting, recovering & standardising.
Their objectives are monitoring and reporting the status of the IT initiatives in the department and implementing a structured life-cycle methodology, which ties in with a stage-gate framework that needs to be introduced. Additionally, hit teams of delivery managers have been set up to turn around ailing projects.

O’Leary and Williams find that the interventions seem to work successfully, whereas the reporting and standardisation objectives have yet to be fulfilled. The authors then analyse the root causes of this success, finding that it was based on:

  • Administrative control of department’s IT budget
  • Leadership of IT director
  • Exploitation of project management rhetoric
  • Quality of delivery managers

Building knowledge in projects – A practical application of social constructivism to information systems development (Jackson & Klobas, 2008)

October 23rd, 2008


Jackson, Paul; Klobas, Jane: Building knowledge in projects – A practical application of social constructivism to information systems development; in: International Journal of Project Management, Vol. 26 (2008), No. 4, pp. 329-337.

Jackson & Klobas describe the constructivist model of knowledge sharing and thus of organisational learning. This classical model describes knowledge sharing in organisations as a constant cycle of

  • Creating personal knowledge
  • Sharing newly created personal knowledge = Externalisation
  • Communicating knowledge = Internalisation
  • Acquiring other people’s knowledge = Learning

This cycle includes the facilitating steps of Objectivation (= creating organisational knowledge), Legitimation (= authorising knowledge), and Reification (= hardening knowledge) between externalisation and internalisation.
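The full cycle, including the facilitating steps, can be sketched as a circular sequence of phases. The phase names come from the article; the code itself is my own illustration of the cycle’s ordering, not the authors’ model:

```python
# Illustrative sketch only: the constructivist knowledge-sharing cycle
# (Jackson & Klobas) as a circular sequence of phases.
CYCLE = [
    "create personal knowledge",
    "externalisation",   # sharing newly created personal knowledge
    "objectivation",     # facilitating: creating organisational knowledge
    "legitimation",      # facilitating: authorising knowledge
    "reification",       # facilitating: hardening knowledge
    "internalisation",   # communicating knowledge
    "learning",          # acquiring other people's knowledge
]

def next_phase(phase: str) -> str:
    """Return the phase that follows `phase`, wrapping around the cycle."""
    i = CYCLE.index(phase)
    return CYCLE[(i + 1) % len(CYCLE)]
```

The wrap-around in `next_phase` makes the model’s point explicit: learning feeds back into the creation of new personal knowledge, so the cycle never terminates.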

Jackson & Klobas argue that IT project failure can be explained using this model. The authors outline and discuss three failure factors – (1) lack of personal knowledge, (2) inability to externalise knowledge, and (3) lack of communication.