Archive for the ‘Software Development’ Category

The influence of checklists and roles on software practitioner risk perception and decision-making (Keil et al., 2008)

Friday, April 24th, 2009


Keil, Mark; Li, Lei; Mathiassen, Lars; Zheng, Guangzhi: The influence of checklists and roles on software practitioner risk perception and decision-making; in: Journal of Systems and Software, Vol. 81 (2008), No. 6, pp. 908-919.
http://dx.doi.org/10.1016/j.jss.2007.07.035 

In this paper the authors present the results of 128 role plays conducted with software practitioners.  The role plays analysed the influence of checklists on risk perception and decision-making.  The authors also controlled for the role of the participant, i.e., whether he/she acted as an insider (project manager) or an outsider (consultant).  They found that the role had no effect on the risks identified.

Keil et al. created a risk checklist based on the software risk model first conceptualised by Wallace et al. This model distinguishes 6 different risks – (1) Team, (2) Organisational environment, (3) Requirements, (4) Planning and control, (5) User, (6) Complexity.
In their role plays the authors found that checklists have a significant influence on the number of risks identified.  However, the number of risks identified does not influence decision-making.  Decision-making is influenced by whether the participants have identified certain key risks or not.  Thus risk checklists can influence the salience of risks, i.e., whether they are perceived at all, but they do not influence decision-making itself.
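To make the distinction concrete, here is a minimal sketch of the finding's logic – the risk names and the choice of key risks are hypothetical, not taken from the paper:

```python
# Hypothetical illustration: the raw count of identified risks does not
# drive the decision, but the presence of certain key risks does.
KEY_RISKS = {"unclear requirements", "lack of management support"}  # assumed examples

def continue_project(identified_risks: set[str]) -> bool:
    """Decide to continue only if no key risk has been identified."""
    return not (identified_risks & KEY_RISKS)

# A checklist surfaces more risks (salience) ...
with_checklist = {"staff turnover", "tight coupling", "unclear requirements"}
without_checklist = {"staff turnover"}

# ... but the decision flips only because a key risk is among them.
print(continue_project(with_checklist))     # False
print(continue_project(without_checklist))  # True
```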

Managing the development of large software systems (Royce, 1970)

Thursday, April 23rd, 2009


Royce, W. Winston: Managing the development of large software systems; in: Proceedings of IEEE Wescon (1970), pp. 328-338.
http://leadinganswers.typepad.com/leading_answers/files/original_waterfall_paper_winston_royce.pdf

It’s never too late to start reading a classic, and this certainly is one: the original paper proposing the waterfall software development model.  The model is now extremely commonplace – but, and this is what struck me as odd, the original shows a large number of feedback loops which are typically omitted.

The steps of the original waterfall are as follows:

  1. System Requirements
  2. Software Requirements
  3. Preliminary Program Design which includes the preliminary software review
  4. Analysis
  5. Program Design which includes several critical software reviews
  6. Coding
  7. Testing which includes the final software review
  8. Operations

Among the interesting loops in this model are the big feedback from testing into program design and from program design into software requirements.  By no means is this model what we commonly assume a waterfall process to be – there are no frozen requirements, no clear-cut steps without any looking back.  This is much more RUP or agile, or whatever you want to call it, than the waterfall model I have in my head.
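A small sketch of the stage graph, with the forward steps from the list above plus the two feedback loops just mentioned (the stage names follow the paper; treating it as a plain edge list is my simplification):

```python
# Royce's model as a directed graph: forward steps plus the feedback
# loops that are usually dropped when the waterfall is retold.
STAGES = [
    "System Requirements", "Software Requirements", "Preliminary Program Design",
    "Analysis", "Program Design", "Coding", "Testing", "Operations",
]

# Forward edges between consecutive stages.
edges = list(zip(STAGES, STAGES[1:]))

# The feedback loops highlighted above.
edges += [
    ("Testing", "Program Design"),
    ("Program Design", "Software Requirements"),
]

def predecessors(stage: str) -> list[str]:
    """All stages that feed (or feed back) into the given stage."""
    return [a for a, b in edges if b == stage]

print(predecessors("Program Design"))  # ['Analysis', 'Testing']
```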

The Definite List of Project Management Methodologies

Friday, January 16th, 2009

This is a gem.
Craig Brown from the 'Better Projects' blog (here) created a presentation on Jurgen Appelo’s Definite List of Project Management Methodologies.  Jurgen first published his list on his blog over at noop.nl and has now moved it into a Google Knol here.  Craig turned it into a great tongue-in-cheek presentation.  I very much enjoyed it, so here it is:

Software Project Methods


Understanding software project risk – a cluster analysis (Wallace et al., 2004)

Wednesday, January 14th, 2009


Wallace, Linda; Keil, Mark; Rai, Arun: Understanding software project risk – a cluster analysis; in: Information & Management, Vol. 42 (2004), pp. 115–125.

Wallace et al. conducted a survey among 507 software project managers worldwide.  They tested a vast set of risks and grouped the surveyed projects into three clusters: high, medium, and low risk projects.

The authors assumed 6 dimensions of software project risk –

  1. Team risk – turnover of staff, ramp-up time; lack of knowledge, cooperation, and motivation
  2. Organisational environment risk – politics, stability of organisation, management support
  3. Requirement risk – changes in requirements, incorrect and unclear requirements, and ambiguity
  4. Planning and control risk – unrealistic budgets, schedules; lack of visible milestones
  5. User risk – lack of user involvement, resistance by users
  6. Complexity risk – new technology, automating complex processes, tight coupling

Wallace et al. presented two interesting findings.  Firstly, the overall project risk is directly correlated with project performance – the higher the risk, the lower the performance!  Secondly, they found that even low risk projects carry a high complexity risk.
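A toy sketch of the clustering idea – the scores, the 1-to-5 scale, and the mean-score thresholds are invented for illustration; the paper used a proper cluster analysis:

```python
# Projects scored on the six risk dimensions, grouped into high/medium/low.
DIMENSIONS = ["team", "organisational", "requirement", "planning", "user", "complexity"]

projects = {
    "A": [4.1, 3.8, 4.5, 4.0, 3.9, 4.2],  # hypothetical 1-5 scores
    "B": [2.5, 2.2, 3.0, 2.8, 2.4, 3.9],
    "C": [1.4, 1.2, 1.8, 1.5, 1.3, 3.6],  # complexity stays high even here
}

def risk_cluster(scores: list[float]) -> str:
    mean = sum(scores) / len(scores)
    return "high" if mean >= 3.5 else "medium" if mean >= 2.5 else "low"

for name, scores in projects.items():
    complexity = scores[DIMENSIONS.index("complexity")]
    print(name, risk_cluster(scores), "complexity:", complexity)
# Note how even the low-risk project C keeps a high complexity score.
```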

10 Mistakes that Cause SOA to Fail (Kavis, 2008)

Tuesday, January 6th, 2009


Kavis, Mike: 10 Mistakes that Cause SOA to Fail; in: CIO Magazine, 1 October 2008.
I usually don’t care much about these industry journals. But since they arrive for free in my mail every other week, I couldn’t help noticing this article, which gives a brief overview of two SOA cases – United’s new ticketing system and Synovus Financial’s banking system replacement.

However, the ten mistakes cited are all too familiar:

  1. Fail to explain SOA’s business value – put BPM first, then the IT implementation
  2. Underestimate the impact of organisational change – create change management plans, follow Kotter’s eight-step process, answer for everyone the question 'What’s in it for me?'
  3. Lack strong executive sponsorship
  4. Attempt to do SOA on the cheap – invest in middleware, invest in governance, tools, training, consultants, infrastructure, security
  5. Lack SOA skills in staff – train, plan the resources, secure up-front funding
  6. Manage the project poorly
  7. View SOA as a project instead of an architecture – create your matrices, war-room; engage collaboration
  8. Underestimate SOA’s complexity
  9. Fail to implement and adhere to SOA governance
  10. Let the vendor drive the architecture

Protecting Software Development Projects against Underestimation (Miranda & Abran, 2008)

Monday, November 3rd, 2008


Miranda, Eduardo; Abran, Alain: Protecting Software Development Projects Against Underestimation; in: Project Management Journal, Vol. 39 (2008), No. 3, pp. 75-85.
DOI: 10.1002/pmj.20067

In this article Miranda & Abran argue "that project contingencies should be based on the amount it will take to recover from the underestimation, and not on the amount that would have been required had the project been adequately planned from the beginning, and that these funds should be administered at the portfolio level."

Thus they propose delay funds instead of contingencies. The size of such a fund depends on the magnitude of the recovery needed (u) and the time of the recovery (t).  Both t and u are described using a PERT-like model with a triangular probability distribution, based on best, most-likely, and worst case estimates.
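A minimal Monte Carlo sketch of the sizing idea, not the paper's exact model – here u is read as extra staff needed and t as recovery months, and all numbers are invented:

```python
import random

def delay_fund(u_est, t_est, monthly_rate, confidence=0.9, n=100_000):
    """Fund size covering `confidence` of simulated recovery costs.

    u_est and t_est are triples matching the (low, high, mode)
    signature of random.triangular."""
    costs = sorted(
        random.triangular(*u_est) * random.triangular(*t_est) * monthly_rate
        for _ in range(n)
    )
    return costs[int(confidence * n)]

# u: 1-6 extra FTEs (most likely 2); t: 1-5 months (most likely 2).
print(delay_fund(u_est=(1, 6, 2), t_est=(1, 5, 2), monthly_rate=15_000))
```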

The authors argue that three effects typically occur in software development and lead to an underestimation of contingencies: (1) MAIMS behaviour, (2) the way contingencies are used, and (3) delay.
MAIMS stands for 'money allocated is money spent', meaning that cost overruns usually cannot be offset by cost under-runs elsewhere in the project. The second effect is that contingency is mostly used to add resources to the project in order to keep the schedule; contingencies are not used to correct underestimations of the plan, i.e., most of the time the plan remains unchanged until all hope is lost. The third effect is that delay is an important cost driver, yet delay is acknowledged as late as possible, mostly due to wishful thinking and inaction inertia on the project management side.

Tom DeMarco proposed a simple square-root formula to express that staff added to a late project makes it later still. In this paper Miranda & Abran break this idea down into several categories to estimate these effects more precisely.

In their model the project runs through three phases after a delay has occurred:

  1. Time between the actual occurrence of the delay and when the delay is decided upon
  2. Additional resources are being ramped-up
  3. Additional resources are fully productive

Across these phases the total contingency needed can be broken down into five categories:

  1. Budgeted effort, which would occur with or without the delay = FTE * recovery time as originally planned
  2. Overtime effort, i.e., the overtime worked by the original staff after the delay is decided upon
  3. Additional effort by additional staff, with a ramp-up phase
  4. Overtime contributed by the additional staff
  5. Process losses due to ramp-up, coaching, and communication by the original staff towards the additional staff

Their model also includes fatigue effects, which reduce the effective overtime worked as the overtime period stretches on. Finally the authors give a numerical example.
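A rough sketch of that five-part breakdown – the formulas are simplified placeholders (e.g. half productivity during ramp-up), not Miranda & Abran's actual equations, and the numbers are invented:

```python
def recovery_effort(fte, recovery_months, overtime_pct, extra_fte,
                    ramp_up_months, coaching_pct):
    """Total recovery effort in person-months, split into the five categories."""
    budgeted = fte * recovery_months                       # 1. planned effort
    overtime = budgeted * overtime_pct                     # 2. original staff overtime
    # 3. additional staff: assume half productivity while ramping up
    added = extra_fte * (0.5 * ramp_up_months + (recovery_months - ramp_up_months))
    added_overtime = added * overtime_pct                  # 4. their overtime
    losses = fte * ramp_up_months * coaching_pct           # 5. coaching/communication drain
    return budgeted + overtime + added + added_overtime + losses

# 5 FTEs, 3-month recovery, 20% overtime, 2 extra staff, 1-month ramp-up,
# coaching eats 25% of the original team during ramp-up.
print(recovery_effort(5, 3, 0.20, 2, 1, 0.25))  # 25.25 person-months
```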

The mechanisms of project management of software development (McBride, 2008)

Monday, September 22nd, 2008


McBride, Tom: The mechanisms of project management of software development; in: Journal of Systems and Software, Article in Press.
http://dx.doi.org/10.1016/j.jss.2008.06.015

McBride covers three kinds of tools and techniques for the management of software development: firstly the monitoring, secondly the control, and thirdly the coordination mechanisms used in software development.

The author distinguishes four categories of monitoring tools: automatic, formal, ad hoc, and informal. The most commonly used tools are schedule tracking, team meetings, status reports, management reviews, drill-downs, and conversations with the team and the customers.

The control mechanisms are categorised by their organisational form of control as either output, behaviour, input, or clan control. The most often used control mechanisms are budget/schedule/functionality control, formal processes, the project plan, team co-location, and informal communities.

Lastly, the coordination mechanisms are grouped by the way they try to coordinate the teams: standards, plans, and formal and informal coordination mechanisms. The most common are specifications, schedules, test plans, team meetings, ad hoc meetings, co-location, and personal conversations.
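Restated as a small data structure for easier lookup – the category labels follow the paper as summarised above, but the assignment of individual mechanisms to sub-categories is my own guess, not McBride's exact mapping:

```python
# Monitoring, control, and coordination mechanisms by category
# (sub-category assignments are illustrative guesses).
MECHANISMS = {
    "monitoring": {
        "automatic": ["schedule tracking"],
        "formal": ["status reports", "management reviews"],
        "ad hoc": ["drill-downs"],
        "informal": ["team meetings", "conversations with team and customers"],
    },
    "control": {
        "output": ["budget/schedule/functionality control"],
        "behaviour": ["formal processes", "project plan"],
        "input": ["team co-location"],
        "clan": ["informal communities"],
    },
    "coordination": {
        "standards": ["specifications"],
        "plans": ["schedule", "test plans"],
        "formal": ["team meetings"],
        "informal": ["ad hoc meetings", "co-location", "personal conversations"],
    },
}

# Example lookup: which informal coordination mechanisms are in use?
print(MECHANISMS["coordination"]["informal"])
```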

A Framework for the Life Cycle Management of Information Technology Projects – ProjectIT (Stewart, 2008)

Thursday, July 17th, 2008


Stewart, Rodney A.: A Framework for the Life Cycle Management of Information Technology Projects – ProjectIT; in: International Journal of Project Management, Vol. 26 (2008), pp. 203-212.

Stewart outlines a framework of management tasks which span the whole life cycle of a project. The life cycle consists of 3 phases – selection (called "SelectIT"), implementation (called "ImplementIT"), and close-out (called "EvaluateIT").

The first phase’s main goal is to single out the projects worth doing. The project manager therefore evaluates costs & benefits (tangible monetary factors) and value & risks (intangible factors). In order to evaluate these, the project manager needs to define a probability function for each of these factors; these distribution functions are then aggregated. In this step Stewart also suggests using the Analytical Hierarchy Process (AHP) and the Vertex method [which I am not familiar with, and neither is Wikipedia nor the general internet]. Afterwards the scores for each project are calculated and the projects are ranked accordingly.
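A much-simplified sketch of that selection step – three-point estimates aggregated by Monte Carlo into a single score per project, then ranked. Stewart's actual procedure uses AHP and the Vertex method; this scoring, and all the figures, are stand-ins:

```python
import random

def score(project, n=10_000):
    """Mean simulated net attractiveness (benefit minus cost) of one project."""
    return sum(
        random.triangular(*project["benefit"]) - random.triangular(*project["cost"])
        for _ in range(n)
    ) / n

# (low, high, most-likely) estimates in arbitrary money units,
# matching random.triangular's (low, high, mode) signature.
projects = {
    "CRM rollout": {"benefit": (200, 500, 320), "cost": (150, 400, 220)},
    "Data warehouse": {"benefit": (100, 600, 250), "cost": (120, 300, 180)},
}

ranked = sorted(((score(p), name) for name, p in projects.items()), reverse=True)
for s, name in ranked:
    print(name, round(s, 1))
```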

The second phase is merely a controlling view on the IT project implementation. According to Stewart you should conduct SWOT analyses, come up with an IT diffusion strategy, design the operational strategy, draw up action plans on how to implement IT, and finally create a monitoring plan.

The third stage ("EvaluateIT") advocates the use of an IT Balanced Score Card with 5 different perspectives – (1) Operations, (2) Benefits, (3) User, (4) Strategic competitiveness, and (5) Technology/System. In order to establish the Balanced Score Card, measures for each perspective need to be defined first, then weighted, then applied and measured. The next step is to develop a utility function; finally, overall IT performance can be monitored and improvements can be tracked.
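A minimal weighted-scorecard sketch for this stage – the five perspectives are from the paper, while the weights, scores, and the linear utility function are invented:

```python
# perspective: (weight, measured score on a 0-100 scale)
PERSPECTIVES = {
    "operations": (0.25, 72),
    "benefits": (0.25, 64),
    "user": (0.20, 80),
    "strategic competitiveness": (0.15, 55),
    "technology/system": (0.15, 68),
}

def it_performance(card: dict) -> float:
    """Overall IT performance as the weighted sum of perspective scores."""
    assert abs(sum(w for w, _ in card.values()) - 1.0) < 1e-9  # weights sum to 1
    return sum(w * s for w, s in card.values())

print(it_performance(PERSPECTIVES))  # 68.45 – track this number over time
```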

Why does Software cost so much? (DeMarco, Tom 1995)

Wednesday, July 2nd, 2008


Let’s start with a real classic: Tom DeMarco’s "Why does software cost so much? And other puzzles of the information age" (http://www.amazon.com/Why-Does-Software-Cost-Much/dp/093263334X).

Well, it is a bit aged, but given the projects I have seen it is far from being outdated. So what is his answer? It’s Peopleware, not software: people have to function in their roles, and sometimes they don’t.

DeMarco lists as root causes: scheduling errors ("The schedule is crap when even high performers have no slack"), missing accountability by management ("I don’t ask for an estimate, I ask for a promise!"), missing prioritization ("All these recommendations for improving ourselves are great. But what if only one thing succeeds? What would it be?"), and the general tendency to 'fuck up' the end-game (i.e. value capturing after implementation).

And of course there is DeMarco’s specialty – software development metrics. He adds the nice insight that measuring something without a clear idea of how to improve on that metric is a waste of time and money. It might be worthwhile to sample business case points etc. for a while, but in the long run only defect counts should be institutionalized.