It goes without saying that the correct use of statistics (or any other "science") can be helpful in project management. And, today, project simulation software, with plug-ins that integrate statistical tools seamlessly with project data, makes large-scale project modeling seem relatively easy. But that can be either a blessing or a problem: a blessing when you use this technology to build into your model the true statistical nature of the processes that will be at play in your next project, or a problem when you, as so many do, incorrectly presume to know that underlying statistical nature. What you can know with certainty is the statistical nature of past projects. And, to the extent that your next project is like them, new statistical estimates based on this historical data make a good deal of sense. But, all too often, your upcoming project will have a brand-new distribution of possible outcomes, because the nature of its tasks is different, the human and other resources are different, and so forth. Before cautioning you further about some of the pitfalls of using statistical tools, I want to say that I am an advocate of using them and, even when their use is not indicated, of thinking in statistical terms about project management.
Figure 3: @Risk for Microsoft Project used to simulate the statistical nature of work, cost, and so on.
The best-known project simulation technique is the Monte Carlo simulation. But, just as game theory (another advanced mathematical methodology, one used by some of the top poker players) can't guarantee success at cards, Monte Carlo simulation can't guarantee that your project will be finished under budget, ahead of schedule, or with the highest possible quality.
In Monte Carlo simulation, for tasks that are hard to predict, you specify the lower and upper limits of an estimate and choose a probability curve between those limits (refer to Figure 3). Some tasks are renowned for the wide range between their lower and upper limits; debugging code in a software development project is a classic example.
The simulation software generates estimates for all the tasks using these parameters. It uses random-number generators that produce estimates conforming to the range and the distribution curve you've chosen for each estimate. Then, the Monte Carlo simulator creates the first version of the entire schedule and calculates the critical path. The software does this repeatedly and finally calculates the average project duration and the probabilities for a range of finish dates. Naturally, Monte Carlo simulation can be applied to other sets of random data, such as the time of arrival of materials or funding. See Reference 3 for further details.
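To make the mechanics concrete, here is a minimal sketch in Python of the process just described. Everything in it is illustrative: the task names, the three-point estimates, and the choice of a triangular distribution are invented, and the schedule is a simple serial chain rather than a real network with a full critical-path calculation.

```python
import random

# Hypothetical three-task schedule; estimates are (min, most likely, max)
# durations in days. "debug" has the notoriously wide range.
TASKS = {
    "design": (5, 8, 15),
    "build":  (10, 12, 20),
    "debug":  (2, 5, 30),
}

def simulate_once():
    """Sample each task from a triangular distribution; tasks run in series."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in TASKS.values())

def monte_carlo(trials=10_000, deadline=35):
    """Run many trials; return average duration and P(finish by deadline)."""
    durations = [simulate_once() for _ in range(trials)]
    avg = sum(durations) / trials
    p_on_time = sum(d <= deadline for d in durations) / trials
    return avg, p_on_time

if __name__ == "__main__":
    avg, p = monte_carlo()
    print(f"Average duration: {avg:.1f} days")
    print(f"P(finish within 35 days): {p:.0%}")
```

Commercial tools do the same thing at scale: many more tasks, a real precedence network, and a menu of distribution families beyond the triangular one used here.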
As already stated, one problem with this process is that it depends on your choosing the correct historical data for predicting future events. And, nothing in your software can account for learning curves—the time required for a person to become effective when applying a new skill. You can, of course, choose the level of probability you feel comfortable with, and derive the project costs, finish date, and so forth, or you can let management choose these probabilities for you. But, humans are notorious for coming up with estimates that are self-serving or self-deluding.
These shortcomings notwithstanding, tools such as @Risk for Microsoft Project from Palisade, Inc. also can answer questions like "What is the probability we will finish on schedule?" using detailed statistics from a simulation. @Risk can give you a Critical Indices report on the probability that a task will fall on the critical path, as well. This can identify critical tasks that Microsoft Project alone would miss! Sensitivity and scenario analyses identify the factors and situations that could seriously impact your project. These statistical analyses can help you answer questions such as "What are the key factors that could make our project go over budget or past deadline?"
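The idea behind a criticality index can be sketched with a toy network. The tasks, estimates, and structure below are invented for illustration: after "spec," two tasks run in parallel, and each simulation trial records which branch ended up on the longest (critical) path. A single deterministic pass would flag only one branch as critical; the simulation shows how often each branch actually turns out to be.

```python
import random

# Illustrative network: "code" and "docs" run in parallel after "spec".
# Estimates are (min, most likely, max) days -- made-up numbers.
ESTIMATES = {
    "spec": (3, 5, 9),
    "code": (8, 12, 25),
    "docs": (6, 10, 14),
}

def criticality_indices(trials=10_000):
    """Fraction of trials in which each task lies on the longest path."""
    critical_counts = {name: 0 for name in ESTIMATES}
    for _ in range(trials):
        d = {n: random.triangular(lo, hi, mode)
             for n, (lo, mode, hi) in ESTIMATES.items()}
        critical_counts["spec"] += 1  # "spec" precedes both branches, so always critical
        longer = "code" if d["code"] >= d["docs"] else "docs"
        critical_counts[longer] += 1
    return {n: c / trials for n, c in critical_counts.items()}
```

With these numbers, "code" dominates most trials, but "docs" lands on the critical path a noticeable fraction of the time—exactly the kind of risk a single deterministic schedule hides.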
Finally, a synergy occurs when your simulation software adds logic to statistics. So, for example, @Risk puts probabilistic branching, if/then conditional modeling, and the like at your fingertips, which, to a great extent, can give you the feeling that you're "flying on autopilot." But you're not. My admonition that nobody can know the probability of future events with certainty still holds. Use these powerful statistical and logical tools, but use them cautiously.
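Probabilistic branching can be illustrated in the same spirit as the earlier sketches. In this hypothetical example, a rework branch is taken with some probability after testing, lengthening the schedule; the probabilities and durations are invented.

```python
import random

def duration_with_rework(p_fail=0.3, trials=10_000):
    """Average project duration when a rework branch is taken with
    probability p_fail after testing (all figures illustrative)."""
    totals = []
    for _ in range(trials):
        total = random.triangular(10, 20, 14)      # build + test, in days
        if random.random() < p_fail:               # probabilistic branch
            total += random.triangular(3, 10, 5)   # rework, then retest
        totals.append(total)
    return sum(totals) / trials
```

As you would expect, the average duration grows by roughly the branch probability times the mean rework time—logic and statistics working together, just as in the commercial tools.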
Management needs to be able to plan and control the activities of the organization. At the same time, organizations are run by people (senior management, project managers, and team members) who can perform with higher or lower levels of efficiency, creativity, and job satisfaction. In the past, project management focused on "management," which implied a top-down view of how projects are conducted.
However, studies of weaving mills in India and coal mines in England in the early 1950s discovered that work groups could effectively handle many production problems better than management if they were permitted to make their own decisions on scheduling, work allocation among members, bonus sharing, and so forth. This was particularly true when there were variations in the production process requiring quick reactions by the group or when the work of one shift overlapped with the work of other shifts. These findings apply, although not always directly, to the people who contribute to large-scale, complex, often-distributed projects such as those typically scheduled by tools like Microsoft Project and Palisade's @Risk. But, these people don't work in a vacuum:
At one end of the spectrum, opposed to the "autonomous" work groups described above, are toxic project managers—those who tightly control and manipulate others for their own aggrandizement. They can actually degrade the quality of work, morale, and even the stability of an organization. Organizations that tolerate or reward toxic behavior are heading for an inevitable fall, because it usually creates tension between labor and management that consumes the energy of both.
Then, in contradistinction to the effective work-group members described above, solitary problem solvers who excel at figuring out how to handle "nontrivial" tasks are found throughout many organizations. Forcing these "loners" to work by team rules can neutralize the potential contribution of some of your organization's most valuable members.
So what's all this talk about "good" team members and "bad" project managers got to do with project management software and systems? Just this: The savings of time (a.k.a. money) made possible by the proper use of today's powerful project management software and systems can be dwarfed by the economic losses that follow from not considering the wants, needs, and psychological makeup of the people who do the actual work your software attempts to model, as well as the behavior of the project managers who run these applications.
What's the bottom line? That is, how can you measure the net effect of how well the people in your organization—all of them—are getting along and working with one another and, also, how well you're using the tools and computer technologies available today to model their work on your projects and keep it confidential? Often, it's the people issues that have a bigger impact on your ROI (return on investment) than the tools and computer technologies. For example, just one disgruntled employee can neutralize the protection thought to be provided by your investment in PKI technology. There are, of course, sophisticated ways to measure how well you're doing (see Reference 4 for further details). However, you often can get a pretty good idea of how your project management is doing overall through the use of some conspicuously low-tech metrics, such as the rate of employee turnover, whether or not your customers return with future jobs, and whether or not your management responds favorably to your next requests.
Marcia Gulesian is Chief Technology Officer of Gulesian Associates, a consulting firm that has advised corporations, universities, and governments worldwide. She is author of more than 100 articles on software development, its economics, and its management. You can email Marcia at firstname.lastname@example.org.