InnoNuggets


Thursday, September 20, 2007

Software Reliability and Complexity

Software Reliability Growth Models

Software reliability models are developed on the premise that with more testing and bug fixing, reliability will increase as the number of bugs remaining in the system goes down. This rests on the assumption that the total number of bugs in a developed software system is fixed. These models are called software reliability growth models (SRGMs). This, however, is the perfect-debugging scenario, in which the process of fixing bugs does not introduce new bugs. In SRGMs that assume imperfect debugging, the modeling is based on the assumption that fixing a bug may introduce more bugs, but the number of bugs after the fix is always less than the number before the fix.

Since the processes of bug occurrence and bug fixing are non-deterministic, most of these models are stochastic in nature. This implies that there is a probability function associated with both the error content of the system and the error detection rate at a particular instant in the system's life cycle. The probability distribution typically assumed is the Poisson distribution. When the mean of the process varies with time, the process is called a Non-Homogeneous Poisson Process (NHPP). One of the earliest models in this class is the Musa model, which was developed from the basic Goel-Okumoto model.

The Musa model belongs to the NHPP exponential model class and was first proposed in the mid-1980s. Since then the field of software reliability modeling has progressed further, and many newer models closer to reality have been developed and used. Within the NHPP class, the NHPP S-shaped, NHPP imperfect-debugging, and NHPP S-shaped imperfect-debugging models are newer advancements.
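The basic Goel-Okumoto exponential NHPP model referred to above has a well-known mean value function m(t) = a(1 - e^(-bt)), where a is the expected total number of bugs and b the per-bug detection rate. A minimal sketch (the parameter values here are illustrative assumptions, not data from any real project):

```python
import math

def go_mean_failures(t, a, b):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b*t)).

    a: expected total number of bugs in the system
    b: per-bug detection rate
    Returns the expected cumulative number of bugs found by time t.
    """
    return a * (1.0 - math.exp(-b * t))

def go_failure_intensity(t, a, b):
    """Failure intensity lambda(t) = m'(t) = a * b * exp(-b*t)."""
    return a * b * math.exp(-b * t)

# Illustrative parameters (assumed, not from any real project):
a, b = 120.0, 0.05   # 120 total expected bugs, detection rate 0.05 per unit time
found_by_10 = go_mean_failures(10, a, b)     # expected bugs found by t = 10
remaining_at_10 = a - found_by_10            # expected bugs still in the system
```

Note that as t grows, m(t) approaches a, which is exactly the "fixed total number of bugs" assumption described above, while the failure intensity decays toward zero.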

All these models require an accurate and robust parameter estimation mechanism. A major parameter of interest is the estimated total number of bugs in the system. With such an estimate, a decision on when to release the product can be taken by estimating the bugs remaining at a particular instant of time. Ideally one would like zero bugs at release; however, the amount of testing needed to find and eliminate all bugs is prohibitive in both time and cost. SRGMs help designers and developers estimate the remaining bugs, which can be used to take a call on fitness to release.
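As a hedged sketch of such parameter estimation (one simple approach among many; practitioners typically use maximum likelihood): for the exponential model, given a fixed b, the least-squares optimal a has a closed form, so a coarse grid search over b is enough for illustration. The failure data below are synthetic, generated from known parameters:

```python
import math

def fit_goel_okumoto(times, cum_bugs):
    """Least-squares fit of m(t) = a*(1 - exp(-b*t)) to observed data.

    For a fixed b, the best a is sum(m_i*u_i) / sum(u_i^2), where
    u_i = 1 - exp(-b*t_i); we grid-search b and keep the best pair.
    """
    best = (float("inf"), None, None)       # (sse, a, b)
    for k in range(1, 500):
        b = k / 1000.0                      # try b in (0.001, 0.5)
        u = [1.0 - math.exp(-b * t) for t in times]
        a = sum(m * ui for m, ui in zip(cum_bugs, u)) / sum(ui * ui for ui in u)
        sse = sum((m - a * ui) ** 2 for m, ui in zip(cum_bugs, u))
        if sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]                 # (a_hat, b_hat)

# Synthetic data generated from a = 100, b = 0.1 (illustrative only):
times = [5, 10, 20, 30, 50]
cum_bugs = [100 * (1 - math.exp(-0.1 * t)) for t in times]
a_hat, b_hat = fit_goel_okumoto(times, cum_bugs)
remaining = a_hat - cum_bugs[-1]            # estimated bugs left at t = 50
```

With a_hat in hand, the "remaining bugs" figure is what feeds the fit-to-release decision described above.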

System Complexity Estimator

System understanding requires a detailed analysis of system complexity and the factors that contribute to it. Complexity emerges from the demand to provide more functionality within multiple constraints. The System Complexity Estimator (SCE) developed by me takes into account the functionality provided by each module and the dependency of each module on the rest of the system to compute each module's relative contribution to the overall complexity. The analysis is based on two fundamental concepts of software design: cohesion and coupling. The best design is one with minimum coupling between modules and maximum cohesion within each module. Complexity is measured relative to an ideal system of minimum complexity, defined as a system in which no module depends on any other module for its functioning and each module performs only one function. The output of this analysis can help in prioritization of development effort, resource planning, integration scheduling, etc.
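The SCE itself is not published here, so the following is only a toy illustration of the coupling/cohesion idea (the module names, weights, and scoring are my assumptions, not the actual SCE): each module is penalized for every function beyond one (a cohesion penalty) and for every dependency (a coupling penalty), so an ideal module scores zero, and each module's share of the system total gives its relative contribution.

```python
def relative_complexity(modules):
    """Toy per-module complexity score (NOT the author's actual SCE).

    modules: dict mapping module name -> (num_functions, set of dependencies).
    An ideal module (1 function, 0 dependencies) scores 0.
    Score = (functions beyond one) + (number of dependencies);
    each module's share is its score divided by the system total.
    """
    scores = {name: (nf - 1) + len(deps) for name, (nf, deps) in modules.items()}
    total = sum(scores.values()) or 1       # avoid division by zero for an ideal system
    return {name: s / total for name, s in scores.items()}

# A hypothetical three-module system (names and numbers are made up):
system = {
    "parser":  (3, {"lexer"}),              # 3 functions, depends on lexer
    "lexer":   (1, set()),                  # ideal module: one function, no deps
    "codegen": (2, {"parser", "lexer"}),    # 2 functions, two dependencies
}
shares = relative_complexity(system)        # relative contribution of each module
```

Here "lexer" contributes nothing to the relative complexity, matching the ideal-system definition above, while "parser" and "codegen" split the rest.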

McCabe Complexity/Code Complexity

Cyclomatic complexity is the most widely used member of a class of static software metrics, and may be considered a broad measure of soundness and confidence for a program. Introduced by Thomas McCabe in 1976, it measures the number of linearly independent paths through a program module. This measure provides a single ordinal number that can be compared with the complexity of other programs. Cyclomatic complexity is often referred to simply as program complexity, or as McCabe's complexity, and is often used in concert with other software metrics. As one of the more widely accepted software metrics, it is intended to be independent of language and language format.
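For a control-flow graph with E edges, N nodes, and P connected components, McCabe's measure is V(G) = E - N + 2P (equivalently, the number of decision points plus one for a single structured routine). A minimal sketch with a hand-built graph for one if/else:

```python
def cyclomatic_complexity(num_edges, num_nodes, num_components=1):
    """McCabe's formula V(G) = E - N + 2P over a control-flow graph."""
    return num_edges - num_nodes + 2 * num_components

# Control-flow graph of a single if/else (hand-drawn, for illustration):
#   entry -> decision -> then-branch -> exit
#                     -> else-branch -> exit
edges = [("entry", "decision"), ("decision", "then"), ("decision", "else"),
         ("then", "exit"), ("else", "exit")]
nodes = {"entry", "decision", "then", "else", "exit"}
v = cyclomatic_complexity(len(edges), len(nodes))   # one decision point + 1 = 2
```

A straight-line function with no branches would give V(G) = 1, and each additional decision adds one more independent path.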

Maintainability Index
A program's maintainability is calculated using a combination of widely-used and commonly-available measures to form a Maintainability Index (MI). The basic MI of a set of programs is a polynomial of the following form (all are based on average-per-code-module measurement):
171 - 5.2 * ln(aveV) - 0.23 * aveV(g') - 16.2 * ln (aveLOC) + 50 * sin (sqrt(2.4 * perCM))
The coefficients are derived from actual usage. The terms are defined as follows:
aveV = average Halstead Volume V per module
aveV(g') = average extended cyclomatic complexity per module
aveLOC = the average count of lines of code (LOC) per module; and, optionally
perCM = average percent of lines of comments per module
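Plugging the terms above straight into the polynomial gives a small MI calculator (a direct transcription of the formula; any example input values are illustrative, not measurements from real code):

```python
import math

def maintainability_index(ave_v, ave_vg, ave_loc, per_cm=None):
    """Basic Maintainability Index, per the polynomial above.

    ave_v:  average Halstead Volume V per module
    ave_vg: average extended cyclomatic complexity per module
    ave_loc: average lines of code per module
    per_cm: average percent of comment lines per module (optional term)
    """
    mi = (171
          - 5.2 * math.log(ave_v)
          - 0.23 * ave_vg
          - 16.2 * math.log(ave_loc))
    if per_cm is not None:                       # the optional comment term
        mi += 50 * math.sin(math.sqrt(2.4 * per_cm))
    return mi

# Illustrative (made-up) module averages:
mi = maintainability_index(ave_v=100.0, ave_vg=5.0, ave_loc=50.0, per_cm=10.0)
```

Note the log terms: doubling the average volume or LOC costs a fixed number of MI points, so the index degrades gracefully rather than linearly as modules grow.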

So what to do: how to use these models and metrics, and when?

In the software development life cycle, these models and metrics need to be placed appropriately so that the increase in complexity from one phase of the life cycle to the next is kept under control while the reliability of the system being developed is maintained. This is easier said than done. As of now, I don't know of any end-to-end model for managing complexity while simultaneously increasing the reliability of the software being developed and, in the end, making the system maintainable. Maybe food for thought!
