FIND InnoNuggets

 

Thursday, September 20, 2007

Creating Brilliant Enterprises

Recently I read an old paper of mine titled Creating Next Generation Enterprises, which was published in the erstwhile eAI Journal in 2002. Well, 5 years back! Isn't that too old?

I had described the journey of enterprises from Data Driven -> Information Driven -> Knowledge Driven -> Intelligent Decision Making (which I called UDEK - Ubiquitous Decision Enabling Knowledge) -> Smart Enterprises, and finally the Brilliant Enterprises, which will be not only smart but self-learning and environment aware - where Innovation is the norm.

Looking at my own journey, I think I have been exploring various techniques - modeling and simulation, multi-criteria decision making, TRIZ, Lean, and now the study of complex systems as reflected in the so-called New Science. I believe the subconscious mind works on its own - I did not explore these fields in any systematic manner, yet I have been moving into them inadvertently. Amazing.

The concept of Brilliant Enterprises came to me from defence terminology - having read about the progression from dumb bombs to smart munitions and then the so-called brilliant bombs, a term coined to differentiate the inherent intelligence of the new bombs from the dumb bombs of the old technology.

So the brilliance really lies in continuous knowledge, learning, adapting and exploring change. In effect, it is about enterprise-wide Innovation.

Process Productivity - Software Equation

The software equation proposed by Putnam relates the size of software, the effort spent and the time taken in a non-trivial way. The resulting factor is defined as Process Productivity. Putnam argues that the true measure of productivity should be Process Productivity, not the conventional measure defined as KLOC/(person-week). The conventional measure has been criticized because its value varies widely at any given size the estimator is considering, and even more widely from one size to another.

The Putnam equation [Putnam L.H., and Myers W., Five Core Metrics – The Intelligence Behind Successful Project Management, Dorset House Publishing, New York, 2003], as it is known, expresses a relationship between software size, effort spent and time taken to deliver the software. It relates these quantities through what is termed process productivity. The equation in words is

Size (at Defect Rate) = Effort x Time x Process Productivity (1)

The equation in its generic form is

Size (at Defect Rate) = Effort^a x Time^b x Process Productivity (2)

After studying various projects, Putnam found the relationship to be

Size (at Defect Rate) = (Effort/β)^(1/3) x (Time)^(4/3) x Process Productivity (3)

This is called the software equation. Beta (β) is a size-dependent parameter that has the effect of giving greater weight to the effort factor in very small systems.
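
To make the coupling concrete, here is a minimal sketch (in Python) of how the equation can be rearranged to back out process productivity from a completed project's actuals. The units (SLOC, person-months, months) and the default beta value are illustrative assumptions of mine, not values taken from Putnam's tables.

# Rearranging the software equation:
# Process Productivity = Size / ((Effort/beta)^(1/3) * Time^(4/3))
# Units and the beta value are illustrative assumptions only.

def process_productivity(size_sloc, effort_person_months, time_months, beta=1.0):
    return size_sloc / ((effort_person_months / beta) ** (1.0 / 3.0)
                        * time_months ** (4.0 / 3.0))

# Example: a 50,000 SLOC system delivered with 100 person-months over 12 months.
pp = process_productivity(50_000, 100, 12)
print(f"Process productivity (illustrative units): {pp:,.0f}")

Tracking a single coupled number like this across projects is exactly the kind of experiment that, as noted below, seems largely missing.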

It is a real surprise that software development organizations haven't used this as widely as they could. In fact, there is hardly any evidence of experimentation with this coupled metric either. Maybe we need to explore this more!

Software Reliability and Complexity

Software Reliability Growth Models

Software reliability models are developed with the premise that with more testing and bug-fixing, reliability will increase as the number of bugs remaining in the system goes down. This is based on the assumption that the total number of bugs in a developed software system is fixed. These models are called software reliability growth models (SRGMs). This is, however, the perfect debugging scenario, where the process of fixing bugs doesn't introduce more bugs. In the SRGMs where imperfect debugging is assumed, the modeling is based on the assumption that by fixing bugs one may introduce more bugs, but the number of bugs after the fix will always be less than the number of bugs before the fix.

Since the process of bug occurrence and bug fixing is non-deterministic, most of these models are stochastic in nature. This implies that there is a probability function associated with both the error content in the system and the error detection rate at a particular instant of time in the life cycle of the system. The typical probability distribution assumed is the Poisson distribution. When the average of the probability distribution varies with time, the process is called a Non-Homogeneous Poisson Process (NHPP). One of the earliest models in this class is the Musa model, which was developed from the basic Goel-Okumoto model.

The Musa model belongs to the NHPP exponential model class and was first proposed in the mid-1980s. Since then the field of software reliability modeling has progressed further, and many new models closer to reality have been developed and used. Within the NHPP class, the NHPP S-shaped, NHPP imperfect debugging and NHPP S-shaped imperfect debugging models are more recent advancements.

All these models require an accurate and robust parameter estimation mechanism. A major parameter of interest is the estimate of the total number of bugs in the system. With such an estimate, a decision on when to release the product can be taken by estimating the bugs remaining at a particular instant of time. Ideally one would like to have zero bugs when the product is released; however, the amount of testing needed to find and eliminate all the bugs is time and cost prohibitive. The SRGMs can help designers and developers estimate the remaining bugs, which can be used to take a call on fitness for release.
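
As a rough illustration of how an SRGM supports that release call, the Python sketch below fits the basic Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to cumulative defect counts and reports the estimated bugs remaining. The weekly defect data are invented purely for illustration; a real analysis would also check goodness of fit before trusting the estimate.

# Goel-Okumoto NHPP model: m(t) = a * (1 - exp(-b*t)) is the expected
# cumulative number of defects found by time t, where 'a' is the total
# defect content and 'b' the per-defect detection rate.
# The data below are made-up illustration values, not from a real project.

import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_defects = np.array([12, 22, 30, 36, 41, 44, 47, 49], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_defects, p0=[60.0, 0.2])
remaining = a_hat - cum_defects[-1]

print(f"Estimated total defects a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"Estimated defects remaining after week 8: {remaining:.1f}")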

System Complexity Estimator

System understanding requires a detailed analysis of system complexity and the factors that contribute to it. Complexity emerges from the demand to provide ever more functionality within multiple constraints. The System Complexity Estimator (SCE) I developed takes into account the functionality provided by each module and the dependency of each module on the rest of the system to compute the relative contribution of each module to the overall complexity. The analysis is based on two fundamental concepts of software design: cohesion and coupling. The best design is one with minimum coupling between modules and maximum cohesion within each module. Complexity is measured relative to an ideal system with minimum complexity, defined as a system in which no module depends on any other module for its functioning and each module performs only one function. The output of this analysis can help in prioritization of development effort, resource planning, integration scheduling, etc.
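
The SCE formulas themselves are not reproduced in this post, so the following Python fragment is only a toy sketch of the kind of computation described above: each module gets a score from a cohesion proxy (functions provided) and a coupling proxy (modules depended on), with the ideal module scoring 1, and the scores are then normalized into relative contributions. The scoring scheme and the module data are my own illustrative assumptions, not the actual SCE.

# Toy illustration only -- not the actual SCE formulas.
# Each module is described by (functions provided, modules depended on).
# An "ideal" module performs one function and depends on nothing, scoring 1.

modules = {
    "billing":   (5, 3),   # made-up example data
    "reporting": (2, 1),
    "auth":      (1, 0),   # an "ideal" module in this toy scheme
}

def module_score(functions, dependencies):
    # One point per function (low cohesion) plus one per dependency (coupling).
    return functions + dependencies

scores = {name: module_score(f, d) for name, (f, d) in modules.items()}
total = sum(scores.values())

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} relative contribution to complexity = {s/total:.0%}")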

McCabe Complexity/Code Complexity

Cyclomatic complexity is the most widely used member of a class of static software metrics. Cyclomatic complexity may be considered a broad measure of soundness and confidence for a program. Introduced by Thomas McCabe in 1976, it measures the number of linearly independent paths through a program module. This measure provides a single ordinal number that can be compared to the complexity of other programs. Cyclomatic complexity is often referred to simply as program complexity, or as McCabe's complexity. It is often used in concert with other software metrics. As one of the more widely accepted software metrics, it is intended to be independent of language and language format.
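
McCabe's measure can be computed directly from the control-flow graph as V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components). The small Python sketch below does just that; the little graph counted in the example - one if/else followed by one while loop - is a hypothetical case of mine.

# Cyclomatic complexity from the control-flow graph: V(G) = E - N + 2P.

def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# A function with one if/else and one while loop gives a control-flow
# graph with 7 nodes and 8 edges in a single connected component.
print(cyclomatic_complexity(edges=8, nodes=7))  # -> 3 (two decision points + 1)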

Maintainability Index

A program's maintainability is calculated using a combination of widely-used and commonly-available measures to form a Maintainability Index (MI). The basic MI of a set of programs is a polynomial of the following form (all are based on average-per-code-module measurement):
171 - 5.2 * ln(aveV) - 0.23 * aveV(g') - 16.2 * ln (aveLOC) + 50 * sin (sqrt(2.4 * perCM))
The coefficients are derived from actual usage. The terms are defined as follows:
aveV = average Halstead Volume V per module
aveV(g') = average extended cyclomatic complexity per module
aveLOC = the average count of lines of code (LOC) per module; and, optionally
perCM = average percent of lines of comments per module
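
The polynomial can be transcribed directly; the Python sketch below does so, with made-up average values purely for illustration (different MI variants treat the comment term differently - this follows the formula exactly as quoted above).

# Direct transcription of the Maintainability Index polynomial quoted above.
# Input averages are invented for illustration only.

import math

def maintainability_index(ave_v, ave_vg, ave_loc, per_cm):
    return (171
            - 5.2 * math.log(ave_v)        # average Halstead Volume per module
            - 0.23 * ave_vg                # average extended cyclomatic complexity
            - 16.2 * math.log(ave_loc)     # average lines of code per module
            + 50 * math.sin(math.sqrt(2.4 * per_cm)))  # average % comment lines

# Example: aveV = 1500, aveV(g') = 12, aveLOC = 120, perCM = 20 (%)
print(f"MI = {maintainability_index(1500, 12, 120, 20):.1f}")
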
---------------------------------------------

So what to do - how to use these, and when?

In the software development life cycle, these models and metrics need to be placed appropriately so that the increase in complexity from one phase of the life cycle to the next is kept under control while the reliability of the system being developed is maintained. This is easier said than done. As of now, I don't know of any end-to-end model for managing complexity while simultaneously increasing the reliability of the software being developed and, at the end, making the system maintainable. Maybe food for thought!
