SOURCE CODE METRICS

Objective measurement of code complexity and structure

Automated Source Code Metrics for C and C++

Cantata provides over 300 source code metrics for C/C++, giving useful objective measurement and visualisation of the non-functional qualities of the source code.

Measuring these non-functional qualities involves static inspection of the source code, and is invoked on a build of a Cantata-enabled software project. The analysis can be configured in the following ways:

  • Enabling / disabling analysis of system headers
  • Specifying which static analysis metrics to calculate (through an options file)
  • Specifying the source files, functions or classes to be analysed
  • Specifying precisely the source code statements to be analysed (through pragmas in the source code)

Code quality and complexity metrics provided by Cantata can help users identify the areas of the code most likely to contain bugs, as well as producing data from which the time required for testing can be estimated. Once the metrics have been gathered by Cantata they can be processed and manipulated using an add-in for Microsoft Excel.

Code Complexity and Structure

Cantata supports code complexity metrics on procedural source code as a means of increasing the maintainability of software, through objective measurement using recognised ‘academic’ and common-sense metrics (a worked Halstead sketch follows the list):

  • Halstead’s Software Science metrics.
  • McCabe’s, Myers’ and Hansen’s cyclomatic complexity metrics.
  • Average and maximum nesting level.
  • Basic counts of language constructs (comments, lines of code, statements, parameters etc).
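
As a hedged illustration of how Halstead’s measures are derived, the following sketch computes the textbook Software Science values from the four basic counts. The counts themselves are invented for the example, not taken from any real source file, and Cantata’s exact set of Halstead metrics may differ in naming and scope.

    #include <cmath>
    #include <cstdio>

    int main() {
        // Invented example counts, not from any real source file
        double n1 = 10;  // distinct operators
        double n2 = 14;  // distinct operands
        double N1 = 40;  // total operator occurrences
        double N2 = 55;  // total operand occurrences

        double vocabulary = n1 + n2;                         // n = n1 + n2
        double length     = N1 + N2;                         // N = N1 + N2
        double volume     = length * std::log2(vocabulary);  // V = N log2(n)
        double difficulty = (n1 / 2.0) * (N2 / n2);          // D = (n1/2)(N2/n2)
        double effort     = difficulty * volume;             // E = D x V

        std::printf("n=%.0f N=%.0f V=%.1f D=%.1f E=%.1f\n",
                    vocabulary, length, volume, difficulty, effort);
        return 0;
    }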

Object Oriented Implementation

In addition to code complexity measures for object oriented code, Cantata also provides a number of metrics which measure aspects of object oriented implementation. These include:

  • Chidamber and Kemerer’s MOOSE metric set.
  • Fernando Brito e Abreu’s MOOD metric set.
  • Bansiya and Davis’ QMOOD metric set.
  • Robert Martin’s object-oriented dependency metrics.
  • McCabe’s object-oriented metrics.
  • Bansiya’s class entropy metrics.

All metrics are provided at the function, class, translation unit, or system level, as appropriate.
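
As a sketch of what some of these object-oriented measures capture, consider the small hierarchy below, annotated by hand from the published metric definitions. The class names are invented for illustration and the figures are not Cantata output.

    class Shape {                         // depth of inheritance 0; 2 children
    public:
        virtual ~Shape() = default;
        virtual double area() const = 0;  // method visible to clients
    private:
        int id_ = 0;                      // hidden attribute
    };

    class Circle : public Shape {         // depth of inheritance 1
    public:
        double area() const override { return 3.14159 * r_ * r_; }
    private:
        double r_ = 0.0;                  // new attribute defined here
    };

    class Square : public Shape {         // depth of inheritance 1
    public:
        double area() const override { return s_ * s_; }
    private:
        double s_ = 0.0;
    };

Here the MOOSE depth of inheritance for Circle and Square is 1, Shape’s number of children is 2, and all data attributes are hidden, giving the maximum MOOD attribute hiding factor of 1.0.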

Coding Standards

Organisations are increasingly adopting coding standards as a means of improving software quality and maintainability. However, unless these standards can be verified in an automated way, it is difficult to enforce them effectively. While Cantata is not a coding standards rule-checking product, it does provide the developer with static analysis metrics on the use of several constructs of interest, such as:

  • Unreachable code
  • Switch statements with no default, and case blocks that fall through
  • Number of goto statements, and of used and unused goto labels
  • Lack of cohesion of methods

Testing Effort Estimation

Understanding how complex the source code is can be very helpful when estimating how long it will take to test. Cantata’s industry-standard complexity metrics can be used to estimate the testing effort for source items. An example is McCabe Cyclomatic Complexity and its variants, whose value equals the minimum number of test cases required to achieve 100% decision coverage of the code.
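
To make that rule concrete, here is a small worked example (an invented function, with the McCabe value counted by hand):

    int grade(int score) {
        if (score < 0)        // decision 1
            return -1;
        if (score > 100)      // decision 2
            return -1;
        if (score >= 50)      // decision 3
            return 1;
        return 0;
    }

With three binary decisions, V(G) = 3 + 1 = 4, so four test cases (for example score = -1, 101, 75 and 10) are the minimum needed to exercise every decision both ways and reach 100% decision coverage.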

Visualising and reporting metrics

Although the formatted metrics are very useful, it is often more helpful to visualise the data graphically. Plotting the data can aid understanding and reveal underlying trends that are not immediately obvious when reading the metrics as numbers alone. Metrics can be plotted at the class, function or category level.

Example uses of Metrics

As Cantata can produce over 300 static metrics on source code, the tables below give some examples of specific metrics and their most useful applications. For an exhaustive list, please refer to the Cantata manual.

Standard Code Size Metrics

These are simple metrics regarding the number of lines of code, comments, etc.

Name | Description | Scope
LINE_CODE | Total number of lines of code (including blank lines and comments). | Function or system
LINE_COMMENT | Total number of lines of comments (both C and C++ style). | Function or system
LINE_SOURCE | Total number of lines of source code (excluding blank lines and comments). | Function or system
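
For example, the following tiny file (counted by hand) would give LINE_CODE = 8, LINE_COMMENT = 3 and LINE_SOURCE = 4. Exact counting conventions, such as how a line containing both code and a trailing comment is classified, are tool-specific, so treat these figures as illustrative.

    /* Returns the larger of
       two values. */

    // C++ style comment line
    int max2(int a, int b)
    {
        return a > b ? a : b;
    }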

Standard Code Quality Metrics

The quality of a piece of software is, to some degree, reflected in the number of occurrences of dubious code it contains. These metrics alert the user to such occurrences.

Name | Description | Scope
LABEL_GOTOUSED | Number of goto labels that are used. | Function or system
LABEL_GOTOUNUSED | Number of unused goto labels. | Function or system
STMT_GOTO | Number of goto statements. | Function or system
SWITCH_NODEF | Number of switch statements with no default. | Function or system
SWITCH_FALLTHRU | Number of non-empty case blocks which fall through to the next case block. | Function or system
UNREACHABLE | Number of statically unreachable statements in the given scope. | Function or system
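
The following fragment, invented for illustration, contains one instance of each construct these metrics count, marked with the corresponding metric name:

    int classify(int code) {
        switch (code) {           // SWITCH_NODEF: no default branch
        case 1:
            code += 10;           // SWITCH_FALLTHRU: non-empty case block
        case 2:                   // that falls through into case 2
            code += 1;
            break;
        }
        goto done;                // STMT_GOTO
        code = -1;                // UNREACHABLE: cannot be executed
    done:                         // LABEL_GOTOUSED
        return code;
    fail:                         // LABEL_GOTOUNUSED: never targeted
        return -1;
    }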

Standard Complexity Metrics

The complexity of a piece of code is generally regarded as a measure of the effort involved in maintaining it. These metrics estimate the complexity of the software from various factors, such as the level of nesting.

Name | Description | Scope
HALSTEAD_PARAMS | Number of parameters. | Function
MCCABE | The McCabe Cyclomatic Complexity value for the function. | Function
NESTING_MAX | Maximum statement nesting level. | Function
NESTING_SUM | Sum of the statement nesting levels for all statements in the function. | Function
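
As an illustration of the nesting measures, consider the invented function below. Counting conventions vary between tools, so the figures assume statements directly inside the function body are at level 0; Cantata’s exact convention may differ.

    void f(int a, int b) {
        int x = 0;            // level 0
        if (a > 0) {          // level 0
            x = 1;            // level 1
            if (b > 0) {      // level 1
                x = 2;        // level 2
            }
        }
    }

Under that convention NESTING_MAX = 2 and NESTING_SUM = 0 + 0 + 1 + 1 + 2 = 4.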

Specialist Object Oriented Metrics

Many standard metrics still apply to OO systems: for example, the maximum nesting level within functions applies equally to class methods. However, there is also a range of OO-specific metrics, defined either with respect to a given class or for the system as a whole.

Name | Description | Scope
MAX_DEPTH | Maximum length of inheritance path to ultimate base class. | System
MOOD_AD | Number of new attributes defined for this class. | Class
MOOD_MD | Number of new methods plus overridden methods defined for this class. | Class
MOOD_AHF | Proportion of attributes that are hidden (private or protected). | Class
MOOD_MHF | Proportion of methods that are hidden (private or protected). | Class
MOOSE_CBO | Level of coupling between objects: the number of classes with which this class is coupled (via a non-inheritance dependency from this class to that, or vice versa). | System
MOOSE_WMC_MCCABE | Average McCabe Cyclomatic Complexity value for all methods of the class (excluding inherited methods) defined in this translation unit. | Class
MOOSE_LCOM98 | Chidamber & Kemerer’s Lack of Cohesion of Methods metric (1998 definition): the minimum number of disjoint clusters of (new or overridden) methods (excluding constructors), where each cluster operates on a disjoint set of (new) instance variables. | Class
MOOSE_RFC | Chidamber & Kemerer’s Response For a Class metric: the number of methods or functions defined in the class or called by methods of the class. | Class
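
As a worked example of the two hiding factors, here is an invented class with the proportions counted by hand from the definitions above:

    class Counter {
    public:
        int  limit = 0;        // visible attribute
        void reset();          // visible method
        int  value() const;    // visible method
    private:
        int  count_ = 0;       // hidden attribute
        int  step_  = 1;       // hidden attribute
        void clamp();          // hidden method
    };

For this class MOOD_AHF = 2/3 ≈ 0.67 (two of three attributes hidden) and MOOD_MHF = 1/3 ≈ 0.33 (one of three methods hidden).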

The ‘OO’ aspects of the C++ language have tended to render the old procedural C metrics less useful, but fortunately new sets of metrics have taken their place. The popular ones include MOOSE (Metrics for OO Software Engineering), MOOD (Metrics for OO Design), and QMOOD (Quality Metrics for OO Design). Between them they define a number of metrics which can be useful for judging whether a C++ class is ‘worth testing’. Some examples are:

Quality identified in source code | Example metrics
Poor or Questionable Design | ‘MCCABE Quality’, ‘MOOSE Lack of Cohesion among Methods’, ‘MOOD Attribute Hiding Factor’
Estimated Number of Faults | ‘MOOD Methods Defined’, ‘MOOD Attributes Defined’, ‘MOOSE Weighted Methods in Class’
General Complexity | ‘MOOSE Depth of Inheritance’, ‘QMOOD Number of Ancestors’, ‘MOOSE Number of Children’
Estimated Test Effort | ‘Methods Available’, ‘MOOSE Response for a Class’, ‘MOOSE Coupling Between Objects’, ‘MOOD Method Hiding Factor’

Additional System Metrics

Additional system-level metrics can be created by taking averages of various class- or function-scope metrics. For example, we can calculate the mean McCabe Cyclomatic Complexity value for all functions or methods within our system.
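
A minimal sketch of that calculation, using invented per-function values rather than real Cantata output:

    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main() {
        // Invented per-function McCabe values for a small system
        std::vector<double> mccabe = {1, 3, 4, 2, 7, 5};
        double mean = std::accumulate(mccabe.begin(), mccabe.end(), 0.0)
                      / mccabe.size();
        std::printf("Mean McCabe complexity: %.2f\n", mean);  // prints 3.67
        return 0;
    }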