AUDITING FUNCTION POINT COUNTS

by David Longstreet

Introduction

The very terms auditor and audit make many of us feel uncomfortable. Many industries have independent inspectors and auditors. As function points become more widely used and a more important part of decision making, those relying on function point counts will want them independently reviewed and audited. Auditing is the process by which a competent, independent person accumulates and evaluates evidence about quantifiable information related to a specific entity, in order to determine and report on the degree of correspondence between that information and established criteria.

As in other industries with prescribed guidelines (such as accounting), independent auditors can provide valuable feedback on the actual function point count and on the overall function point counting process. A function point auditor should be independent, but an auditor can still be inside your company, perhaps part of your metrics team, or can be an independent third party.

To perform an audit of any kind, there must be information in a verifiable form and some standard (ideally IFPUG 4.1) against which the auditor can evaluate the information. The auditor may go to the business premises to examine records and obtain information about the reliability of the function point counts. Alternatively, there may be adequate information that can be sent to the function point auditor and reviewed off site.

Accumulating and Evaluating Evidence

Reviewing a function point count to ensure that IFPUG 4.1 counting guidelines were followed is a compliance audit. The purpose of a compliance audit is to determine whether the function point counts follow the specific procedures and guidelines set down by the IFPUG Counting Practices Committee. The results of a compliance audit are generally reported to someone within the organizational unit being audited rather than to a broad spectrum of users.

Evidence is any information used by the auditor to determine whether the function point count being audited complies with IFPUG guidelines. Evidence can take many different forms: the function point count itself, system documentation, conversations with developers and users, and interviews with the individuals who conducted the original count. The auditor gathers evidence in order to draw conclusions.

Of course the function point count itself can be used as evidence, but the count alone is severely inadequate. It is impossible to determine the accuracy of a function point count without evaluating additional evidence.

If an auditor were given the task of auditing a company with 500,000 function points, it would be impossible to review every count. The auditor may select only 20 or 30 applications to actually audit. The actual sample size will vary from auditor to auditor and from audit to audit. The decision of how many items to test must be made by the auditor for each audit procedure. Several factors determine the appropriate sample size; the two most important are the auditor's expectation of the number of errors and the effectiveness of the client's internal function point counting procedures. The suggested procedures at the end of this document can help in determining both of these criteria.
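To make the sampling step concrete, here is a minimal sketch in Python of one way to draw an audit sample from a portfolio inventory. The portfolio contents, the size-weighted draw, and the sample size are all illustrative assumptions, not prescriptions from this article; in practice the auditor's expectations about errors and the client's internal procedures drive the choice.

    import random

    # Hypothetical portfolio inventory: application name -> counted function points.
    portfolio = {
        "billing": 12500,
        "claims": 48000,
        "payroll": 9300,
        "inventory": 21000,
    }

    def select_audit_sample(portfolio, sample_size, seed=None):
        """Draw applications to audit without replacement, weighting the draw
        by application size so that large counts (where an error matters most)
        are more likely to be reviewed."""
        rng = random.Random(seed)
        remaining = list(portfolio)
        chosen = []
        for _ in range(min(sample_size, len(remaining))):
            weights = [portfolio[app] for app in remaining]
            pick = rng.choices(remaining, weights=weights, k=1)[0]
            chosen.append(pick)
            remaining.remove(pick)
        return chosen

    print(select_audit_sample(portfolio, sample_size=2, seed=7))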

Additionally, the evidence must pertain, or be relevant, to the audit. The auditor must be skilled at finding areas to test or review further. For example, the auditor may determine during conversations that there was some confusion about external inputs and external interface files. In this case, the auditor would review the actual system documentation and the function point count to ensure that all the external inputs and external interface files were treated correctly. As another example, suppose the function point counter had never counted a GUI application. The auditor would review a series of screens and determine whether the original counter had correctly counted such items as radio buttons, check boxes, and so on.

The evidence must be believable, or worthy of trust. If evidence is highly trustworthy, it is a great help to the auditor in a function point audit. On the other hand, if the evidence is in question, such as incomplete or old documentation, the auditor will have to scrutinize those areas of the count more closely. Additionally, the auditor should note in the final report any evidence that was requested but that the client was not able to provide.

All evidence should be evaluated against six objectives: valuation, completeness, classification, rating, mechanical accuracy, and analytical review.

Valuation: This objective deals with whether items included in the function point count should have been included. Perhaps the original function point count included additional transactions or files that should not have been included.

Completeness: This objective deals with including all transactions and files in the final function point count. It is important that the application team review the final function point count to ensure all transactions and files have been included. The valuation and completeness objectives emphasize opposite audit concerns: valuation deals with potential overstatement, completeness with unrecorded transactions and files.

Classification: Classification involves determining whether all transactions and files have been correctly classified. It is important, for example, to make sure that external inputs and external interface files have been classified correctly.

Rating: This objective deals with determining whether the transactions and files were appropriately ranked as low, average, or high. Completing this objective requires a detailed examination of the data elements and the files referenced.

Mechanical Accuracy: Testing mechanical accuracy involves rechecking a sample of the computations and the transfers of information from one document to the next. Rechecking computations consists of testing the original function point counter's arithmetical accuracy. This is most important if an automated tool was not used while counting function points.
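The arithmetic recheck lends itself to a short script. The sketch below recomputes an unadjusted function point total from a component inventory using the standard IFPUG complexity weights; the inventory entries and the claimed total are hypothetical numbers invented for the example.

    # Standard IFPUG weights for each component type at low/average/high complexity.
    WEIGHTS = {
        "EI":  {"low": 3, "average": 4,  "high": 6},
        "EO":  {"low": 4, "average": 5,  "high": 7},
        "EQ":  {"low": 3, "average": 4,  "high": 6},
        "ILF": {"low": 7, "average": 10, "high": 15},
        "EIF": {"low": 5, "average": 7,  "high": 10},
    }

    def unadjusted_fp(inventory):
        """Recompute the unadjusted total from {(type, rating): count} entries."""
        return sum(WEIGHTS[ctype][rating] * count
                   for (ctype, rating), count in inventory.items())

    # Hypothetical entries transcribed from a manual count sheet.
    count_sheet = {("EI", "low"): 12, ("EI", "high"): 3, ("EO", "average"): 8,
                   ("ILF", "average"): 5, ("EIF", "low"): 2}
    claimed_total = 151  # total written on the original sheet (illustrative)

    recomputed = unadjusted_fp(count_sheet)
    if recomputed != claimed_total:
        print(f"Discrepancy: recomputed {recomputed}, sheet says {claimed_total}")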

Analytical Review: This procedure is another way a function point count can be validated. For example, the ratios of external inputs, external outputs, external inquiries, internal logical files, and external interface files can be compared with those of other applications meeting similar business needs. The general system characteristics can also be reviewed and compared to similar applications. Analytical procedures should be performed early in the audit to help the auditor determine the areas that need to be more thoroughly investigated.
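As one illustration of analytical review, the sketch below compares the component mix of the audited application with that of a peer application serving a similar business need, flagging any component whose share of the total differs by more than an assumed threshold. All counts and the 10-point threshold are invented for the example.

    def component_profile(counts):
        """Express each component type's count as a fraction of the total."""
        total = sum(counts.values())
        return {ctype: n / total for ctype, n in counts.items()}

    # Hypothetical component counts for the audited application and a peer.
    audited = {"EI": 40, "EO": 25, "EQ": 15, "ILF": 12, "EIF": 30}
    peer    = {"EI": 38, "EO": 27, "EQ": 18, "ILF": 14, "EIF": 3}

    THRESHOLD = 0.10  # flag shares differing by more than 10 percentage points

    audited_p, peer_p = component_profile(audited), component_profile(peer)
    for ctype in audited:
        gap = abs(audited_p[ctype] - peer_p[ctype])
        if gap > THRESHOLD:
            print(f"{ctype}: share differs by {gap:.0%}; investigate further")

Here the lopsided external interface file share would stand out, which is exactly the kind of EI/EIF confusion described earlier that merits a closer look at the documentation.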

Before an audit or validation of a function point count can take place, a procedure should be in place to evaluate the count(s). The procedure, at a minimum, should cover all the areas mentioned above. It does not have to be a rigid document that is followed to the letter, but a guideline for conducting the audit. At the end of this article is a 20-step procedure that should assist anyone in developing their own guidelines or in auditing a function point count.

The Auditor

The auditor must be qualified to understand the criteria used and competent to know the types and amount of evidence to accumulate in order to reach the proper conclusion after the evidence has been examined. The auditor must also have an independent mental attitude. It does little good to have a competent but biased person performing the audit.

Independence cannot be absolute by any means, but it must be a goal. For example, even though an auditor is paid a fee by a company, they may still be sufficiently independent to conduct audits that users can rely upon. Auditors may not be sufficiently independent, however, if they are also employees of the company being audited.

Types of Reports

The final stage of the audit process is the audit report: the communication of findings to the users. Reports differ in nature, but in all cases they must inform readers of the degree of compliance with IFPUG Counting Guidelines. The auditor can issue three types of reports.

As in all audits, the most common report should be the Standard Unqualified Audit Report. It is used in 90 percent of all audits, and a function point audit should be no different. The standard unqualified report is used when the following conditions are met:

  1. All system and user documentation was included in the original function point count.
  2. IFPUG Counting Practices Guidelines were followed.
  3. The function point count is thoroughly documented, with no outstanding issues.

Another type of audit report covers "conditions requiring a departure." There are two conditions requiring a departure from a Standard Unqualified Audit:

  1. The scope of the audit has been restricted. This is the case when the auditor has not accumulated enough evidence to conclude whether the function point count was completed in accordance with IFPUG 4.1 Counting Guidelines.
  2. The function point count was not completed in accordance with IFPUG 4.1 Counting Guidelines. In this case, a detailed analysis outlining the specific areas of noncompliance should be included in the final report.

Additionally, the auditor may issue an adverse opinion. An adverse opinion is used only when the auditor believes the overall function point counts are so materially misstated or misleading that they do not fairly present the functional size of the application being counted. The auditor should be very specific about why they are reaching this conclusion.

A 20 Step Procedure for Auditing a Function Point Count

  1. Was the task of counting function points included in the overall project plan?
    All activities the project team engages in should be items in the project plan. Ensure that an adequate amount of time has been dedicated to completing the task.
  2. Is the person performing the function point count trained in Function Point Counting? Are they certified?

    Too often, function point counts are completed by individuals not trained in function point counting. Formal classroom training may not be necessary, but the individual conducting the count should be familiar with the IFPUG 4.1 counting rules.

    It is even better if the person completing the count has passed the IFPUG Certification exam. Passing the exam does not guarantee accurate counts, but it does guarantee a minimal level of competency.
  3. Was the IFPUG 4.1 Counting Practices Manual followed?
  4. Did the function point counter use current project documentation to count function points? If not, how old was the documentation?
  5. Did the project team participate in the function point count?
    The project team should be the most knowledgeable individuals regarding the functionality being delivered to the user, and they are the best source of information about the project. Frequently, though, the project team is left in the dark when a function point count is completed: the function point counter gathers some documentation, sits in a room for a few days, and out comes a function point number. This will cause the project team to question the accuracy of the number.
  6. Were internally developed function point counting guidelines followed?
  7. Was the application counted from the user's point of view?
  8. Was the system counted from a logical and not a physical point of view?
  9. Does the established boundary for the FP count match the boundary of other metrics (time reporting, defect tracking)? If not, why?
  10. If the function point count was for an enhancement, was the boundary the same as the boundary for the application? If not, why?
  11. Has the boundary changed? If so, why?
  12. Was any tool used for function point counting or was the count done manually?
    If the count was done manually, the arithmetic needs to be reviewed (the mechanical accuracy sketch earlier in this article shows one way to recheck a total).
  13. Do the individual function point component (ILF, EIF, EI, EO, and EQ) percentages conform to industry averages? If not, is there a valid reason?
    If auditing several applications, are the percentages of transactions and files similar?
  14. Has an inventory of transactions (EI, EO, and EQ) and files (ILF and EIF) been reviewed by the project team?
    The greatest error in counting function points is the error of omission (not including everything). It is important that the application team review the function point count for completeness and accuracy.
  15. Does the total Value Adjustment Factor agree with other projects? The total Value Adjustment Factor should fall within +/- 5 percent of the average value adjustment factor for all applications reviewed. If it falls outside this range, a written explanation needs to be included with the function point count. For example, if the average VAF was 1.05, the VAF would have to be between 1.00 and 1.10.
  16. Does each of the 14 General System Characteristics fall within the range of other projects? Each General System Characteristic should be within 1 point of the average GSC. For example, if the average for a particular GSC was 2.0, then the GSC would have to be 1, 2, or 3. If a GSC is outside this range, a written explanation needs to be included with the function point count. (A sketch of the checks in steps 15 and 16 follows this list.)
  17. Have all the assumptions associated with the function point count been documented?
    All assumptions should be documented so they can be reviewed at a later date if necessary.
  18. Are these assumptions consistent with other projects?
  19. Have all the assumptions impacting function point counting been forwarded to the central function point group?
    All assumptions should be reviewed by the central function point group.
  20. Has the count been reviewed by an independent Certified Function Point Specialist?
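Steps 15 and 16 reduce to two simple range checks, sketched below. The VAF formula (0.65 plus 0.01 times the sum of the 14 General System Characteristics, each rated 0 to 5) is the standard IFPUG calculation; the ratings, portfolio averages, and average VAF are illustrative assumptions.

    def value_adjustment_factor(gscs):
        """IFPUG VAF: 0.65 + 0.01 * (sum of the 14 GSCs, each rated 0-5)."""
        assert len(gscs) == 14
        return 0.65 + 0.01 * sum(gscs)

    # Hypothetical ratings for the audited count and portfolio-wide averages.
    audited_gscs = [3, 2, 4, 1, 0, 3, 5, 2, 3, 1, 2, 4, 3, 2]
    average_gscs = [2.5, 2.0, 3.5, 1.5, 2.0, 3.0, 4.5, 2.0,
                    3.0, 1.0, 2.0, 3.5, 3.0, 2.0]
    average_vaf = 1.02  # portfolio average VAF (illustrative)

    # Step 15: flag a VAF more than 0.05 from the portfolio average.
    vaf = value_adjustment_factor(audited_gscs)
    if abs(vaf - average_vaf) > 0.05:
        print(f"VAF {vaf:.2f} is outside the expected range; explain in writing")

    # Step 16: flag any GSC more than 1 point from the portfolio average.
    for i, (got, avg) in enumerate(zip(audited_gscs, average_gscs), start=1):
        if abs(got - avg) > 1:
            print(f"GSC {i}: rated {got}, average {avg}; explain in writing")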

Conclusions

In almost every sophisticated industry there are auditors and inspectors; function point analysis should be no different. There is no need to fear audits or auditors. Done appropriately, they provide valuable feedback on the function point counting process. The audit report may allow you to correct any inaccurate function point counts and to re-evaluate the decisions you have made to date.
