
               RECOMMENDATIONS FOR FURTHER FUNCTION POINT RESEARCH

By Guest Author
Charley Tichenor, Ph.D.

 


Abstract

Function point analysis is a field ripe for research.  Although many of the early questions about the soundness of the function point methodology were resolved in the early 1990s and incorporated into the International Function Point Users Group’s Counting Practices Manual in 1994, there are still many opportunities to continuously improve the methodology.  This paper offers the author’s opinions regarding four areas for further function point research.  Two higher priority questions need to be answered.  Should the General Systems Characteristics be modified?  Can we expand the function point analysis of algorithms?  Also, two small gaps may exist in the current methodology.  Should there be Super EIs, Super EOs, and Super EQs?  Do all function points have the same size?  Answers to these questions may serve to strengthen the quality of our function point counts, reduce the variances we experience in our software business forecasting models, and improve our function point customers’ confidence.

Should the General Systems Characteristics Be Modified?

Three important software metrics are forms of “cost per function point,” “days to deliver a function point,” and “defects per function point.”  These are important because, based on historical information, one can forecast the cost, schedule, and quality of future software development projects.  Getting the function point count correct is essential to making these metrics work.  Consider the following simplistic example using the “cost per function point” metric.

Suppose a firm contracts with a software development vendor, the XYZ Company, to develop a large software application of 1000 unadjusted function points. One primary purpose of this application is to produce a number of reports. The firm will not accept a bid for over $1000 per function point.

The firm presents two optional requirements strategies. 

Strategy 1 is to build a batch application.  All external inputs could be on tape, they could be batch processed at night, and reports could be generated by line printer the next day.  A function point analyst and a project representative determine that the sum of the General Systems Characteristics (GSCs) produces a Value Adjustment Factor (VAF) of .85.  The forecast function point count is 850.  The forecast cost of this option is $850,000.

Strategy 2 is to build a real time application.  All external inputs could be input by screen in real time, and reports could be generated in real time.  The software would be designed to maximize human ergonomics, have the fastest reasonable response time, employ considerable data communications, etc.  A function point analyst and a project representative determine that the sum of the GSCs produces a VAF of 1.15.  The forecast function point count is 1,150.  The forecast cost of this option is $1,150,000.
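The arithmetic behind these two forecasts can be sketched as follows.  This is a minimal illustration only; the VAF formula (0.65 plus 0.01 times the total degrees of influence of the 14 GSCs) is the standard CPM formula, and the dollar rate is the $1000-per-function-point ceiling from the example.

```python
# Minimal sketch of the two strategy forecasts.  Per the CPM,
# VAF = 0.65 + 0.01 * (total degrees of influence of the 14 GSCs);
# here the two VAF values from the example are used directly.

UNADJUSTED_FP = 1000        # size of the contracted application
RATE_PER_FP = 1000          # dollars per function point (the firm's ceiling)

for strategy, vaf in [("Strategy 1 (batch)", 0.85), ("Strategy 2 (real time)", 1.15)]:
    adjusted_fp = UNADJUSTED_FP * vaf
    cost = adjusted_fp * RATE_PER_FP
    print(f"{strategy}: {adjusted_fp:.0f} adjusted FP, forecast cost ${cost:,.0f}")

# Strategy 1 (batch): 850 adjusted FP, forecast cost $850,000
# Strategy 2 (real time): 1150 adjusted FP, forecast cost $1,150,000
```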

How much should the firm offer to pay for the software?  Do factors such as real time operations, maximizing human ergonomics, having the fastest reasonable response time, employing considerable data communications, etc., represent added functionality to the user, or are they completely irrelevant?  Or are they relevant only to a degree?  In the industry today, some would argue that for this simple example $1,150,000 is the correct bid for the real time option and $850,000 for the batch option.  Others would argue that $1,000,000 is the correct bid for both options because the GSCs are irrelevant.  Others suggest that there is a middle ground.

One perspective on this situation comes from the International Function Point Users Group (IFPUG): employ its Counting Practices Manual (CPM), now in version 4.1.  Anyone counting function points who advertises full IFPUG compliance must use it.  Personally, I have obtained excellent correlations between work effort and function point counts using the GSCs.

However, others are asking questions.  For example, in September 1999 a Master’s thesis was submitted by Captain Joseph Willoughby and 1st Lieutenant Michael D. Prater to the Air Force Institute of Technology entitled “The Adequacy of the Fourteen General Systems Characteristics as Function Point Adjustment Factors.” [1]  One purpose of this study was to distribute and analyze the results of a survey to “... measure attitudes regarding the use of GSCs in function point sizing.” [2]  Among the study’s conclusions was that many practitioners feel the GSCs may not reflect current software technology, as the GSCs have remained virtually unchanged since at least 1991 while technology has changed markedly.

Others just do not use the GSCs.  At least one major company does not directly include them as inputs into its popular commercial software estimation tool.  A colleague at that company told me that he honestly feels that the GSCs do not add value to the function point count.  Also, the current ISO consideration of function points does not include them.

My personal opinions aside, I think it is time to reevaluate the GSCs quantitatively.  This could be an ideal topic for a Ph.D. dissertation.  One approach might be as follows.

·        Quantitatively reevaluate the 14 GSCs.

·        Determine if all are needed “as is,” if some need to be changed, if some need to be deleted, and/or if new ones need to be included.

·        Show convincing statistical soundness of the findings -- not in terms of “customer satisfaction,” but in terms of mathematical modeling.

·        Show the effect of any recommended changes.

The Ph.D. committee might include a Statistics professor, a function point practitioner representing or coordinating with IFPUG, and a professor from the University’s Business school.  I recommend keeping the dissertation process formally distinct from the IFPUG acceptance process.  First ensure the academic soundness of the dissertation.  If it is academically sound and successfully defended, then confer the Ph.D. in accordance with regular University policy.  Next, submit the dissertation to the IFPUG Counting Practices Committee as a separate action.  For best results, however, coordination with the Counting Practices Committee would be critical throughout the process.


Can We Expand the Function Point Analysis of Algorithms?

An algorithm is a series of equations solved in a logical sequence to produce an external output.  Function point counters, software developers, and others occasionally encounter algorithms embedded in software.  Sizing these algorithms using function point analysis can result in more accurate measures of application size and improve forecasts of cost, schedule, and quality.  It can also improve the confidence of developers who are new to the function point methodology as they see that all of their mathematical work is recognized and measured.

I was very fortunate to have served as technical advisor to Nancy Redgate as she successfully found a general solution to the problem of measuring the size and complexity of algorithms using function points. [3]  This effort was through an independent study course in Operations Research she completed for her Master’s degree at Rensselaer Polytechnic Institute.  We later extended this concept into sizing simple single calculus integration formulas. [4]

I feel that these were excellent and academically sound initial steps, but much more research needs to be completed.  Function point analysis needs to be applied to more complex single integration formulas, double integrals, etc.  It needs to be researched for applicability to differential equations.  Also, examples need to be provided for counting numerous business quantitative methods algorithms taught in graduate schools of business.  A corresponding white paper could be submitted to IFPUG for general distribution.

I strongly recommend that one serious constraint be placed on the research: no IFPUG counting rule should be changed, and no “patches” should be added.  Put another way, the procedures must be consistent with the CPM.

Should There Be Super EIs, Super EOs, and Super EQs?

A controversial topic in the function point industry is the Super File Rule.  Here is the background, at least from my perspective.

In the manufacturing industry, customers are billed (in simple situations) based on the number of units of product they order.  For example, if customers are billed at a straight rate, a customer ordering 1,000 cases of beer receives a bill from the brewery that is ten times higher than that of a customer ordering 100 cases.

Software developers can bill their customers based on a given dollar per function point rate.  As the number of function points delivered increases, the dollars charged also should increase.

Recall from the function type complexity matrices in the CPM that a “high” ILF has at least 2 RETs and more than 50 DETs.  This is valued at 15 unadjusted function points.  Suppose that the VAF is calculated as 1.0.  Then, if a customer orders an ILF with, say, 3 RETs and 60 DETs, this would be counted as having 15 function points.

Suppose, as a simple case, a software developer averages five days to develop a function point.  The developer would then schedule about 75 days to develop this ILF.  If the developer charged its customer $1000 to develop each function point, then this ILF would be billed at $15,000.

Now suppose another customer wants a large master file developed as an ILF.  For this example, suppose that the large master file contains 600 DETs and 30 RETs.  Since the number of DETs and RETs are each ten times the amounts in the first example, one might initially expect the developer to charge a much higher price to develop this large master file.  However, according to the CPM, such a large master file must still be counted as a high -- having at least 2 RETs and more than 50 DETs.  If the developer were contracted to charge following the rules in the CPM, it would have to charge $15,000 for this large master file as well, and promise to schedule its development at 75 days.  This is clearly an impossible situation for the developer.
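To make the comparison concrete, here is a minimal sketch assuming the standard IFPUG ILF complexity matrix (low = 7, average = 10, high = 15 unadjusted function points) and the rate and schedule figures used above; it shows that both files rate identically under the CPM.

```python
# Minimal sketch: both the 3 RET / 60 DET file and the 30 RET / 600 DET file
# rate "high" under the assumed standard IFPUG ILF complexity matrix.

def ilf_unadjusted_fp(dets, rets):
    """Rate an Internal Logical File with the IFPUG complexity matrix."""
    if rets == 1:
        complexity = "low" if dets <= 50 else "average"
    elif rets <= 5:
        complexity = "low" if dets <= 19 else ("average" if dets <= 50 else "high")
    else:
        complexity = "average" if dets <= 19 else "high"
    return {"low": 7, "average": 10, "high": 15}[complexity]

RATE_PER_FP = 1000      # dollars per function point, from the example
DAYS_PER_FP = 5         # schedule assumption, from the example

for dets, rets in [(60, 3), (600, 30)]:
    fp = ilf_unadjusted_fp(dets, rets)
    print(f"{rets} RETs / {dets} DETs -> {fp} FP, "
          f"${fp * RATE_PER_FP:,}, {fp * DAYS_PER_FP} days")

# Both files rate "high": 15 FP, $15,000, 75 days.
```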

I believe that what is needed here is a special rule to account for the additional functionality inherent in large ILFs -- such as large master files with more than 100 DETs.  This special rule must correlate DETs and RETs to function point count for ILFs and EIFs to a very high degree of statistical significance.  This Super File Rule comes from CPM 3.4 (it was not included in subsequent editions), and is basically as follows.

If a countable ILF or EIF contains more than 100 DETs, then count each RET as a unique ILF or EIF.

For example, suppose a master file contains 300 DETs, comprised of 5 RETs of 60 DETs each.  Using the Super File Rule, count 5 average ILFs for 50 function points.  Otherwise, one would count a single high ILF for 15 function points. 
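As a minimal sketch, reusing the ilf_unadjusted_fp() helper from the previous sketch and following the example’s split of 5 RETs at 60 DETs each, the rule might be expressed like this:

```python
# Minimal sketch of the Super File Rule applied to the master file example.
# Reuses ilf_unadjusted_fp() from the previous sketch.

def super_file_fp(dets_per_ret):
    """Count a large ILF/EIF under the Super File Rule: if the file has more
    than 100 DETs, count each RET as a separate one-RET file."""
    total_dets = sum(dets_per_ret)
    if total_dets > 100:
        # Each RET becomes its own file and is rated individually.
        return sum(ilf_unadjusted_fp(dets, rets=1) for dets in dets_per_ret)
    return ilf_unadjusted_fp(total_dets, rets=len(dets_per_ret))

master_file = [60] * 5                 # 5 RETs of 60 DETs each, 300 DETs total
print(super_file_fp(master_file))      # 50 FP (5 average ILFs at 10 FP each)
print(ilf_unadjusted_fp(300, 5))       # 15 FP (a single high ILF) without the rule
```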

Which counting method makes more sense in this case?  Statistically, I would argue that the case for the Super File Rule is convincing.  I use it on the occasions when it presents itself, and footnote my counts accordingly.

If one could consider Super ILFs and EIFs, could the same thinking apply to “Super” EIs, EOs, and EQs?  For example, a high EO having, say, 4 FTRs and 20 DETs has 7 function points.  Following the example above, a forecast of the associated development cost is $7,000.  What about an EO of 4 FTRs and 40 DETs, or larger: is $7,000 also a reasonable cost for this EO?

Research could be conducted to answer this question.  Its conclusions should be firmly grounded statistically.  One direction might be to start by considering a multiple regression problem.  Every combination of DETs and FTRs could be put into a spreadsheet as independent variables (the first independent variable being the number of DETs and the second being the associated number of FTRs).  The dependent variable would be the unadjusted function point count for each combination.  (For EIs, the regression I ran used up to 30 DETs and 3 FTRs.  For EOs and EQs, the regression was run up to 38 DETs and 4 FTRs.  These DET counts are twice the highest values in the associated complexity matrices, analogous to the Super File Rule.)  Then, perform the regression.  When this is done, one obtains the following regression equations.

(1) External Inputs:  2.19 + (.065DET) + (.802FTR)

(2) External Outputs:  3.12 + (.072DET) + (.405FTR)

(3) External Inquiries: 2.16 + (.061DET) + (.580FTR)

It would be interesting to learn whether these equations can be soundly extrapolated past DET counts higher than twice the counts in the complexity matrices, and to FTR counts of various sizes.  If so, it would be interesting to have a new “Super” EI, EO, and EQ rule that is both statistically significant and passes the common sense test.  Such a rule would improve the quality of function point counting in these exceptional situations.
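As a minimal sketch of this regression approach for External Outputs, one might proceed as follows.  The sketch assumes the standard EO complexity matrix (low = 4, average = 5, high = 7 unadjusted function points) and ordinary least squares, so the coefficients it produces will differ somewhat from equation (2) depending on the exact DET/FTR ranges included; it also applies the paper’s equation (2) to the Super EO question raised above.

```python
# Sketch of the regression setup (not the author's original data set): build
# every DET/FTR combination, rate it with the assumed standard IFPUG External
# Output complexity matrix, and fit ordinary least squares.
import numpy as np

def eo_unadjusted_fp(dets, ftrs):
    """Rate an External Output with the IFPUG complexity matrix."""
    if ftrs <= 1:
        complexity = "low" if dets <= 19 else "average"
    elif ftrs <= 3:
        complexity = "low" if dets <= 5 else ("average" if dets <= 19 else "high")
    else:
        complexity = "average" if dets <= 5 else "high"
    return {"low": 4, "average": 5, "high": 7}[complexity]

# Every combination up to twice the highest DET band (38 DETs) and 4 FTRs.
rows = [(det, ftr, eo_unadjusted_fp(det, ftr))
        for det in range(1, 39)
        for ftr in range(1, 5)]
data = np.array(rows, dtype=float)

# Ordinary least squares: FP ~ a + b1*DET + b2*FTR
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])
y = data[:, 2]
(a, b_det, b_ftr), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"FP = {a:.2f} + {b_det:.3f}*DET + {b_ftr:.3f}*FTR")

# Applying the paper's equation (2) to the Super EO question above:
eo_fp = lambda dets, ftrs: 3.12 + 0.072 * dets + 0.405 * ftrs
print(eo_fp(20, 4))   # roughly 6.2 FP for the ordinary high EO
print(eo_fp(40, 4))   # roughly 7.6 FP for the twice-as-wide EO
```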

Do All Function Points Have the Same Size?

Reconsider equations (1), (2), and (3) above.  The same line of reasoning was used to obtain the corresponding ILF and EIF equations below.

 (4) Internal Logical Files:  4.72 + (.084DET) + (.792RET)

(5) External Interface Files:  3.66 + (.052DET) + (.490RET)

Notice that each of these five function types has a different, not identical, equation.  Does this mean that a function point of one function type is actually a different size than a function point of another type?  Could this be one source of some of what is currently viewed as unexplained variance in productivity rate correlations, for example, when using “Staffing Required” on the x axis and “Function Points Delivered” on the y axis?  Or is this a difference important only at the “subatomic” function point level that washes out statistically as function point counts increase in size?  Should function point counts of applications “heavy” in EOs and EQs be adjusted in comparison to applications “balanced” across the five function types?  And what is meant by “heavy” and “balanced”?
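As a quick numerical illustration of the question, the coefficients from equations (1) through (5) can be evaluated at an arbitrary common point (20 DETs and 2 FTRs or RETs, chosen only to make the per-type differences visible):

```python
# Minimal sketch comparing the five regression equations from the text at a
# single illustrative point; the input values are arbitrary.

EQUATIONS = {
    "EI":  (2.19, 0.065, 0.802),
    "EO":  (3.12, 0.072, 0.405),
    "EQ":  (2.16, 0.061, 0.580),
    "ILF": (4.72, 0.084, 0.792),
    "EIF": (3.66, 0.052, 0.490),
}

dets, refs = 20, 2   # refs = FTRs for transactions, RETs for files
for ftype, (intercept, b_det, b_ref) in EQUATIONS.items():
    fp = intercept + b_det * dets + b_ref * refs
    print(f"{ftype}: {fp:.2f} FP")

# EI: 5.09, EO: 5.37, EQ: 4.54, ILF: 7.98, EIF: 5.68
```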

Conclusions

There are many opportunities for both students and professionals to conduct meaningful function point research.  The topics suggested in this paper all have the potential of improving the accuracy of function point counts and reducing the variances we experience in our software business forecasting models based on function point measures of software size.  Incremental continuous improvements also make it easier to sell function points to those new to software sizing, as they see that function points more easily pass their common sense test.  Sound research results from either students or professionals can be presented at an enjoyable annual IFPUG conference.

About the Author

Charley Tichenor, Ph.D., serves as an information technology operations research analyst and as an adjunct professor at Strayer University's Anne Arundel, MD campus.  He has a bachelor's degree in business administration from Ohio State University, a master's degree in business administration from Virginia Polytechnic Institute and State University, and a doctorate in business from Berne University.  Dr. Tichenor has 17 years of IT experience.  He lives in Springfield, VA with his wife and son.  His hobbies include amateur astronomy and guitar, and he holds a 2nd degree black belt in Tae Kwon Do.  Dr. Tichenor can be reached via email at tichenor@erols.com.

References

[1]  Willoughby, Joseph, and Prater, Michael D. (September 1999). “The Adequacy of the Fourteen General Systems Characteristics as Function Point Adjustment Factors,” Master’s thesis, Air Force Institute of Technology. Published on the web in the membership area of the IFPUG web site, www.ifpug.org.

[2]  Ibid.  Page 35.

[3]  Redgate, Nancy, and Tichenor, Charles B. (February 2001). “Measure Size, Complexity of Algorithms Using Function Points,” CrossTalk: The Journal of Defense Software Engineering, www.stsc.hill.af.mil. Paper presented at the Fall 2001 IFPUG conference.

[4]  Redgate, Nancy, and Tichenor, Charles B. (June 2002). “Measuring Calculus Integration Formulas Using Function Point Analysis,” CrossTalk: The Journal of Defense Software Engineering, www.stsc.hill.af.mil.


Appropriate Citation For This Article:

Tichenor, Charley. "Recommendations for Further Function Point Research." SoftwareMetrics.Com  15 July 2002 <http://www.SoftwareMetrics.Com/Articles/Tichenor.htm>
