In Proceedings


Conference publications:

2017

  • F. Zampetti, C. Noiseux, G. Antoniol, F. Khomh, and M. D. Penta, “Recommending when design technical debt should be self-admitted,” in ICSME: the international conference on software maintenance and evolution, 2017, to appear.
    [Bibtex]
    @inproceedings{Cedric17,
    author = {Fiorella Zampetti and Cédric Noiseux and Giuliano Antoniol and Foutse Khomh and Massimiliano Di Penta
    },
    title = {Recommending when Design Technical Debt Should be Self-Admitted},
    booktitle = {ICSME: The International Conference on Software Maintenance and Evolution},
    pages = {To Appear},
    year = {2017}
    }
  • P. Galinier, S. Kpodjedo, and G. Antoniol, “A penalty-based tabu search for constrained covering arrays,” in GECCO: the genetic and evolutionary computation conference, 2017, to appear.
    [Bibtex]
    @inproceedings{SeglaKPG17,
    author = {Philippe Galinier and
    Segla Kpodjedo and
    Giuliano Antoniol
    },
    title = {A penalty-based Tabu search for constrained covering arrays},
    booktitle = {GECCO: The Genetic and Evolutionary Computation Conference},
    pages = {To Appear},
    year = {2017}
    }
  • R. Saborido, F. Khomh, G. Antoniol, and Y.-G. Guéhéneuc, “Comprehension of ads-supported and paid Android applications: are they different?,” in Proceedings of the 25th international conference on program comprehension, ICPC 2017, Buenos Aires, Argentina, May 22-23, 2017, 2017, pp. 143-153.
    [Bibtex]
    @inproceedings{SaboridoKAG17,
    author = {Rub{\'{e}}n Saborido and
    Foutse Khomh and
    Giuliano Antoniol and
    Yann{-}Ga{\"{e}}l Gu{\'{e}}h{\'{e}}neuc},
    title = {Comprehension of ads-supported and paid Android applications: are
    they different?},
    booktitle = {Proceedings of the 25th International Conference on Program Comprehension,
    {ICPC} 2017, Buenos Aires, Argentina, May 22-23, 2017},
    pages = {143--153},
    year = {2017}
    }
  • M. Moussa, M. Di Penta, G. Antoniol, and G. Beltrame, “ACCUSE: helping users to minimize Android app privacy concerns,” in 4th IEEE/ACM international conference on mobile software engineering and systems, 2017.
    [Bibtex]
    @InProceedings{MoussaACCUSE2017,
    title = {ACCUSE: Helping Users to Minimize Android App Privacy Concerns},
    author = {Moussa, Majda and Di Penta, Massimiliano and Antoniol, Giuliano and Beltrame, Giovanni},
    booktitle = {4th IEEE/ACM International Conference on Mobile Software Engineering and Systems},
    year = {2017},
    organization = {IEEE, ACM},
    keyword = {accepted},
    note = {Accepted for publication}
    }
  • [DOI] L. An, O. Mlouki, F. Khomh, and G. Antoniol, “Stack Overflow: A code laundering platform?,” in IEEE 24th international conference on software analysis, evolution and reengineering, SANER 2017, Klagenfurt, Austria, February 20-24, 2017, 2017, pp. 283-293.
    [Bibtex]
    @inproceedings{AnMKA17,
    author = {Le An and
    Ons Mlouki and
    Foutse Khomh and
    Giuliano Antoniol},
    title = {Stack Overflow: {A} code laundering platform?},
    booktitle = {{IEEE} 24th International Conference on Software Analysis, Evolution
    and Reengineering, {SANER} 2017, Klagenfurt, Austria, February 20-24,
    2017},
    pages = {283--293},
    year = {2017},
    doi = {10.1109/SANER.2017.7884629}
    }
  • [DOI] A. Saboury, P. Musavi, F. Khomh, and G. Antoniol, “An empirical study of code smells in JavaScript projects,” in IEEE 24th international conference on software analysis, evolution and reengineering, SANER 2017, Klagenfurt, Austria, February 20-24, 2017, 2017, pp. 294-305.
    [Bibtex]
    @inproceedings{SabouryMKA17,
    author = {Amir Saboury and
    Pooya Musavi and
    Foutse Khomh and
    Giuliano Antoniol},
    title = {An empirical study of code smells in JavaScript projects},
    booktitle = {{IEEE} 24th International Conference on Software Analysis, Evolution
    and Reengineering, {SANER} 2017, Klagenfurt, Austria, February 20-24,
    2017},
    pages = {294--305},
    doi = {10.1109/SANER.2017.7884630},
    year = {2017}
    }

2016

  • [DOI] R. Morales, A. Sabane, P. Musavi, F. Khomh, F. Chicano, and G. Antoniol, “Finding the best compromise between design quality and testing effort during refactoring,” in SANER, 2016, pp. 24-35.
    [Abstract]

    Anti-patterns are poor design choices that hinder code evolution and understandability. Practitioners perform refactorings, that is, semantic-preserving code transformations, to correct anti-patterns and to improve design quality. However, manual refactoring is a consuming task and a heavy burden for developers who have to struggle to complete their coding tasks and maintain the design quality of the system at the same time. For that reason, researchers and practitioners have proposed several approaches to bring automated support to developers, with solutions that range from single anti-pattern correction to multiobjective solutions. The latter attempt to reduce refactoring effort, or to improve semantic similarity between classes and methods, in addition to removing anti-patterns. To the best of our knowledge none of the previous approaches has considered the impact of refactoring on another important aspect of software development, which is the testing effort. In this paper we propose a novel search-based multiobjective approach for removing five well-known anti-patterns and minimizing testing effort. To assess the effectiveness of our proposed approach, we implement three different multiobjective metaheuristics (NSGA-II, SPEA2, MOCell) and apply them to a benchmark comprised of four open-source systems. Results show that MOCell is the metaheuristic that provides the best performance.

    [Bibtex]

    @inproceedings{rodrigo2016saner,
    author = { Rodrigo Morales and Aminata Sabane and Pooya Musavi and Foutse Khomh and Francisco Chicano and Giuliano Antoniol},
    title = {Finding the Best Compromise Between Design Quality and Testing Effort During Refactoring},
    booktitle = {SANER},
    pages = {24--35},
    year = {2016},
    crossref = {DBLP:conf/wcre/2016},
    url = {http://dx.doi.org/10.1109/SANER.2016.23},
    doi = {10.1109/SANER.2016.23},
    abstract = {
    Anti-patterns are poor design choices that hinder code evolution, and understandability. Practitioners perform refactoring, that are semantic-preserving-code transformations, to correct anti-patterns and to improve design quality. However, manual refactoring is a consuming task and a heavy burden for developers who have to struggle to complete their coding tasks and maintain the design quality of the system at the same time. For that reason, researchers and practitioners have proposed several approaches to bring automated support to developers, with solutions that ranges from single anti-patterns correction, to multiobjective solutions. The latter attempt to reduce refactoring effort, or to improve semantic similarity between classes and methods in addition to remove anti-patterns. To the best of our knowledge none of the previous approaches have considered the impact of refactoring on another important aspect of software development, which is the testing effort. In this paper we propose a novel search-based multiobjective approach for removing five well-know anti-patterns and minimizing testing effort. To assess the effectiveness of our proposed approach, we implement three different multiobjective metaheuristics (NSGA-II, SPEA2, MOCell) and apply them to a benchmark comprised of four open-source systems. Results show that MOCell is the metaheuristic that provides the best performance.
    },
    }
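    A minimal sketch of the Pareto-dominance test underlying such a multiobjective search, written in Python with invented candidate names and scores (it is not the paper's NSGA-II/SPEA2/MOCell setup): a candidate survives if no other candidate is at least as good on design quality (to maximize) and testing effort (to minimize) and strictly better on one of them.

    def dominates(a, b):
        """True if a is no worse than b on both objectives and strictly better on one."""
        no_worse = a["quality"] >= b["quality"] and a["test_effort"] <= b["test_effort"]
        better = a["quality"] > b["quality"] or a["test_effort"] < b["test_effort"]
        return no_worse and better

    def pareto_front(solutions):
        return [s for s in solutions
                if not any(dominates(o, s) for o in solutions if o is not s)]

    # Hypothetical refactoring sequences: design quality is to be maximized,
    # estimated testing effort is to be minimized.
    candidates = [
        {"name": "seq-A", "quality": 0.82, "test_effort": 140},
        {"name": "seq-B", "quality": 0.75, "test_effort": 90},
        {"name": "seq-C", "quality": 0.60, "test_effort": 200},  # dominated by both others
    ]
    print([s["name"] for s in pareto_front(candidates)])  # ['seq-A', 'seq-B']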
  • [DOI] O. Mlouki, F. Khomh, and G. Antoniol, “On the detection of licenses violations in Android ecosystem,” in SANER, 2016, pp. 382-392.
    [Abstract]

    Mobile applications (apps) developers often reuse code from existing libraries and frameworks in order to reduce development costs. However, these libraries and frameworks are governed by licenses with which developers must comply. A failure to comply with a license is likely to result in penalties and fines. In this paper we define a three-step approach that helps to identify licenses used in a system and thus to detect license violations. We validate our approach on a set of apps from the F-Droid market. We first identify the most common license used in mobile open source apps. Then we propose our model that identifies licenses across different categories of mobile apps, some kinds of violations and licence changes in the process of software

    [Bibtex]

    @inproceedings{ons2016saner,
    author = {Ons Mlouki and Foutse Khomh and Giuliano Antoniol},
    title = {On the Detection of Licenses Violations in Android Ecosystem},
    booktitle = {SANER},
    year = {2016},
    pages = {382-392},
    url = {http://dx.doi.org/10.1109/SANER.2016.73},
    doi = {10.1109/SANER.2016.73},
    abstract = {
    Mobile applications (apps), developers often reuse code from existing libraries and frameworks in order to reduce development costs. However, these libraries and frameworks are governed by licenses to which developers must comply. A failure to comply with a license is likely to result in penalties and fines. In this paper we define a three steps approach that helps to identify licenses used in a system and thus to detect licenses violations. We validate our approach in a set of apps from the F-droid market1 . We identify first the most common license used in mobile open source apps. Then we propose our model that identify licenses across different categories of mobile apps, some kinds of violation and licence changes in the process of software
    },
    }
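    As a rough illustration of the license-identification step sketched in the abstract, the snippet below scans source files for a few well-known license phrases. The phrase patterns and file extensions are simplifying assumptions for this sketch, not the detection model evaluated in the paper.

    import os, re

    LICENSE_PATTERNS = {
        "GPL-3.0": re.compile(r"GNU General Public License.{0,40}version 3", re.I | re.S),
        "Apache-2.0": re.compile(r"Apache License,?\s*Version 2\.0", re.I),
        "MIT": re.compile(r"Permission is hereby granted, free of charge", re.I),
    }

    def detect_licenses(root):
        """Map each matched license to the files whose header mentions it."""
        found = {}
        for dirpath, _, files in os.walk(root):
            for name in files:
                if not name.endswith((".java", ".c", ".py", ".txt")):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        head = f.read(4000)  # license headers sit at the top of the file
                except OSError:
                    continue
                for lic, pat in LICENSE_PATTERNS.items():
                    if pat.search(head):
                        found.setdefault(lic, []).append(path)
        return found

    if __name__ == "__main__":
        for lic, paths in detect_licenses(".").items():
            print(lic, "->", len(paths), "file(s)")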
  • [DOI] R. Saborido-Infantes, G. Beltrame, F. Khomh, E. Alba, and G. Antoniol, “Optimizing user experience in choosing Android applications,” in SANER, 2016, pp. 438-448.
    [Abstract]

    Why is my cell phone battery already low? How did I use almost all the data of my monthly Internet plan? Is my recently released new application more efficient than similar competing applications? These are not easy questions to answer. Different applications implementing similar or identical functionalities may have different energy consumptions. In this paper, we present a recommendation system aimed at helping users and developers alike. We help users to choose optimal sets of applications belonging to different categories (e.g., browsers, e-mails, cameras) while minimizing energy consumption, transmitted data, and maximizing application rating. We also help developers by showing the relative placement of their application’s efficiency with respect to selected others. When the optimal set of applications is computed, it is leveraged to position a given application with respect to the optimal, median and worst application in its category (e.g., browsers). Out of eight categories we selected 144 applications, manually defined typical execution scenarios, collected the relevant data, and computed the Pareto optimal front solving a multi-objective optimization problem. We report evidence that, on the one hand, ratings do not correlate with energy efficiency and data frugality. On the other hand, we show that it is possible to help developers understand how far a new Android application’s power consumption and network usage are with respect to optimal applications in the same category. From the user perspective, we show that by choosing optimal sets of applications, power consumption and network usage can be reduced by 16.61% and 40.17%, respectively, in comparison to choosing the set of applications that maximizes only the rating.

    [Bibtex]

    @inproceedings{ruben2016saner,
    author = {Ruben Saborido-Infantes and Giovanni Beltrame and Foutse Khomh and Enrique Alba and Giuliano Antoniol},
    title = {Optimizing User Experience in Choosing Android Applications},
    booktitle = {SANER},
    year = {2016},
    pages = {438-448},
    url = {http://dx.doi.org/10.1109/SANER.2016.64},
    doi = {10.1109/SANER.2016.64},
    abstract = {
    Why is my cell phone battery already low? How did I use almost all the data of my monthly Internet plan? Is my recently released new application more efficient than similar competing applications? These are not easy questions to answer. Different applications implementing similar or identical functionalities may have different energy consumptions.
    In this paper, we present a recommendation system aimed at helping users and developers alike. We help users to choose optimal sets of applications belonging to different categories (eg. browsers, e-mails, cameras) while minimizing energy consumption, transmitted data, and maximizing application rating. We also help developers by showing the relative placement of their application's efficiency with respect to selected others. When the optimal set of applications is computed, it is leveraged to position a given application with respect to the optimal, median and worst application in its category (eg. browsers).
    Out of eight categories we selected 144 applications, manually defined typical execution scenarios, collected the relevant data, and computed the Pareto optimal front solving a multi-objective optimization problem. We report evidence that, on the one hand, ratings do not correlate with energy efficiency and data frugality. On the other hand, we show that it is possible to help developers understanding how far is a new Android application power consumption and network usage with respect to optimal applications in the same category.
    From the user perspective, we show that choosing optimal sets of applications, power consumption and network usage can be reduced by
    16.61\% and 40.17\%, respectively, in comparison to choosing the set of applications that maximizes only the rating.
    },
    }
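    A small sketch of the selection idea: keep the apps that are Pareto-optimal when minimizing energy and network usage and maximizing rating, then compare with a purely rating-based pick. The app names and measurements below are invented, not the study's data.

    def dominated(app, other):
        """True if other is at least as good on all three objectives and strictly better on one."""
        return (other["energy"] <= app["energy"] and other["data"] <= app["data"]
                and other["rating"] >= app["rating"]
                and (other["energy"] < app["energy"] or other["data"] < app["data"]
                     or other["rating"] > app["rating"]))

    def pareto_optimal(apps):
        return [a for a in apps if not any(dominated(a, o) for o in apps if o is not a)]

    browsers = [
        {"name": "browser-1", "energy": 120, "data": 35, "rating": 4.6},
        {"name": "browser-2", "energy": 80, "data": 20, "rating": 4.4},
        {"name": "browser-3", "energy": 150, "data": 60, "rating": 4.1},  # dominated
    ]
    best_by_rating = max(browsers, key=lambda a: a["rating"])
    pick = min(pareto_optimal(browsers), key=lambda a: a["energy"])
    saving = 100 * (best_by_rating["energy"] - pick["energy"]) / best_by_rating["energy"]
    print("Pareto-optimal set:", [a["name"] for a in pareto_optimal(browsers)])
    print("energy saved vs rating-only pick: %.1f%%" % saving)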

2015

  • D. Martin, J. Cordy, B. Adams, and G. Antoniol, “Make it simple – an empirical analysis of GNU Make feature use in open source projects,” in ICPC, 2015, pp. 207-217.
    [Abstract]

    Make is one of the oldest build technologies and is still widely used today, whether by manually writing Makefiles, or by generating them using tools like Autotools and CMake. Despite its conceptual simplicity, modern Make implementations such as GNU Make have become very complex languages, featuring functions, macros, lazy variable assignments and (in GNU Make 4.0) the Guile embedded scripting language. Since we are interested in understanding how widespread such complex language features are, this paper studies the use of Make features in almost 20,000 Makefiles, comprised of over 8.4 million lines, from more than 350 different open source projects. We look at the popularity of features and the difference between hand-written Makefiles and those generated using various tools. We find that generated Makefiles use only a core set of features and that more advanced features (such as function calls) are used very little, and almost exclusively in hand-written Makefiles.

    [Bibtex]

    @inproceedings{doug2015,
    author = {Douglas Martin and James Cordy and Bram Adams and Giuliano Antoniol},
    title = {Make It Simple - An Empirical Analysis of GNU Make Feature Use in Open Source Projects},
    booktitle = {ICPC},
    year = {2015},
    pages = {207-217},
    abstract = {
    Make is one of the oldest build technologies and is
    still widely used today, whether by manually writing Makefiles,
    or by generating them using tools like Autotools and CMake.
    Despite its conceptual simplicity, modern Make implementations
    such as GNU Make have become very complex languages,
    featuring functions, macros, lazy variable assignments and (in
    GNU Make 4.0) the Guile embedded scripting language. Since
    we are interested in understanding how widespread such complex
    language features are, this paper studies the use of Make features
    in almost 20,000 Makefiles, comprised of over 8.4 million lines,
    from more than 350 different open source projects. We look at the
    popularity of features and the difference between hand-written
    Makefiles and those generated using various tools. We find that
    generated Makefiles use only a core set of features and that more
    advanced features (such as function calls) are used very little, and
    almost exclusively in hand-written Makefiles.
    }
    }
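    In the spirit of the study above, a back-of-the-envelope way to tally GNU Make feature use is to scan Makefiles with a handful of patterns. The feature set below is a small illustrative subset, not the paper's full catalogue.

    import re, sys

    FEATURES = {
        "function_call": re.compile(r"\$\((?:call|foreach|if|shell|wildcard|patsubst)\b"),
        "lazy_assignment": re.compile(r"^[A-Za-z_][\w.]*\s*=(?!=)", re.M),
        "immediate_assignment": re.compile(r"^[A-Za-z_][\w.]*\s*:=", re.M),
        "include": re.compile(r"^-?include\s", re.M),
    }

    def count_features(makefile_text):
        """Count occurrences of each (simplified) GNU Make feature in one Makefile."""
        return {name: len(pat.findall(makefile_text)) for name, pat in FEATURES.items()}

    if __name__ == "__main__":
        totals = {name: 0 for name in FEATURES}
        for path in sys.argv[1:]:  # e.g. python count_make_features.py $(find . -name Makefile)
            text = open(path, encoding="utf-8", errors="ignore").read()
            for name, n in count_features(text).items():
                totals[name] += n
        print(totals)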
  • S. Panichella, V. Arnaoudova, M. D. Penta, and G. Antoniol, “Would static analysis tools help developers with code reviews?,” in International conference on software analysis, evolution, and reengineering (SANER), 2015, pp. 161-170.
    [Abstract]

    Code reviews have been conducted for decades in software projects, with the aim of improving code quality from many different points of view. During code reviews, developers are supported by checklists, coding standards and, possibly, by various kinds of static analysis tools. This paper investigates whether warnings highlighted by static analysis tools are taken care of during code reviews and whether there are kinds of warnings that tend to be removed more than others. Results of a study conducted by mining the Gerrit repository of six Java open source projects indicate that the density of warnings only slightly varies after each review. The overall percentage of warnings removed during reviews is slightly higher than what previous studies found for the overall project evolution history. However, when looking (quantitatively and qualitatively) at specific categories of warnings, we found that during code reviews developers focus on certain kinds of problems. For such categories of warnings the removal percentage tends to be very high, often above 50% and sometimes up to 100%. Examples of those are warnings in the imports, regular expressions, and type resolution categories. In conclusion, while a broad warning detection might produce way too many false positives, enforcing the removal of certain warnings prior to the patch submission could reduce the amount of effort provided during the code review process.

    [Bibtex]

    @inproceedings{Panichella:saner15:CodeReviewsWarnings,
    title = {Would Static Analysis Tools Help Developers with Code Reviews?},
    author = {Sebastiano Panichella and Venera Arnaoudova and Massimiliano Di Penta and Giuliano Antoniol},
    year = {2015},
    date = {2015-01-01},
    booktitle = {International Conference on Software Analysis, Evolution, and Reengineering (SANER)},
    abstract = {
    Code reviews have been conducted since decades in
    software projects, with the aim of improving code quality from
    many different points of view. During code reviews, developers
    are supported by checklists, coding standards and, possibly, by
    various kinds of static analysis tools. This paper investigates
    whether warnings highlighted by static analysis tools are taken
    care of during code reviews and, whether there are kinds of
    warnings that tend to be removed more than others. Results
    of a study conducted by mining the Gerrit repository of six
    Java open source projects indicate that the density of warnings
    only slightly vary after each review. The overall percentage
    of warnings removed during reviews is slightly higher than
    what previous studies found for the overall project evolution
    history. However, when looking (quantitatively and qualitatively)
    at specific categories of warnings, we found that during code
    reviews developers focus on certain kinds of problems. For such
    categories of warnings the removal percentage tend to be very
    high, often above 50\% and sometimes up to 100\%. Examples
    of those are warnings in the imports, regular expressions, and type resolution
    categories. In conclusion, while a broad warning
    detection might produce way too many false positives, enforcing
    the removal of certain warnings prior to the patch submission
    could reduce the amount of effort provided during the code review
    process.
    },
    pages = {161-170},
    }
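    The two measures the study relies on can be stated concretely: warning density (warnings per kLOC) and the per-category percentage of warnings removed between the reviewed and the final revision. The counts below are invented for illustration.

    def density(warnings, loc):
        """Warnings per 1,000 lines of code."""
        return 1000.0 * warnings / loc if loc else 0.0

    def removal_percentage(before, after):
        """before/after: {category: warning count} at patch submission and after review."""
        return {cat: 100.0 * (before[cat] - after.get(cat, 0)) / before[cat]
                for cat in before if before[cat]}

    before = {"imports": 12, "regexp": 4, "naming": 30}
    after = {"imports": 2, "regexp": 0, "naming": 28}
    print(round(density(sum(before.values()), loc=5200), 2))
    print(removal_percentage(before, after))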
  • L. M. Eshkevari, F. Dos Santos, J. R. Cordy, and G. Antoniol, “Are PHP applications ready for Hack,” in International conference on software analysis, evolution, and reengineering (SANER), 2015, pp. 63-72.
    [Abstract]

    PHP is by far the most popular WEB scripting language, accounting for more than 80% of existing websites. PHP is dynamically typed, which means that variables take on the type of the objects that they are assigned, and may change type as execution proceeds. While some type changes are likely not harmful, others involving function calls and global variables may be more difficult to understand and the source of many bugs. Hack, a new PHP variant endorsed by Facebook, attempts to address this problem by adding static typing to PHP variables, which limits them to a single consistent type throughout execution. This paper defines an empirical taxonomy of PHP type changes along three dimensions: the complexity or burden imposed to understand the type change; whether or not the change is potentially harmful; and the actual types changed. We apply static and dynamic analyses to three widely used WEB applications coded in PHP (WordPress, Drupal and phpBB) to investigate (1) to what extent developers really use dynamic typing, (2) what kinds of type changes are actually encountered; and (3) how difficult it might be to refactor the code to avoid type changes, and thus meet the constraints of Hack’s static typing. We report evidence that dynamic typing is actually a relatively uncommon practice in production PHP programs, and that most dynamic type changes are simple representational changes, such as between strings and integers. We observe that most PHP type changes in these programs are relatively simple, and that the largest proportion of them are easy to refactor to consistent static typing using simple local renaming transformations. Overall, the paper casts doubt on the usefulness of dynamic typing in PHP, and indicates that for many production applications, conversion to Hack’s static typing may not be very difficult.

    [Bibtex]

    @inproceedings{laleh2015,
    title = {Are PHP applications ready for Hack},
    author = {Laleh Mousavi Eshkevari and Fabien Dos Santos and James R. Cordy and Giuliano Antoniol},
    year = {2015},
    date = {2015-01-01},
    booktitle = {International Conference on Software Analysis, Evolution, and Reengineering (SANER)},
    abstract = {
    PHP is by far the most popular WEB scripting language, accounting
    for more than 80\% of existing websites.
    PHP is dynamically typed, which means that variables take on the type
    of the objects that they are assigned, and may change type as execution proceeds.
    While some type changes are likely not harmful, others involving function calls and
    global variables may be more difficult to understand and the source of many bugs.
    Hack, a new PHP variant endorsed by Facebook, attempts to address this
    problem by adding static typing to PHP variables, which limits them to
    a single consistent type throughout execution.
    This paper defines an empirical taxonomy of PHP type changes along three dimensions:
    the complexity or burden imposed to understand the type change;
    whether or not the change is potentially harmful;
    and the actual types changed.
    We apply static and dynamic analyses to three widely used WEB applications coded in
    PHP (WordPress, Drupal and phpBB) to investigate (1) to what extent developers really use dynamic typing,
    (2) what kinds of type changes are actually encountered; and
    (3) how difficult it might be to refactor the code to avoid type changes, and thus meet
    the constraints of Hack's static typing.
    We report evidence that dynamic typing is actually a relatively uncommon practice
    in production PHP programs, and that most dynamic type changes are simple
    representational changes, such as between strings and integers.
    We observe that most PHP type changes in these programs are relatively simple,
    and that the largest proportion of them are easy to refactor to consistent static typing
    using simple local renaming transformations.
    Overall, the paper casts doubt on the usefulness of dynamic typing in PHP, and
    indicates that for many production applications, conversion to Hack's static typing
    may not be very difficult.
    },
    pages = {63-72},
    }
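    A toy rendering of the taxonomy's flavour: given a run-time trace of observed variable types, a variable keeping a single type is unproblematic, a string/int (or int/float) flip counts as a simple representational change, and anything else gets flagged for inspection. The trace and category labels are invented and much coarser than the paper's taxonomy.

    from collections import defaultdict

    SIMPLE_PAIRS = {frozenset({"string", "int"}), frozenset({"int", "float"})}

    def classify(trace):
        """trace: iterable of (variable, observed type) pairs in execution order."""
        seen = defaultdict(list)
        for var, typ in trace:
            if not seen[var] or seen[var][-1] != typ:
                seen[var].append(typ)  # record only actual type changes
        report = {}
        for var, types in seen.items():
            if len(set(types)) == 1:
                report[var] = "consistent"
            elif all(frozenset(p) in SIMPLE_PAIRS for p in zip(types, types[1:])):
                report[var] = "simple representational change"
            else:
                report[var] = "potentially harmful"
        return report

    trace = [("$id", "string"), ("$id", "int"),
             ("$user", "array"), ("$user", "object"),
             ("$count", "int")]
    print(classify(trace))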

2014

  • [DOI] S. Panichella, G. Bavota, M. D. Penta, G. Canfora, and G. Antoniol, “How developers’ collaborations identified from different sources tell us about code changes,” in 30th IEEE international conference on software maintenance and evolution, Victoria, BC, Canada, September 29 – October 3, 2014, 2014, pp. 251-260.
    [Abstract]

    Written communications recorded through channels such as mailing lists or issue trackers, but also code co-changes, have been used to identify emerging collaborations in software projects. Also, such data has been used to identify the relation between developers’ roles in communication networks and source code changes, or to identify mentors aiding newcomers to evolve the software project. However, results of such analyses may be different depending on the communication channel being mined. This paper investigates how collaboration links vary and complement each other when they are identified through data from three different kinds of communication channels, i.e., mailing lists, issue trackers, and IRC chat logs. Also, the study investigates how such links overlap with links mined from code changes, and how the use of different sources would influence (i) the identification of project mentors, and (ii) the presence of a correlation between the social role of a developer and her changes. Results of a study conducted on seven open source projects indicate that the overlap of communication links between the various sources is relatively low, and that the application of networks obtained from different sources may lead to different results.

    [Bibtex]

    @inproceedings{conf/icsm/PanichellaBPCA14,
    author = {Sebastiano Panichella and Gabriele Bavota and Massimiliano Di Penta and Gerardo Canfora and Giuliano Antoniol},
    title = {How Developers' Collaborations Identified from Different Sources Tell Us about Code Changes},
    booktitle = {30th {IEEE} International Conference on Software Maintenance and Evolution, Victoria, BC, Canada, September 29 - October 3, 2014},
    pages = {251--260},
    year = {2014},
    url = {http://dx.doi.org/10.1109/ICSME.2014.47},
    abstract = {
    Written communications recorded through chan-
    nels such as mailing lists or issue trackers, but also code co-
    changes, have been used to identify emerging collaborations in
    software projects. Also, such data has been used to identify the
    relation between developers’ roles in communication networks
    and source code changes, or to identify mentors aiding newcomers
    to evolve the software project. However, results of such analyses
    may be different depending on the communication channel being
    mined. This paper investigates how collaboration links vary
    and complement each other when they are identified through
    data from three different kinds of communication channels, i.e.,
    mailing lists, issue trackers, and IRC chat logs. Also, the study
    investigates how such links overlap with links mined from code
    changes, and how the use of different sources would influence
    (i) the identification of project mentors, and (ii) the presence
    of a correlation between the social role of a developer and her
    changes. Results of a study conducted on seven open source
    projects indicate that the overlap of communication links between
    the various sources is relatively low, and that the application of
    networks obtained from different sources may lead to different
    results.
    },
    doi = {10.1109/ICSME.2014.47},
    }
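    The overlap between collaboration links mined from two channels can be expressed as a Jaccard ratio over developer pairs, as in the small sketch below; the link sets are invented, whereas the study mines them from mailing lists, issue trackers, and IRC logs.

    def links(pairs):
        """Normalize collaboration links as unordered developer pairs."""
        return {frozenset(p) for p in pairs}

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    mailing_list = links([("alice", "bob"), ("alice", "carol"), ("bob", "dave")])
    issue_tracker = links([("alice", "bob"), ("carol", "dave")])
    print(round(jaccard(mailing_list, issue_tracker), 2))  # 0.25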
  • L. M. Eshkevari, G. Antoniol, J. R. Cordy, and M. D. Penta, “Identifying and locating interference issues in PHP applications: the case of WordPress,” in ICPC, 2014, pp. 157-167.
    [Abstract]

    The large success of Content management Systems (CMS) such as WordPress is largely due to the rich ecosystem of themes and plugins developed around the CMS that allows users to easily build and customize complex Web applications featuring photo galleries, contact forms, and blog pages. However, the design of the CMS, the plugin-based architecture, and the implicit characteristics of the programming language used to develop them (often PHP), can cause interference or unwanted side effects between the resources declared and used by different plugins. This paper describes the problem of interference between plugins in CMS, specifically those developed using PHP, and outlines an approach combining static and dynamic analysis to detect and locate such interference. Results of a case study conducted over 10 WordPress plugins show that the analysis can help to identify and locate plugin interference, and thus be used to enhance CMS quality assurance.

    [Bibtex]

    @inproceedings{conf/iwpc/EshkevariACP14,
    author = {Laleh Mousavi Eshkevari and Giuliano Antoniol and James R. Cordy and Massimiliano Di Penta},
    title = {Identifying and locating interference issues in PHP applications: the case of WordPress},
    booktitle = {ICPC},
    year = {2014},
    pages = {157-167},
    ee = {http://doi.acm.org/10.1145/2597008.2597153},
    crossref = {DBLP:conf/iwpc/2014},
    abstract = {
    he large success of Content management Systems (CMS) such as WordPress is largely due to the rich ecosystem of themes and plugins developed around the CMS that allows users to easily build and customize complex Web applications featuring photo galleries, contact forms, and blog pages. However, the design of the CMS, the plugin-based architecture, and the implicit characteristics of the programming language used to develop them (often PHP), can cause interference or unwanted side effects between the resources declared and used by different plugins. This paper describes the problem of interference between plugins in CMS, specifically those developed using PHP, and outlines an approach combining static and dynamic analysis to detect and locate such interference. Results of a case study conducted over 10 WordPress plugins shows that the analysis can help to identify and locate plugin interference, and thus be used to enhance CMS quality assurance
    },
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
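    One concrete flavour of the interference problem is two plugins declaring the same global resource name. The sketch below reports such collisions from per-plugin resource inventories; the inventories are made up, and the check is far simpler than the combined static and dynamic analysis outlined in the paper.

    from collections import defaultdict

    def find_collisions(plugin_resources):
        """plugin_resources: {plugin_name: set of declared resource names}."""
        owners = defaultdict(set)
        for plugin, resources in plugin_resources.items():
            for r in resources:
                owners[r].add(plugin)
        # A resource declared by more than one plugin is a potential interference point.
        return {r: sorted(p) for r, p in owners.items() if len(p) > 1}

    plugins = {
        "gallery-plugin": {"gallery_options", "init", "enqueue_assets"},
        "contact-form": {"cf_options", "init"},
    }
    print(find_collisions(plugins))  # {'init': ['contact-form', 'gallery-plugin']}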
  • G. Bavota, R. Oliveto, A. De Lucia, A. Marcus, Y. Guéhéneuc, and G. Antoniol, “In medio stat virtus: extract class refactoring through Nash equilibria,” in CSMR-WCRE, 2014, pp. 214-223.
    [Bibtex]
    @inproceedings{conf/csmr/BavotaOLMGA14,
    author = {Gabriele Bavota and Rocco Oliveto and Andrea De Lucia and Andrian Marcus and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {In medio stat virtus: Extract class refactoring through nash equilibria},
    booktitle = {CSMR-WCRE},
    year = {2014},
    pages = {214-223},
    ee = {http://dx.doi.org/10.1109/CSMR-WCRE.2014.6747173},
    crossref = {DBLP:conf/csmr/2014},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

2013

  • [PDF] V. Arnaoudova, M. D. Penta, G. Antoniol, and Y. Guéhéneuc, “A new family of software anti-patterns: linguistic anti-patterns,” in CSMR, 2013, pp. 187-196.
    [Abstract]

    Recent and past studies have shown that poor source code lexicon negatively affects software understandability, maintainability, and, overall, quality. Besides a poor usage of lexicon and documentation, sometimes a software artifact description is misleading with respect to its implementation. Consequently, developers will spend more time and effort when understanding these software artifacts, or even make wrong assumptions when they use them. This paper introduces the definition of software linguistic antipatterns, and defines a family of them, i.e., those related to inconsistencies (i) between method signatures, documentation, and behavior and (ii) between attribute names, types, and comments. Whereas "design" antipatterns represent recurring, poor design choices, linguistic antipatterns represent recurring, poor naming and commenting choices. The paper provides a first catalogue of one family of linguistic antipatterns, showing real examples of such antipatterns and explaining what kind of misunderstanding they can cause. Also, the paper proposes a detector prototype for Java programs called LAPD (Linguistic Anti-Pattern Detector), and reports a study investigating the presence of linguistic antipatterns in four Java software projects.

    [Bibtex]

    @inproceedings{06498467,
    author = {Venera Arnaoudova and Massimiliano Di Penta and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {A New Family of Software Anti-patterns: Linguistic Anti-patterns},
    booktitle = {CSMR},
    year = {2013},
    pages = {187-196},
    ee = {http://dx.doi.org/10.1109/CSMR.2013.28, http://doi.ieeecomputersociety.org/10.1109/CSMR.2013.28},
    crossref = {DBLP:conf/csmr/2013},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Recent and past studies have shown that poor source code lexicon negatively affects software understand ability, maintainability, and, overall, quality. Besides a poor usage of lexicon and documentation, sometimes a software artifact description is misleading with respect to its implementation. Consequently, developers will spend more time and effort when understanding these software artifacts, or even make wrong assumptions when they use them. This paper introduces the definition of software linguistic antipatterns, and defines a family of them, i.e., those related to inconsistencies (i) between method signatures, documentation, and behavior and (ii) between attribute names, types, and comments. Whereas "design" antipatterns represent recurring, poor design choices, linguistic antipatterns represent recurring, poor naming and commenting choices. The paper provides a first catalogue of one family of linguistic antipatterns, showing real examples of such antipatterns and explaining what kind of misunderstanding they can cause. Also, the paper proposes a detector prototype for Java programs called LAPD (Linguistic Anti-Pattern Detector), and reports a study investigating the presence of linguistic antipatterns in four Java software projects.
    },
    pdf = {2013/06498467.pdf},
    }
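    A toy version of one check in this family: a method whose name promises a boolean answer ("is...", "has...") but whose declared return type is not boolean. It is regex-based and far cruder than LAPD; the Java snippet is made up.

    import re

    METHOD_DECL = re.compile(
        r"(?:public|protected|private)\s+(?:static\s+)?"
        r"(?P<ret>[\w<>\[\]]+)\s+(?P<name>\w+)\s*\(")

    def boolean_name_mismatches(java_source):
        """Report methods whose name suggests a boolean but whose return type is not."""
        issues = []
        for m in METHOD_DECL.finditer(java_source):
            name, ret = m.group("name"), m.group("ret")
            if re.match(r"(is|has|can|should)[A-Z]", name) and ret not in ("boolean", "Boolean"):
                issues.append((name, ret))
        return issues

    sample = """
    class Account {
        public int isValid() { return 1; }         // name promises a boolean
        public boolean hasOwner() { return true; } // consistent
    }
    """
    print(boolean_name_mismatches(sample))  # [('isValid', 'int')]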
  • [PDF] A. Sabane, M. D. Penta, G. Antoniol, and Y. Guéhéneuc, “A study on the relation between antipatterns and the cost of class unit testing,” in CSMR, 2013, pp. 167-176.
    [Abstract]

    Antipatterns are known as recurring, poor design choices; recent and past studies indicated that they negatively affect software systems in terms of understandability and maintainability, also increasing change- and defect-proneness. For this reason, refactoring actions are often suggested. In this paper, we investigate a different side-effect of antipatterns, which is their effect on testability and on testing cost in particular. We consider as (upper bound) indicator of testing cost the number of test cases that satisfy the minimal data member usage matrix (MaDUM) criterion proposed by Bashir and Goel. A study, carried out on four Java programs (Ant 1.8.3, ArgoUML 0.20, CheckStyle 4.0, and JFreeChart 1.0.13), supports the evidence that, on the one hand, antipattern unit testing requires, on average, a number of test cases substantially higher than unit testing for non-antipattern classes. On the other hand, antipattern classes must be carefully tested because they are more defect-prone than other classes. Finally, we illustrate how specific refactoring actions, applied to classes participating in antipatterns, could reduce testing cost.

    [Bibtex]

    @inproceedings{06498465,
    author = {Aminata Sabane and Massimiliano Di Penta and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {A Study on the Relation between Antipatterns and the Cost of Class Unit Testing},
    booktitle = {CSMR},
    year = {2013},
    pages = {167-176},
    ee = {http://dx.doi.org/10.1109/CSMR.2013.26, http://doi.ieeecomputersociety.org/10.1109/CSMR.2013.26},
    crossref = {DBLP:conf/csmr/2013},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Antipatterns are known as recurring, poor design choices, recent and past studies indicated that they negatively affect software systems in terms of understand ability and maintainability, also increasing change-and defect-proneness. For this reason, refactoring actions are often suggested. In this paper, we investigate a different side-effect of antipatterns, which is their effect on testability and on testing cost in particular. We consider as (upper bound) indicator of testing cost the number of test cases that satisfy the minimal data member usage matrix (MaDUM) criterion proposed by Bashir and Goel. A study-carried out on four Java programs, Ant 1.8.3, ArgoUML 0.20, Check Style 4.0, and JFreeChart 1.0.13-supports the evidence that, on the one hand, antipatterns unit testing requires, on average, a number of test cases substantially higher than unit testing for non-antipattern classes. On the other hand, antipattern classes must be carefully tested because they are more defect-prone than other classes. Finally, we illustrate how specific refactoring actions-applied to classes participating in antipatterns-could reduce testing cost.
    },
    pdf = {2013/06498465.pdf},
    }
  • M. Leotta, F. Ricca, G. Antoniol, V. Garousi, J. Zhi, and G. Ruhe, “A pilot experiment to quantify the effect of documentation accuracy on maintenance tasks,” in ICSM, 2013, pp. 428-431.
    [Abstract]

    This paper reports the results and some challenges we discovered during the design and execution of a pilot experiment with 21 bachelor students aimed at investigating the effect of documentation accuracy during software maintenance and evolution activities. As documentation we considered: a high level system functionality description and UML documents. Preliminary results indicate a benefit of +15% in terms of efficiency (computed as number of correct tasks per minute) when a more accurate documentation is used. The discovered challenging aspects to carefully consider in future executions of the experiment are as follows: selecting "the right" documentation artefacts, maintenance tasks and documentation versions, verifying that the subjects really used the documentation during the experiment and measuring documentation-code alignment.

    [Bibtex]

    @inproceedings{conf/icsm/LeottaRAGZR13,
    author = {Maurizio Leotta and Filippo Ricca and Giuliano Antoniol and Vahid Garousi and Junji Zhi and G{\"u}nther Ruhe},
    title = {A Pilot Experiment to Quantify the Effect of Documentation Accuracy on Maintenance Tasks},
    booktitle = {ICSM},
    year = {2013},
    pages = {428-431},
    ee = {http://dx.doi.org/10.1109/ICSM.2013.64},
    crossref = {DBLP:conf/icsm/2013},
    abstract = {
    This paper reports the results and some challenges we discovered during the design and execution of a pilot experiment with 21 bachelor students aimed at investigating the effect of documentation accuracy during software maintenance and evolution activities. As documentation we considered: a high level system functionality description and UML documents. Preliminary results indicate a benefit of +15\% in terms of efficiency (computed as number of correct tasks per minute) when a more accurate documentation is used. The discovered challenging aspects to carefully consider in future executions of the experiment are as follows: selecting "the right" documentation artefacts, maintenance tasks and documentation versions, verifying that the subjects really used the documentation during the experiment and measuring documentation-code alignment.
    },
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • Z. Sharafi, A. Marchetto, A. Susi, G. Antoniol, and Y. Guéhéneuc, “An empirical study on the efficiency of graphical vs. textual representations in requirements comprehension,” in ICPC, 2013, pp. 33-42.
    [Bibtex]
    @inproceedings{conf/iwpc/SharafiMSAG13,
    author = {Zohreh Sharafi and Alessandro Marchetto and Angelo Susi and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {An empirical study on the efficiency of graphical vs. textual representations in requirements comprehension},
    booktitle = {ICPC},
    year = {2013},
    pages = {33-42},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICPC.2013.6613831},
    crossref = {DBLP:conf/iwpc/2013},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • Z. Soh, F. Khomh, Y. Guéhéneuc, G. Antoniol, and B. Adams, “On the effect of program exploration on maintenance tasks,” in WCRE, 2013, pp. 391-400.
    [Bibtex]
    @inproceedings{conf/wcre/SohKGAA13,
    author = {Z{\'e}phyrin Soh and Foutse Khomh and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol and Bram Adams},
    title = {On the effect of program exploration on maintenance tasks},
    booktitle = {WCRE},
    year = {2013},
    pages = {391-400},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WCRE.2013.6671314},
    crossref = {DBLP:conf/wcre/2013},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • Z. Soh, F. Khomh, Y. Guéhéneuc, and G. Antoniol, “Towards understanding how developers spend their effort during maintenance activities,” in WCRE, 2013, pp. 152-161.
    [Bibtex]
    @inproceedings{conf/wcre/SohKGA13,
    author = {Z{\'e}phyrin Soh and Foutse Khomh and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Towards understanding how developers spend their effort during maintenance activities},
    booktitle = {WCRE},
    year = {2013},
    pages = {152-161},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WCRE.2013.6671290},
    crossref = {DBLP:conf/wcre/2013},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

2012

  • [PDF] N. Ali, Z. Sharafi, Y. Guéhéneuc, and G. Antoniol, “An empirical study on requirements traceability using eye-tracking,” in ICSM, 2012, pp. 191-200.
    [Abstract]

    Requirements traceability (RT) links help developers to understand programs and ensure that their source code is consistent with its documentation. Creating RT links is a laborious and resource-consuming task. Information Retrieval (IR) techniques are useful to automatically recover traceability links. However, IR-based approaches typically have low accuracy (precision and recall) and, thus, creating RT links remains a human intensive process. We conjecture that understanding how developers verify RT links could help improve the accuracy of IR-based approaches to recover RT links. Consequently, we perform an empirical study consisting of two controlled experiments. First, we use an eye-tracking system to capture developers’ eye movements while they verify RT links. We analyse the obtained data to identify and rank developers’ preferred source code entities (SCEs), e.g., class names, method names. Second, we use the ranked SCEs to propose two new weighting schemes called SE/IDF (source code entity/inverse document frequency) and DOI/IDF (domain or implementation/inverse document frequency) to recover RT links combined with an IR technique. SE/IDF is based on the developers preferred SCEs to verify RT links. DOI/IDF is an extension of SE/IDF distinguishing domain and implementation concepts. We use LSI combined with SE/IDF, DOI/IDF, and TF/IDF to show, using two systems, iTrust and Pooka, that LSI with DOI/IDF statistically improves the accuracy of the recovered RT links over LSI with TF/IDF.

    [Bibtex]

    @inproceedings{06405271,
    author = {Nasir Ali and Zohreh Sharafi and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {An empirical study on requirements traceability using eye-tracking},
    booktitle = {ICSM},
    year = {2012},
    pages = {191-200},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICSM.2012.6405271},
    crossref = {DBLP:conf/icsm/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Requirements traceability (RT) links help developers to understand programs and ensure that their source code is consistent with its documentation. Creating RT links is a laborious and resource-consuming task. Information Retrieval (IR) techniques are useful to automatically recover traceability links. However, IR-based approaches typically have low accuracy (precision and recall) and, thus, creating RT links remains a human intensive process. We conjecture that understanding how developers verify RT links could help improve the accuracy of IR-based approaches to recover RT links. Consequently, we perform an empirical study consisting of two controlled experiments. First, we use an eye-tracking system to capture developers' eye movements while they verify RT links. We analyse the obtained data to identify and rank developers' preferred source code entities (SCEs), e.g., class names, method names. Second, we use the ranked SCEs to propose two new weighting schemes called SE/IDF (source code entity/inverse document frequency) and DOI/IDF (domain or implementation/inverse document frequency) to recover RT links combined with an IR technique. SE/IDF is based on the developers preferred SCEs to verify RT links. DOI/IDF is an extension of SE/IDF distinguishing domain and implementation concepts. We use LSI combined with SE/IDF, DOI/IDF, and TF/IDF to show, using two systems, iTrust and Pooka, that LSIDOI/IDF statistically improves the accuracy of the recovered RT links over LSITF/IDF.
    },
    pdf = {2012/06405271.pdf},
    }
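    The weighting idea can be sketched as an entity-aware tf-idf: terms coming from the source code entities developers inspect most (class names, method names) receive a larger weight than ordinary terms. The entity weights below are invented placeholders; the paper derives its weighting from the eye-tracking results.

    import math
    from collections import Counter

    ENTITY_WEIGHT = {"class_name": 3.0, "method_name": 2.0, "comment": 1.0}  # hypothetical values

    def weighted_tfidf(documents):
        """documents: list of documents, each a list of (term, entity_kind) pairs."""
        df = Counter()
        for doc in documents:
            df.update({t for t, _ in doc})  # document frequency per term
        n = len(documents)
        vectors = []
        for doc in documents:
            tf = Counter()
            for term, kind in doc:
                tf[term] += ENTITY_WEIGHT.get(kind, 1.0)  # entity-weighted term frequency
            vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
        return vectors

    docs = [
        [("patient", "class_name"), ("record", "method_name"), ("save", "comment")],
        [("visit", "class_name"), ("record", "comment")],
    ]
    print(weighted_tfidf(docs))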
  • [PDF] S. Medini, G. Antoniol, Y. Guéhéneuc, M. D. Penta, and P. Tonella, “SCAN: an approach to label and relate execution trace segments,” in WCRE, 2012, pp. 135-144.
    [Abstract]

    Identifying concepts in execution traces is a task often necessary to support program comprehension or maintenance activities. Several approaches—static, dynamic or hybrid—have been proposed to identify cohesive, meaningful sequences of methods in execution traces. However, none of the proposed approaches is able to label such segments and to identify relations identified in other segments of the same trace. This paper presents SCAN (Segment Concept AssigNer), an approach to assign labels to sequences of methods in execution traces, and to identify relations between such segments. SCAN uses information retrieval methods and formal concept analysis to produce sets of words helping the developer to understand the concept implemented by a segment. Specifically, formal concept analysis allows SCAN to discover commonalities between segments in different trace areas, as well as terms more specific to a given segment and higher level relations between segments. The paper describes SCAN along with a preliminary manual validation—upon execution traces collected from usage scenarios of JHotDraw and ArgoUML—of SCAN accuracy in assigning labels representative of concepts implemented by trace segments.

    [Bibtex]

    @inproceedings{06385109,
    author = {Soumaya Medini and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Massimiliano Di Penta and Paolo Tonella},
    title = {SCAN: An Approach to Label and Relate Execution Trace Segments},
    booktitle = {WCRE},
    year = {2012},
    pages = {135-144},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WCRE.2012.23},
    crossref = {DBLP:conf/wcre/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2012/06385109.pdf},
    abstract = {Identifying concepts in execution traces is a task often necessary to support program comprehension or maintenance activities. Several approaches---static, dynamic or hybrid---have been proposed to identify cohesive, meaningful sequence of methods in execution traces. However, none of the proposed approaches is able to label such segments and to identify relations identified in other segments of the same trace This paper present SCAN (Segment Concept AssigNer) an approach to assign labels to sequences of methods in execution traces, and to identify relations between such segments. SCAN uses information retrieval methods and formal concept analysis to produce sets of words helping the developer to understand the concept implemented by a segment. Specifically, formal concept analysis allows SCAN to discover commonalities between segments in different trace areas, as well as terms more specific to a given segment and higher level relation between segments. The paper describes SCAN along with a preliminary manual validation---upon execution traces collected from usage scenarios of JHotDraw and ArgoUML---of SCAN accuracy in assigning labels representative of concepts implemented by trace segments.},
    }
  • [PDF] S. L. Abebe, V. Arnaoudova, P. Tonella, G. Antoniol, and Y. Guéhéneuc, “Can lexicon bad smells improve fault prediction?,” in WCRE, 2012, pp. 235-244.
    [Abstract]

    In software development, early identification of fault-prone classes can save a considerable amount of resources. In the literature, source code structural metrics have been widely investigated as one of the factors that can be used to identify faulty classes. Structural metrics measure code complexity, one aspect of the source code quality. Complexity might affect program understanding and hence increase the likelihood of inserting errors in a class. Besides the structural metrics, we believe that the quality of the identifiers used in the code may also affect program understanding and thus increase the likelihood of error insertion. In this study, we measure the quality of identifiers using the number of Lexicon Bad Smells (LBS) they contain. We investigate whether using LBS in addition to structural metrics improves fault prediction. To conduct the investigation, we assess the prediction capability of a model while using i) only structural metrics, and ii) structural metrics and LBS. The results on three open source systems, ArgoUML, Rhino, and Eclipse, indicate that there is an improvement in the majority of the cases.

    [Bibtex]

    @inproceedings{06385119,
    author = {Surafel Lemma Abebe and Venera Arnaoudova and Paolo Tonella and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {Can Lexicon Bad Smells Improve Fault Prediction?},
    booktitle = {WCRE},
    year = {2012},
    pages = {235-244},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WCRE.2012.33},
    crossref = {DBLP:conf/wcre/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2012/06385119.pdf},
    abstract = {In software development, early identification of fault-prone classes can save a considerable amount of resources. In the literature, source code structural metrics have been widely investigated as one of the factors that can be used to identify faulty classes. Structural metrics measure code complexity, one aspect of the source code quality. Complexity might affect program understanding and hence increase the likelihood of inserting errors in a class. Besides the structural metrics, we believe that the quality of the identifiers used in the code may also affect program understanding and thus increase the likelihood of error insertion. In this study, we measure the quality of identifiers using the number of Lexicon Bad Smells (LBS) they contain. We investigate whether using LBS in addition to structural metrics improves fault prediction. To conduct the investigation, we asses s the prediction capability of a model while using i) only structural metrics, and ii) structural metrics and LBS. The results on three open source systems, ArgoUML, Rhino, and Eclipse, indicate that there is an improvement in the majority of the cases.},
    }
  • M. D. Penta, G. Antoniol, D. M. Germán, Y. Guéhéneuc, and B. Adams, “Five days of empirical software engineering: the PASED experience,” in ICSE, 2012, pp. 1255-1258.
    [Abstract]

    Acquiring the skills to plan and conduct different kinds of empirical studies is a mandatory requirement for graduate students working in the field of software engineering. These skills typically can only be developed based on the teaching and experience of the students’ supervisor, because of the lack of specific, practical courses providing these skills. To fill this gap, we organized the first Canadian Summer School on Practical Analyses of Software Engineering Data (PASED). The aim of PASED is to provide—using a “learning by doing” model of teaching—a solid foundation to software engineering graduate students on conducting empirical studies. This paper describes our experience in organizing the PASED school, i.e., what challenges we encountered, how we designed the lectures and laboratories, and what could be improved in the future based on the participants’ feedback.

    [Bibtex]

    @inproceedings{conf/icse/PentaAGGA12,
    author = {Massimiliano Di Penta and Giuliano Antoniol and Daniel M. Germ{\'a}n and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Bram Adams},
    title = {Five days of empirical software engineering: The PASED experience},
    booktitle = {ICSE},
    year = {2012},
    pages = {1255-1258},
    ee = {http://dx.doi.org/10.1109/ICSE.2012.6227017},
    crossref = {DBLP:conf/icse/2012},
    abstract = {
    Acquiring the skills to plan and conduct different
    kinds of empirical studies is a mandatory requirement for
    graduate students working in the field of software engineering.
    These skills typically can only be developed based on the
    teaching and experience of the students’ supervisor, because
    of the lack of specific, practical courses providing these skills.
    To fill this gap, we organized the first Canadian Summer
    School on Practical Analyses of Software Engineering Data
    (PASED). The aim of PASED is to provide—using a “learning
    by doing” model of teaching—a solid foundation to software
    engineering graduate students on conducting empirical studies.
    This paper describes our experience in organizing the PASED
    school, i.e., what challenges we encountered, how we designed
    the lectures and laboratories, and what could be improved in
    the future based on the participants’ feedback.
    },
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • L. Guerrouj, P. Galinier, Y. Guéhéneuc, G. Antoniol, and M. D. Penta, “TRIS: a fast and accurate identifiers splitting and expansion algorithm,” in WCRE, 2012, pp. 103-112.
    [Abstract]

    Understanding source code identifiers, by identifying words composing them, is a necessary step for many program comprehension, reverse engineering, or redocumentation tasks. To this aim, researchers have proposed several identifier splitting and expansion approaches such as Samurai, TIDIER and more recently GenTest. The ultimate goal of such approaches is to help disambiguating conceptual information encoded in compound (or abbreviated) identifiers. This paper presents TRIS, TRee-based Identifier Splitter, a two-phase approach to split and expand program identifiers. TRIS takes as input a dictionary of words, the identifiers to split/expand, and the identifiers source code application. First, TRIS pre-compiles transformed dictionary words into a tree representation, associating a cost to each transformation. In a second phase, it maps the identifier splitting/expansion problem into a minimization problem, i.e., the search of the shortest path (optimal split/expansion) in a weighted graph. We apply TRIS to a sample of 974 identifiers extracted from JHotDraw, 3,085 from Lynx, and to a sample of 489 identifiers extracted from 340 C programs. Also, we compare TRIS with GenTest on a set of 2,663 mixed Java, C and C++ identifiers. We report evidence that TRIS split (and expansion) is more accurate than state-of-the-art approaches and that it is also efficient in terms of computation time.

    [Bibtex]

    @inproceedings{conf/wcre/GuerroujGGAP12,
    author = {Latifa Guerrouj and Philippe Galinier and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol and Massimiliano Di Penta},
    title = {TRIS: A Fast and Accurate Identifiers Splitting and Expansion Algorithm},
    booktitle = {WCRE},
    year = {2012},
    pages = {103-112},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WCRE.2012.20},
    crossref = {DBLP:conf/wcre/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Understanding source code identifiers, by identifying words composing them, is a necessary step for many program comprehension, reverse engineering, or redocumentation tasks. To this aim, researchers have proposed several identifier splitting and expansion approaches such as Samurai, TIDIER and more recently GenTest. The ultimate goal of such approaches is to help disambiguating conceptual information encoded in compound (or abbreviated) identifiers. This paper presents TRIS, TRee-based Identifier Splitter, a two-phases approach to split and expand program identifiers. TRIS takes as input a dictionary of words, the identifiers to split/expand, and the identifiers source code application. First, TRIS pre-compiles transformed dictionary words into a tree representation, associating a cost to each transformation. In a second phase, it maps the identifier splitting/expansion problem into a minimization problem, \ie{} the search of the shortest path (optimal split/expansion) in a weighted graph. We apply TRIS to a sample of 974 identifiers extracted from JHotDraw, 3,085 from Lynx, and to a sample of 489 identifiers extracted from 340 C programs. Also, we compare TRIS with GenTest on a set of 2,663 mixed Java, C and C++ identifiers. We report evidence that TRIS split (and expansion) is more accurate than state-of-the-art approaches and that it is also efficient in terms of computation time.},
    }
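    A minimal sketch of the shortest-path formulation mentioned in the abstract above, assuming a toy dictionary and an invented cost model; this is not the TRIS implementation, which also pre-compiles transformed dictionary words into a tree:

    def split_identifier(identifier, dictionary):
        """Return a lowest-cost split of `identifier` into dictionary words.

        Positions 0..len(identifier) are graph nodes; every substring
        identifier[i:j] is an edge (i, j); known words get a low cost and
        unknown fragments a high one. A shortest path from node 0 to node
        len(identifier) is an optimal split.
        """
        ident = identifier.lower()
        n = len(ident)
        INF = float("inf")
        best = [INF] * (n + 1)   # best[i]: cheapest cost to split ident[:i]
        back = [None] * (n + 1)  # back-pointer used to rebuild the split
        best[0] = 0.0
        for i in range(n):
            if best[i] == INF:
                continue
            for j in range(i + 1, n + 1):
                word = ident[i:j]
                # Illustrative cost: dictionary words are cheap, leftover
                # fragments are penalised proportionally to their length.
                cost = 1.0 if word in dictionary else 5.0 * len(word)
                if best[i] + cost < best[j]:
                    best[j] = best[i] + cost
                    back[j] = i
        parts, j = [], n
        while j > 0:
            i = back[j]
            parts.append(ident[i:j])
            j = i
        return list(reversed(parts))

    if __name__ == "__main__":
        words = {"get", "file", "name", "ptr", "cnt"}
        print(split_identifier("getFileName", words))  # ['get', 'file', 'name']
        print(split_identifier("fileptrcnt", words))   # ['file', 'ptr', 'cnt']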
  • N. Bhattacharya, O. El-Mahi, E. Duclos, G. Beltrame, G. Antoniol, S. L. Digabel, and Y. Guéhéneuc, “Optimizing threads schedule alignments to expose the interference bug pattern,” in Ssbse, 2012, pp. 90-104.
    [Bibtex]
    @inproceedings{conf/ssbse/BhattacharyaEDBADG12,
    author = {Neelesh Bhattacharya and Olfat El-Mahi and Etienne Duclos and Giovanni Beltrame and Giuliano Antoniol and S{\'e}bastien Le Digabel and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {Optimizing Threads Schedule Alignments to Expose the Interference Bug Pattern},
    booktitle = {SSBSE},
    year = {2012},
    pages = {90-104},
    ee = {http://dx.doi.org/10.1007/978-3-642-33119-0_8},
    crossref = {DBLP:conf/ssbse/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • Z. Sharafi, Z. Soh, Y. Guéhéneuc, and G. Antoniol, “Women and men – different but equal: on the impact of identifier style on source code reading,” in Icpc, 2012, pp. 27-36.
    [Bibtex]
    @inproceedings{conf/iwpc/SharafiSGA12,
    author = {Zohreh Sharafi and Z{\'e}phyrin Soh and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Women and men - Different but equal: On the impact of identifier style on source code reading},
    booktitle = {ICPC},
    year = {2012},
    pages = {27-36},
    ee = {http://dx.doi.org/10.1109/ICPC.2012.6240505},
    crossref = {DBLP:conf/iwpc/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • S. Hassaine, Y. Guéhéneuc, S. Hamel, and G. Antoniol, “Advise: architectural decay in software evolution,” in Csmr, 2012, pp. 267-276.
    [Bibtex]
    @inproceedings{conf/csmr/HassaineGHA12,
    author = {Salima Hassaine and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Sylvie Hamel and Giuliano Antoniol},
    title = {ADvISE: Architectural Decay in Software Evolution},
    booktitle = {CSMR},
    year = {2012},
    pages = {267-276},
    ee = {http://dx.doi.org/10.1109/CSMR.2012.34, http://doi.ieeecomputersociety.org/10.1109/CSMR.2012.34},
    crossref = {DBLP:conf/csmr/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • A. Maiga, N. Ali, N. Bhattacharya, A. Sabane, Y. Guéhéneuc, G. Antoniol, and E. Aïmeur, “Support vector machines for anti-pattern detection,” in Ase, 2012, pp. 278-281.
    [Bibtex]
    @inproceedings{conf/kbse/MaigaABSGAA12,
    author = {Abdou Maiga and Nasir Ali and Neelesh Bhattacharya and Aminata Sabane and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol and Esma A\"{\i}meur},
    title = {Support vector machines for anti-pattern detection},
    booktitle = {ASE},
    year = {2012},
    pages = {278-281},
    ee = {http://doi.acm.org/10.1145/2351676.2351723},
    crossref = {DBLP:conf/kbse/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • O. Gotel, J. Cleland-Huang, J. H. Hayes, A. Zisman, A. Egyed, P. Grünbacher, and G. Antoniol, “The quest for ubiquity: a roadmap for software and systems traceability research,” in Re, 2012, pp. 71-80.
    [Bibtex]
    @inproceedings{conf/re/GotelCHZEGA12,
    author = {Orlena Gotel and Jane Cleland-Huang and Jane Huffman Hayes and Andrea Zisman and Alexander Egyed and Paul Gr{\"u}nbacher and Giuliano Antoniol},
    title = {The quest for Ubiquity: A roadmap for software and systems traceability research},
    booktitle = {RE},
    year = {2012},
    pages = {71-80},
    ee = {http://dx.doi.org/10.1109/RE.2012.6345841},
    crossref = {DBLP:conf/re/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • Z. Soh, Z. Sharafi, B. V. den Plas, G. C. Porras, Y. Guéhéneuc, and G. Antoniol, “Professional status and expertise for uml class diagram comprehension: an empirical study,” in Icpc, 2012, pp. 163-172.
    [Bibtex]
    @inproceedings{conf/iwpc/SohSPPGA12,
    author = {Z{\'e}phyrin Soh and Zohreh Sharafi and Bertrand Van den Plas and Gerardo Cepeda Porras and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Professional status and expertise for UML class diagram comprehension: An empirical study},
    booktitle = {ICPC},
    year = {2012},
    pages = {163-172},
    ee = {http://dx.doi.org/10.1109/ICPC.2012.6240484},
    crossref = {DBLP:conf/iwpc/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • N. Ali, A. Sabane, Y. Guéhéneuc, and G. Antoniol, “Improving bug location using binary class relationships,” in Scam, 2012, pp. 174-183.
    [Bibtex]
    @inproceedings{conf/scam/AliSGA12,
    author = {Nasir Ali and Aminata Sabane and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Improving Bug Location Using Binary Class Relationships},
    booktitle = {SCAM},
    year = {2012},
    pages = {174-183},
    ee = {http://doi.ieeecomputersociety.org/10.1109/SCAM.2012.26},
    crossref = {DBLP:conf/scam/2012},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

2011

  • [PDF] S. Medini, P. Galinier, M. D. Penta, Y. Guéhéneuc, and G. Antoniol, “A fast algorithm to locate concepts in execution traces,” in Ssbse, 2011, pp. 252-266.
    [Abstract]

    The identification of cohesive segments in execution traces is an important step in concept location which, in turn, is of paramount importance for many program-comprehension activities. In this paper, we reformulate concept location as a trace segmentation problem solved via dynamic programming. Differently from approaches based on genetic algorithms, dynamic programming can compute an exact solution with better performance than previous approaches, even on long traces. We describe the new problem formulation and the algorithmic details of our approach. We then compare the performance of dynamic programming with that of a genetic algorithm, showing that dynamic programming dramatically reduces the time required to segment traces without sacrificing precision and recall, even slightly improving them.

    [Bibtex]

    @inproceedings{chp3A1010072F978364223716422,
    author = {Soumaya Medini and Philippe Galinier and Massimiliano Di Penta and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {A Fast Algorithm to Locate Concepts in Execution Traces},
    booktitle = {SSBSE},
    year = {2011},
    pages = {252-266},
    ee = {http://dx.doi.org/10.1007/978-3-642-23716-4_22},
    crossref = {DBLP:conf/ssbse/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    The identification of cohesive segments in execution traces is an important step in concept location which, in turns, is of paramount importance for many program-comprehension activities. In this paper, we reformulate concept location as a trace segmentation problem solved via dynamic programming. Differently to approaches based on genetic algorithms, dynamic programming can compute an exact solution with better performance than previous approaches, even on long traces. We describe the new problem formulation and the algorithmic details of our approach. We then compare the performances of dynamic programming with those of a genetic algorithm, showing that dynamic programming reduces dramatically the time required to segment traces, without sacrificing precision and recall; even slightly improving them.
    },
    pdf = {2011/chp3A1010072F978364223716422.pdf},
    }
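    As a rough illustration of the dynamic-programming formulation sketched in the abstract above; the per-segment scoring function below is a toy stand-in, whereas the paper scores segments via textual cohesion and coupling of the executed methods:

    def segment_trace(events, score, max_segments):
        """Split `events` into at most `max_segments` contiguous segments
        maximising the sum of per-segment scores (O(k * n^2) DP)."""
        n = len(events)
        NEG = float("-inf")
        # dp[k][i]: best score for the first i events split into exactly k segments
        dp = [[NEG] * (n + 1) for _ in range(max_segments + 1)]
        cut = [[0] * (n + 1) for _ in range(max_segments + 1)]
        dp[0][0] = 0.0
        for k in range(1, max_segments + 1):
            for i in range(1, n + 1):
                for j in range(k - 1, i):
                    if dp[k - 1][j] == NEG:
                        continue
                    candidate = dp[k - 1][j] + score(events[j:i])
                    if candidate > dp[k][i]:
                        dp[k][i] = candidate
                        cut[k][i] = j
        best_k = max(range(1, max_segments + 1), key=lambda k: dp[k][n])
        bounds, i, k = [], n, best_k
        while k > 0:
            j = cut[k][i]
            bounds.append((j, i))
            i, k = j, k - 1
        return list(reversed(bounds))

    if __name__ == "__main__":
        trace = ["a1", "a2", "a3", "b1", "b2", "c1"]  # three toy "concepts"
        # Toy score: reward cohesive segments (all events share a leading letter).
        cohesion = lambda s: float(len(s) ** 2) if len({e[0] for e in s}) == 1 else -1.0
        print(segment_trace(trace, cohesion, max_segments=4))  # [(0, 3), (3, 5), (5, 6)]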
  • [PDF] N. Ali, Y. Guéhéneuc, and G. Antoniol, “Trust-based requirements traceability,” in Icpc, 2011, pp. 111-120.
    [Abstract]

    Information retrieval (IR) approaches have proven useful in recovering traceability links between free-text documentation and source code. IR-based traceability recovery approaches produce ranked lists of traceability links between pieces of documentation and of source code. These traceability links are then pruned using various strategies and, finally, validated by human experts. In this paper we propose two contributions to improve the precision and recall of traceability links and, thus, reduce the required human experts’ manual validation effort. First, we propose a novel approach, Trustrace, inspired by Web trust models to improve precision and recall of traceability links: Trustrace first uses any traceability recovery approach as the basis on which, second, it applies various experts’ opinions to add, remove, and–or adjust the rankings of the traceability links. The experts can be human experts or other traceability recovery approaches. Second, we propose a novel traceability recovery approach, Histrace, to identify traceability links between requirements and source code through CVS/SVN change logs using a Vector Space Model (VSM). We combine a traditional traceability recovery approach with Histrace to build Trustrace_{VSM,Histrace}, in which we use Histrace as one expert commenting the traceability links recovered using the VSM-based approach. We apply Trustrace_{VSM,Histrace} on two case studies to compare its traceability links with those recovered using only the VSM-based approach, in terms of precision and recall. We show that Trustrace_{VSM,Histrace} improves with statistical significance the precision of the traceability links while also improving recall, but without statistical significance. We thus show that our trust-based approach indeed improves precision and recall and also that CVS/SVN change logs are useful in the traceability recovery process.

    [Bibtex]

    @inproceedings{05970169,
    author = {Nasir Ali and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Trust-Based Requirements Traceability},
    booktitle = {ICPC},
    year = {2011},
    pages = {111-120},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICPC.2011.42},
    crossref = {DBLP:conf/iwpc/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2011/05970169.pdf},
    abstract = {Information retrieval (IR) approaches have proven useful in recovering traceability links between free-text documentation and source code. IR-based traceability recovery approaches produce ranked lists of traceability links between pieces of documentation and of source code. These traceability links are then pruned using various strategies and, finally, validated by human experts. In this paper we propose two contributions to improve the precision and recall of traceability links and, thus, reduces the required human experts' manual validation effort. First, we propose a novel approach, Trustrace, inspired by Web trust models to improve precision and recall of traceability links: Trustrace first uses any traceability recovery approach as the basis on which, second, it applies various experts' opinions to add, remove, and--or adjust the rankings of the traceability links. The experts can be human experts or other traceability recovery approaches. Second, we propose a novel traceability recovery approach, Histrace, to identify traceability links between requirements and source code through CVS/SVN change logs using a Vector Space Model (VSM). We combine a traditional recovery traceability approach with Histrace to build Trustrace VSM, Histrace in which we use Histrace as one expert commenting the traceability links recovered using the VSM-based approach. We apply TrustraceVSM, Histrace on two case studies to compare its traceability links with those recovered using only the VSM-based approach, in terms of precision and recall. We show that Trustrace VSM, Histrace improves with statistical significance the precision of the traceability links while also improving recall but without statistical significance. We thus show that our trust-based approach indeed improves precision and recall and also that CVS/SVN change logs are useful in the traceability recovery process.},
    }
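    A hedged sketch of the re-ranking idea described above: links produced by a base IR technique are re-weighted according to how many other "experts" (e.g., a change-log-based technique) also report them. The weighting formula and the data are invented for illustration and are not the Trustrace model from the paper:

    def rerank(base_links, expert_links, expert_weight=0.5):
        """base_links: {(requirement, code_artifact): IR similarity in [0, 1]}.
        expert_links: list of sets of links reported by the other experts.
        Returns links re-scored and sorted by decreasing trust."""
        scored = {}
        for link, sim in base_links.items():
            votes = sum(1 for expert in expert_links if link in expert)
            # Boost links confirmed by experts, dampen unconfirmed ones.
            trust = sim * (1.0 + expert_weight * votes) / (
                1.0 + expert_weight * len(expert_links))
            scored[link] = trust
        return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

    if __name__ == "__main__":
        base = {("REQ-1", "Login.java"): 0.70,
                ("REQ-1", "Utils.java"): 0.65,
                ("REQ-2", "Cart.java"): 0.40}
        histrace_expert = {("REQ-1", "Login.java"), ("REQ-2", "Cart.java")}
        for link, trust in rerank(base, [histrace_expert]):
            print(link, round(trust, 3))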
  • [PDF] N. Ali, Y. Guéhéneuc, and G. Antoniol, “Requirements traceability for object oriented systems by partitioning source code,” in Wcre, 2011, pp. 45-54.
    [Abstract]

    Requirements traceability ensures that source code is consistent with documentation and that all requirements have been implemented. During software evolution, as features are added, removed, or modified, the code drifts away from its original requirements. Thus, traceability recovery approaches become necessary to re-establish the traceability relations between requirements and source code. This paper presents an approach (Coparvo) complementary to existing traceability recovery approaches for object-oriented programs. Coparvo reduces the false positive links recovered by traditional traceability recovery processes, thus reducing the manual validation effort. Coparvo assumes that information extracted from different entities (i.e., class names, comments, class variables, or method signatures) represents different information sources, that these sources may have different levels of reliability for requirements traceability, and that each information source may act as a different expert recommending traceability links. We applied Coparvo on three data sets, Pooka, SIP Communicator, and iTrust, to filter out false positive links recovered via the information retrieval approach, i.e., the vector space model. The results show that Coparvo significantly improves the accuracy of the recovered links and also reduces by up to 83% the effort required to manually remove false positive links.

    [Bibtex]

    @inproceedings{06079774,
    author = {Nasir Ali and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Requirements Traceability for Object Oriented Systems by Partitioning Source Code},
    booktitle = {WCRE},
    year = {2011},
    pages = {45-54},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WCRE.2011.16},
    crossref = {DBLP:conf/wcre/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Requirements trace ability ensures that source code is consistent with documentation and that all requirements have been implemented. During software evolution, features are added, removed, or modified, the code drifts away from its original requirements. Thus trace ability recovery approaches becomes necessary to re-establish the trace ability relations between requirements and source code. This paper presents an approach (Coparvo) complementary to existing trace ability recovery approaches for object-oriented programs. Coparvo reduces false positive links recovered by traditional trace ability recovery processes thus reducing the manual validation effort. Coparvo assumes that information extracted from different entities (i.e., class names, comments, class variables, or methods signatures) are different information sources, they may have different level of reliability in requirements trace ability and each information source may act as a different expert recommending trace ability links. We applied Coparvo on three data sets, Pooka, SIP Communicator, and iTrust, to filter out false positive links recovered via the information retrieval approach, i.e., vector space model. The results show that Coparvo significantly improves the of the recovered links accuracy and also reduces up to 83% effort required to manually remove false positive links.
    },
    pdf = {2011/06079774.pdf},
    }
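    A minimal sketch of the voting intuition described above, assuming invented information sources, a top-k vote rule, and toy data; the actual Coparvo partitioning and reliability model are not reproduced here:

    def filter_links(candidate_links, source_rankings, min_votes=2, top_k=2):
        """candidate_links: (requirement, class) pairs from the base IR technique.
        source_rankings: {information source: ranked list of links}.
        A source "votes" for a link when the link is in its top_k results;
        links with fewer than min_votes votes are discarded as likely false
        positives."""
        kept = []
        for link in candidate_links:
            votes = sum(1 for ranking in source_rankings.values()
                        if link in ranking[:top_k])
            if votes >= min_votes:
                kept.append(link)
        return kept

    if __name__ == "__main__":
        candidates = [("REQ-1", "Login"), ("REQ-1", "Logger")]
        rankings = {
            "class_names":  [("REQ-1", "Login"), ("REQ-1", "Logger")],
            "comments":     [("REQ-1", "Login")],
            "method_names": [("REQ-1", "Login"), ("REQ-2", "Cart")],
        }
        print(filter_links(candidates, rankings))  # [('REQ-1', 'Login')]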
  • [PDF] N. Bhattacharya, A. Sakti, G. Antoniol, Y. Guéhéneuc, and G. Pesant, “Divide-by-zero exception raising via branch coverage,” in Ssbse, 2011, pp. 204-218.
    [Abstract]

    In this paper, we discuss how a search-based branch coverage approach can be used to design an effective test data generation approach, specifically targeting divide-by-zero exceptions. We first propose a novel testability transformation combining approach level and branch distance. We then use different search strategies, i.e., hill climbing, simulated annealing, and genetic algorithm, to evaluate the performance of the novel testability transformation on a small synthetic example as well as on methods known to throw divide-by-zero exceptions, extracted from real world systems, namely Eclipse and Android. Finally, we also describe how the test data generation for divide-by-zero exceptions can be formulated as a constraint programming problem and compare the resolution of this problem with a genetic algorithm in terms of execution time. We thus report evidence that genetic algorithm using our novel testability transformation out-performs hill climbing and simulated annealing and a previous approach (in terms of numbers of fitness evaluation) but is out-performed by constraint programming (in terms of execution time).

    [Bibtex]

    @inproceedings{chp3A1010072F978364223716419,
    author = {Neelesh Bhattacharya and Abdelilah Sakti and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Gilles Pesant},
    title = {Divide-by-Zero Exception Raising via Branch Coverage},
    booktitle = {SSBSE},
    year = {2011},
    pages = {204-218},
    ee = {http://dx.doi.org/10.1007/978-3-642-23716-4_19},
    crossref = {DBLP:conf/ssbse/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    In this paper, we discuss how a search-based branch coverage approach can be used to design an effective test data generation approach, specifically targeting divide-by-zero exceptions. We first propose a novel testability transformation combining approach level and branch distance. We then use different search strategies, i.e., hill climbing, simulated annealing, and genetic algorithm, to evaluate the performance of the novel testability transformation on a small synthetic example as well as on methods known to throw divide-by-zero exceptions, extracted from real world systems, namely Eclipse and Android. Finally, we also describe how the test data generation for divide-by-zero exceptions can be formulated as a constraint programming problem and compare the resolution of this problem with a genetic algorithm in terms of execution time. We thus report evidence that genetic algorithm using our novel testability transformation out-performs hill climbing and simulated annealing and a previous approach (in terms of numbers of fitness evaluation) but is out-performed by constraint programming (in terms of execution time).
    },
    pdf = {2011/chp3A1010072F978364223716419.pdf},
    }
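    For intuition only, a small sketch of the standard approach-level plus normalised branch-distance fitness that search-based testing uses to steer inputs towards a division; the toy function under test and the constants are invented, and the paper's testability transformation is not reproduced:

    def normalised(d):
        # Map a raw branch distance into [0, 1): larger distances approach 1.
        return d / (d + 1.0)

    def fitness(x, y):
        """Fitness for forcing a divide-by-zero in this toy function:

            def f(x, y):
                if x > 10:          # branch on the path to the division
                    return 100 / y  # raises ZeroDivisionError when y == 0

        Lower is better; 0.0 means an exception-raising input was found."""
        if x > 10:
            # Target branch reached: approach level 0 plus distance to y == 0.
            return 0.0 + normalised(abs(y))
        # Branch missed: approach level 1 plus distance to satisfy x > 10.
        return 1.0 + normalised(max(0, 10 - x + 1))

    if __name__ == "__main__":
        print(fitness(3, 7))   # far from the branch and from y == 0
        print(fitness(12, 7))  # branch reached, y still non-zero
        print(fitness(12, 0))  # 0.0 -> divide-by-zero input found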
  • [PDF] F. Jaafar, Y. Guéhéneuc, S. Hamel, and G. Antoniol, “An exploratory study of macro co-changes,” in Wcre, 2011, pp. 325-334.
    [Abstract]

    The literature describes several approaches to identify the artefacts of programs that change together to reveal the (hidden) dependencies among these artefacts. These approaches analyse historical data, mined from version control systems, and report co-changing artefacts, which hint at the causes, consequences, and actors of the changes. We introduce the novel concepts of macro co-changes (MCC), i.e., of artefacts that co-change within a large time interval, and of dephase macro co-changes (DMCC), i.e., macro co-changes that always happen with the same shifts in time. We describe typical scenarios of MCC and DMCC and we use the Hamming distance to detect approximate occurrences of MCC and DMCC. We present our approach, Macocha, to identify these concepts in large programs. We apply Macocha and compare it in terms of precision and recall with UMLDiff (file stability) and association rules (co-changing files) on four systems: ArgoUML, FreeBSD, SIP, and XalanC. We also use external information to validate the (approximate) MCC and DMCC found by Macocha. We thus answer two research questions showing the existence and usefulness of these concepts and explaining scenarios of hidden dependencies among artefacts.

    [Bibtex]

    @inproceedings{06079858,
    author = {Fehmi Jaafar and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Sylvie Hamel and Giuliano Antoniol},
    title = {An Exploratory Study of Macro Co-changes},
    booktitle = {WCRE},
    year = {2011},
    pages = {325-334},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WCRE.2011.47},
    crossref = {DBLP:conf/wcre/2011},
    abstract = {
    The literature describes several approaches to identify the artefacts of programs that change together to reveal the (hidden) dependencies among these artefacts. These approaches analyse historical data, mined from version control systems, and report co-changing artefacts, which hint at the causes, consequences, and actors of the changes. We introduce the novel concepts of macro co-changes (MCC), i.e., of artefacts that co-change within a large time interval, and of dephase macro co-changes (DMCC), i.e., macro co-changes that always happen with the same shifts in time. We describe typical scenarios of MCC and DMCC and we use the Hamming distance to detect approximate occurrences of MCC and DMCC. We present our approach, Macocha, to identify these concepts in large programs. We apply Macocha and compare it in terms of precision and recall with UML Diff (file stability) and association rules (co-changing files) on four systems: Argo UML, Free BSD, SIP, and XalanC. We also use external information to validate the (approximate) MCC and DMCC found by Macocha. We thus answer two research questions showing the existence and usefulness of theses concepts and explaining scenarios of hidden dependencies among artefacts.
    },
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2011/06079858.pdf},
    }
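    A minimal sketch of comparing per-file change histories with the Hamming distance, as hinted at in the abstract above; the binary change vectors, the window granularity, and the mismatch threshold are illustrative assumptions, and the handling of dephased (time-shifted) co-changes is omitted:

    def hamming(a, b):
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b))

    def approximate_cochanges(histories, max_mismatches=1):
        """histories: {file name: [0/1 flag per time window]}.
        Returns file pairs whose change vectors differ in at most
        max_mismatches windows."""
        files = sorted(histories)
        pairs = []
        for i, f in enumerate(files):
            for g in files[i + 1:]:
                if hamming(histories[f], histories[g]) <= max_mismatches:
                    pairs.append((f, g))
        return pairs

    if __name__ == "__main__":
        hist = {
            "Parser.java": [1, 0, 1, 1, 0, 1],
            "Lexer.java":  [1, 0, 1, 0, 0, 1],  # one mismatch with Parser.java
            "Gui.java":    [0, 1, 0, 0, 1, 0],
        }
        print(approximate_cochanges(hist))  # [('Lexer.java', 'Parser.java')]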
  • [PDF] A. Belderrar, S. Kpodjedo, Y. Guéhéneuc, G. Antoniol, and P. Galinier, “Sub-graph mining: identifying micro-architectures in evolving object-oriented software,” in Csmr, 2011, pp. 171-180.
    [Abstract]

    Developers introduce novel and undocumented micro-architectures when performing evolution tasks on object-oriented applications. We are interested in understanding whether those organizations of classes and relations can bear, much like cataloged design and anti-patterns, potential harm or benefit to an object-oriented application. We present SGFinder, a sub-graph mining approach and tool based on an efficient enumeration technique to identify recurring micro-architectures in object-oriented class diagrams. Once SGFinder has detected instances of micro-architectures, we exploit these instances to identify their desirable properties, such as stability, or unwanted properties, such as change or fault proneness. We perform a feasibility study of our approach by applying SGFinder on the reverse-engineered class diagrams of several releases of two Java applications: ArgoUML and Rhino. We characterize and highlight some of the most interesting micro-architectures, e.g., the most fault prone and the most stable, and conclude that SGFinder opens the way to further interesting studies.

    [Bibtex]

    @inproceedings{05741259,
    author = {Ahmed Belderrar and Segla Kpodjedo and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol and Philippe Galinier},
    title = {Sub-graph Mining: Identifying Micro-architectures in Evolving Object-Oriented Software},
    booktitle = {CSMR},
    year = {2011},
    pages = {171-180},
    ee = {http://dx.doi.org/10.1109/CSMR.2011.23},
    crossref = {DBLP:conf/csmr/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Developers introduce novel and undocumented micro-architectures when performing evolution tasks on object-oriented applications. We are interested in understanding whether those organizations of classes and relations can bear, much like cataloged design and anti-patterns, potential harm or benefit to an object-oriented application. We present SGFinder, a sub-graph mining approach and tool based on an efficient enumeration technique to identify recurring micro-architectures in object-oriented class diagrams. Once SGFinder has detected instances of micro-architectures, we exploit these instances to identify their desirable properties, such as stability, or unwanted properties, such as change or fault proneness. We perform a feasibility study of our approach by applying SGFinder on the reverse-engineered class diagrams of several releases of two Java applications: ArgoUML and Rhino. We characterize and highlight some of the most interesting micro-architectures, e.g., the most fault prone and the most stable, and conclude that SGFinder opens the way to further interesting studies.
    },
    pdf = {2011/05741259.pdf},
    }
  • [PDF] B. Dit, L. Guerrouj, D. Poshyvanyk, and G. Antoniol, “Can better identifier splitting techniques help feature location?,” in Icpc, 2011, pp. 11-20.
    [Abstract]

    The paper presents an exploratory study of two feature location techniques utilizing three strategies for splitting identifiers: CamelCase, Samurai and manual splitting of identifiers. The main research question that we ask in this study is if we had a perfect technique for splitting identifiers, would it still help improve accuracy of feature location techniques applied in different scenarios and settings? In order to answer this research question we investigate two feature location techniques, one based on Information Retrieval and the other one based on the combination of Information Retrieval and dynamic analysis, for locating bugs and features using various configurations of preprocessing strategies on two open-source systems, Rhino and jEdit. The results of an extensive empirical evaluation reveal that feature location techniques using Information Retrieval can benefit from better preprocessing algorithms in some cases, and that their improvement in effectiveness while using manual splitting over state-of-the-art approaches is statistically significant in those cases. However, the results for feature location technique using the combination of Information Retrieval and dynamic analysis do not show any improvement while using manual splitting, indicating that any preprocessing technique will suffice if execution data is available. Overall, our findings outline potential benefits of putting additional research efforts into defining more sophisticated source code preprocessing techniques as they can still be useful in situations where execution information cannot be easily collected.

    [Bibtex]

    @inproceedings{05970159,
    author = {Bogdan Dit and Latifa Guerrouj and Denys Poshyvanyk and Giuliano Antoniol},
    title = {Can Better Identifier Splitting Techniques Help Feature Location?},
    booktitle = {ICPC},
    year = {2011},
    pages = {11-20},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICPC.2011.47},
    crossref = {DBLP:conf/iwpc/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2011/05970159.pdf},
    abstract = {The paper presents an exploratory study of two feature location techniques utilizing three strategies for splitting identifiers: CamelCase, Samurai and manual splitting of identifiers. The main research question that we ask in this study is if we had a perfect technique for splitting identifiers, would it still help improve accuracy of feature location techniques applied in different scenarios and settings? In order to answer this research question we investigate two feature location techniques, one based on Information Retrieval and the other one based on the combination of Information Retrieval and dynamic analysis, for locating bugs and features using various configurations of preprocessing strategies on two open-source systems, Rhino and jEdit. The results of an extensive empirical evaluation reveal that feature location techniques using Information Retrieval can benefit from better preprocessing algorithms in some cases, and that their improvement in effectiveness while using manual splitting over state-of-the-art approaches is statistically significant in those cases. However, the results for feature location technique using the combination of Information Retrieval and dynamic analysis do not show any improvement while using manual splitting, indicating that any preprocessing technique will suffice if execution data is available. Overall, our findings outline potential benefits of putting additional research efforts into defining more sophisticated source code preprocessing techniques as they can still be useful in situations where execution information cannot be easily collected.},
    }
  • [PDF] S. Hassaine, F. Boughanmi, Y. Guéhéneuc, S. Hamel, and G. Antoniol, “Change impact analysis: an earthquake metaphor,” in Icpc, 2011, pp. 209-210.
    [Abstract]

    Impact analysis is crucial to make decisions among different alternative implementations and to anticipate future maintenance tasks. Several approaches were proposed to identify software artefacts being affected by a change. However, to the best of our knowledge, none of these approaches have been used to study the scope of changes in a program. Yet, this information would help developers assess their change efforts and perform more adequate changes. Thus, we present a metaphor inspired by seismology and propose a mapping between the concepts of seismology and software evolution. We show the applicability and usefulness of our metaphor using Rhino and Xerces-J.

    [Bibtex]

    @inproceedings{05970184,
    author = {Salima Hassaine and Ferdaous Boughanmi and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Sylvie Hamel and Giuliano Antoniol},
    title = {Change Impact Analysis: An Earthquake Metaphor},
    booktitle = {ICPC},
    year = {2011},
    pages = {209-210},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICPC.2011.54},
    crossref = {DBLP:conf/iwpc/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Impact analysis is crucial to make decisions among different alternative implementations and to anticipate future maintenance tasks. Several approaches were proposed to identify software artefacts being affected by a change. However, to the best of our knowledge, none of these approaches have been used to study the scope of changes in a program. Yet, this information would help developers assess their change efforts and perform more adequate changes. Thus, we present a metaphor inspired by seismology and propose a mapping between the concepts of seismology and software evolution. We show the applicability and usefulness of our metaphor using Rhino and Xerces-J.
    },
    pdf = {2011/05970184.pdf},
    }
  • M. Abbes, F. Khomh, Y. Guéhéneuc, and G. Antoniol, “An empirical study of the impact of two antipatterns, blob and spaghetti code, on program comprehension,” in Csmr, 2011, pp. 181-190.
    [Bibtex]
    @inproceedings{conf/csmr/AbbesKGA11,
    author = {Marwen Abbes and Foutse Khomh and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {An Empirical Study of the Impact of Two Antipatterns, Blob and Spaghetti Code, on Program Comprehension},
    booktitle = {CSMR},
    year = {2011},
    pages = {181-190},
    ee = {http://dx.doi.org/10.1109/CSMR.2011.24},
    crossref = {DBLP:conf/csmr/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • D. Romano, M. D. Penta, and G. Antoniol, “An approach for search based testing of null pointer exceptions,” in Icst, 2011, pp. 160-169.
    [Abstract]

    Uncaught exceptions, and in particular null pointer exceptions (NPEs), constitute a major cause of crashes for software systems. Although tools for the static identification of potential NPEs exist, there is a need for approaches able to identify system execution scenarios causing NPEs. This paper proposes a search-based test data generation approach aimed at automatically identifying NPEs. The approach consists of two steps: (i) an inter-procedural data and control flow analysis, relying on existing technology, that identifies paths between input parameters and potential NPEs, and (ii) a genetic algorithm that evolves a population of test data with the aim of covering such paths. The algorithm is able to deal with complex inputs containing arbitrary data structures. The approach has been evaluated on test class clusters from six Java open source systems, where NPE bugs have been artificially introduced. Results show that the approach is, indeed, able to identify the NPE bugs, and it outperforms random testing. Also, we show how the approach is able to identify real NPE bugs, some of which are posted in the bug-tracking system of the Apache libraries.

    [Bibtex]

    @inproceedings{conf/icst/RomanoPA11,
    author = {Daniele Romano and Massimiliano Di Penta and Giuliano Antoniol},
    title = {An Approach for Search Based Testing of Null Pointer Exceptions},
    booktitle = {ICST},
    year = {2011},
    pages = {160-169},
    ee = {http://dx.doi.org/10.1109/ICST.2011.49, http://doi.ieeecomputersociety.org/10.1109/ICST.2011.49},
    crossref = {DBLP:conf/icst/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Uncaught exceptions, and in particular null pointer exceptions (NPEs), constitute a major cause of crashes for software systems. Although tools for the s tatic identification of potential NPEs exist, there is need for proper approaches able to identify system execution scenarios causing NPEs. This paper proposes a search-based test data generation approach aimed at automatically identify NPEs. The approach consists of two steps: (i) an inter-p rocedural data and control flow analysis, relying on existing technology,that identifies paths between input parameters and potential NPEs, and (ii) a genetic algorithm that evolves a population of test data with the aim of covering such paths. The algorithm is able to deal with complex inputs containi ng arbitrary data structures. The approach has been evaluated on to test class clusters from six Java open source systems, where NPE bugs have been artificially introduced. Results sh ow that the approach is, indeed, able to identify the NPE bugs, and it outperforms random testing. Also, we show how the approach is able to identify rea l NPE bugs some of which are posted in the bug-tracking system of the Apache libraries.},
    }
  • N. Ali, W. Wu, G. Antoniol, M. D. Penta, Y. Guéhéneuc, and J. H. Hayes, “Moms: multi-objective miniaturization of software,” in Icsm, 2011, pp. 153-162.
    [Bibtex]
    @inproceedings{conf/icsm/AliWAPGH11,
    author = {Nasir Ali and Wei Wu and Giuliano Antoniol and Massimiliano Di Penta and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Jane Huffman Hayes},
    title = {MoMS: Multi-objective miniaturization of software},
    booktitle = {ICSM},
    year = {2011},
    pages = {153-162},
    ee = {http://dx.doi.org/10.1109/ICSM.2011.6080782},
    crossref = {DBLP:conf/icsm/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • L. M. Eshkevari, V. Arnaoudova, M. D. Penta, R. Oliveto, Y. Guéhéneuc, and G. Antoniol, “An exploratory study of identifier renamings,” in Msr, 2011, pp. 33-42.
    [Bibtex]
    @inproceedings{conf/msr/EshkevariAPOGA11,
    author = {Laleh Mousavi Eshkevari and Venera Arnaoudova and Massimiliano Di Penta and Rocco Oliveto and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {An exploratory study of identifier renamings},
    booktitle = {MSR},
    year = {2011},
    pages = {33-42},
    ee = {http://doi.acm.org/10.1145/1985441.1985449},
    crossref = {DBLP:conf/msr/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • S. Hassaine, F. Boughanmi, Y. Guéhéneuc, S. Hamel, and G. Antoniol, “A seismology-inspired approach to study change propagation,” in Icsm, 2011, pp. 53-62.
    [Bibtex]
    @inproceedings{conf/icsm/HassaineBGHA11,
    author = {Salima Hassaine and Ferdaous Boughanmi and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Sylvie Hamel and Giuliano Antoniol},
    title = {A seismology-inspired approach to study change propagation},
    booktitle = {ICSM},
    year = {2011},
    pages = {53-62},
    ee = {http://dx.doi.org/10.1109/ICSM.2011.6080772},
    crossref = {DBLP:conf/icsm/2011},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

2010

  • [PDF] M. D. Penta, D. M. Germán, Y. Guéhéneuc, and G. Antoniol, “An exploratory study of the evolution of software licensing,” in Icse (1), 2010, pp. 145-154.
    [Abstract]

    Free and open source software (FOSS) is distributed and made available to users under different software licenses, mentioned in FOSS code by means of licensing statements. Various factors, such as changes in the legal landscape, commercial code licensed as FOSS, or code reused from other FOSS systems, lead to evolution of licensing, which may affect the way a system or part of it can be subsequently used. Therefore, it is crucial to monitor licensing evolution. However, manually tracking the licensing evolution of thousands of files is a daunting task. After presenting several cases about the effects of licensing evolution, we argue that developers and system integrators must monitor licensing evolution and that they need an automatic approach due to the sheer size of FOSS. We propose an approach to automatically track changes occurring in the licensing terms of a system and report an empirical study of the licensing evolution of six different FOSS systems. Results show that licensing underwent frequent and substantial changes.

    [Bibtex]

    @inproceedings{p145-di_penta,
    author = {Massimiliano Di Penta and Daniel M. Germ{\'a}n and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {An exploratory study of the evolution of software licensing},
    booktitle = {ICSE (1)},
    year = {2010},
    pages = {145-154},
    ee = {http://doi.acm.org/10.1145/1806799.1806824},
    crossref = {DBLP:conf/icse/2010-1},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2010/p145-di_penta.pdf},
    abstract = {Free and open source software (FOSS) is distributed and made available to users under different software licenses, mentioned in FOSS code by means of licensing statements. Various factors, such as changes in the legal landscape, commercial code licensed as FOSS, or code reused from other FOSS systems, lead to evolution of licensing, which may affect the way a system or part of it can be subsequently used. Therefore, it is crucial to monitor licensing evolution. However, manually tracking the licensing evolution of thousands of files is a daunting task. After presenting several cases about the effects of licensing evolution, we argue that developers and system integrators must monitor licensing evolution and they need an automatic approach due of the sheer size of FOSS. We propose an approach to automatically track changes occurring in the licensing terms of a system and report an empirical study of the licensing evolution of six different FOSS systems. Results show that licensing underwent frequent and substantial changes.},
    }
  • [PDF] N. Haderer, F. Khomh, and G. Antoniol, “Squaner: a framework for monitoring the quality of software systems,” in Icsm, 2010, pp. 1-4.
    [Abstract]

    Despite the large number of quality models and publicly available quality assessment tools like PMD, Checkstyle, or FindBugs, very few studies have investigated the use of quality models by developers in their daily activities. One reason for this lack of studies is the absence of integrated environments for monitoring the evolution of software quality. We propose SQUANER (Software QUality ANalyzER), a framework for monitoring the evolution of the quality of object-oriented systems. SQUANER connects directly to the SVN repository of a system, extracts the source code, and performs quality evaluations and fault predictions every time a commit is made by a developer. After quality analysis, feedback is provided to developers with instructions on how to improve their code.

    [Bibtex]

    @inproceedings{05609684,
    author = {Nicolas Haderer and Foutse Khomh and Giuliano Antoniol},
    title = {SQUANER: A framework for monitoring the quality of software systems},
    booktitle = {ICSM},
    year = {2010},
    pages = {1-4},
    ee = {http://dx.doi.org/10.1109/ICSM.2010.5609684},
    crossref = {DBLP:conf/icsm/2010},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Despite the large number of quality models and publicly available quality assessment tools like PMD, Checkstyle, or FindBugs, very few studies have investigated the use of quality models by developers in their daily activities. One reason for this lack of studies is the absence of integrated environments for monitoring the evolution of software quality. We propose SQUANER (Software QUality ANalyzER), a framework for monitoring the evolution of the quality of object-oriented systems. SQUANER connects directly to the SVN of a system, extracts the source code, and perform quality evaluations and faults predictions every time a commit is made by a developer. After quality analysis, a feedback is provided to developers with instructions on how to improve their code.
    },
    pdf = {2010/05609684.pdf},
    }
  • [PDF] F. Asadi, M. D. Penta, G. Antoniol, and Y. Guéhéneuc, “A heuristic-based approach to identify concepts in execution traces,” in Csmr, 2010, pp. 31-40.
    [Abstract]

    Concept or feature identification, i.e., the identification of the source code fragments implementing a particular feature, is a crucial task during software understanding and maintenance. This paper proposes an approach to identify concepts in execution traces by finding cohesive and decoupled fragments of the traces. The approach relies on search-based optimization techniques, textual analysis of the system source code using latent semantic indexing, and trace compression techniques. It is evaluated to identify features from execution traces of two open source systems from different domains, JHotDraw and ArgoUML. Results show that the approach is always able to identify trace segments implementing concepts with a high precision and, for highly cohesive concepts, with a high overlap with the manually-built oracle.

    [Bibtex]

    @inproceedings{05714415,
    author = {Fatemeh Asadi and Massimiliano Di Penta and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {A Heuristic-Based Approach to Identify Concepts in Execution Traces},
    booktitle = {CSMR},
    year = {2010},
    pages = {31-40},
    ee = {http://dx.doi.org/10.1109/CSMR.2010.17},
    crossref = {DBLP:conf/csmr/2010},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Concept or feature identification, i.e., the identification of the source code fragments implementing a particular feature, is a crucial task during software understanding and maintenance. This paper proposes an approach to identify concepts in execution traces by finding cohesive and decoupled fragments of the traces. The approach relies on search-based optimization techniques, textual analysis of the system source code using latent semantic indexing, and trace compression techniques. It is evaluated to identify features from execution traces of two open source systems from different domains, JHotDraw and ArgoUML. Results show that the approach is always able to identify trace segments implementing concepts with a high precision and, for highly cohesive concepts, with a high overlap with the manually-built oracle.
    },
    pdf = {2010/05714415.pdf},
    }
  • [PDF] M. D. Penta, D. M. Germán, and G. Antoniol, “Identifying licensing of jar archives using a code-search approach,” in Msr, 2010, pp. 151-160.
    [Abstract]

    Free and open source software strongly promotes the reuse of source code. Some open source Java components/libraries are distributed as jar archives containing only the bytecode and some additional information. For whoever wants to integrate this jar in her own project, it is important to determine the license(s) of the code from which the jar archive was produced, as this affects the way that such a component can be used. This paper proposes an automatic approach to determine the license of jar archives, combining the use of a code-search engine with the automatic classification of licenses contained in textual files enclosed in the jar. Results of an empirical study performed on 37 jars – from 17 different systems – indicate that this approach is able to successfully infer the jar licenses in over 95% of the cases, but that in many cases the license in textual files may differ from that of the classes contained in the jar.

    [Bibtex]

    @inproceedings{05463282,
    author = {Massimiliano Di Penta and Daniel M. Germ{\'a}n and Giuliano Antoniol},
    title = {Identifying licensing of jar archives using a code-search approach},
    booktitle = {MSR},
    year = {2010},
    pages = {151-160},
    ee = {http://dx.doi.org/10.1109/MSR.2010.5463282},
    crossref = {DBLP:conf/msr/2010},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2010/05463282.pdf},
    abstract = {Free and open source software strongly promotes the reuse of source code. Some open source Java components/libraries are distributed as jar archives only containing the bytecode and some additional information. For whoever wanting to integrate this jar in her own project, it is important to determine the license(s) of the code from which the jar archive was produced, as this affects the way that such component can be used. This paper proposes an automatic approach to determine the license of jar archives, combining the use of a code-search engine with the automatic classification of licenses contained in textual flies enclosed in the jar. Results of an empirical study performed on 37 jars - from 17 different systems - indicate that this approach is able to successfully infer the jar licenses in over 95 \% of the cases, but that in many cases the license in textual flies may differ from the one of the classes contained in the jar.},
    }
  • [PDF] R. Oliveto, F. Khomh, G. Antoniol, and Y. Guéhéneuc, “Numerical signatures of antipatterns: an approach based on b-splines,” in Csmr, 2010, pp. 248-251.
    [Abstract]

    Antipatterns are poor object-oriented solutions to recurring design problems. The identification of occurrences of antipatterns in systems has received recently some attention but current approaches have two main limitations: either (1) they classify classes strictly as being or not antipatterns, and thus cannot report accurate information for borderline classes, or (2) they return the probabilities of classes to be antipatterns but they require an expensive tuning by experts to have acceptable accuracy. To mitigate such limitations, we introduce a new identification approach, ABS (Antipattern identification using B-Splines), based on a similarity computed via a numerical analysis technique using B-splines. We illustrate our approach on the Blob and compare it with DECOR, which uses strict thresholds, and with another approach based on Bayesian Beliefs Networks. We show that our approach generally outperforms previous approaches in terms of accuracy.

    [Bibtex]

    @inproceedings{05714444,
    author = {Rocco Oliveto and Foutse Khomh and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {Numerical Signatures of Antipatterns: An Approach Based on B-Splines},
    booktitle = {CSMR},
    year = {2010},
    pages = {248-251},
    ee = {http://dx.doi.org/10.1109/CSMR.2010.47},
    crossref = {DBLP:conf/csmr/2010},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2010/05714444.pdf},
    abstract = {Antipatterns are poor object-oriented solutions to recurring design problems. The identification of occurrences of antipatterns in systems has received recently some attention but current approaches have two main limitations: either (1) they classify classes strictly as being or not antipatterns, and thus cannot report accurate information for borderline classes, or (2) they return the probabilities of classes to be antipatterns but they require an expensive tuning by experts to have acceptable accuracy. To mitigate such limitations, we introduce a new identification approach, ABS (Antipattern identification using B-Splines), based on a similarity computed via a numerical analysis technique using B-splines. We illustrate our approach on the Blob and compare it with DECOR, which uses strict thresholds, and with another approach based on Bayesian Beliefs Networks. We show that our approach generally outperforms previous approaches in terms of accuracy.},
    }
  • [PDF] W. Wu, Y. Guéhéneuc, G. Antoniol, and M. Kim, “Aura: a hybrid approach to identify framework evolution,” in Icse (1), 2010, pp. 325-334.
    [Abstract]

    Software frameworks and libraries are indispensable to today’s software systems. As they evolve, it is often time-consuming for developers to keep their code up-to-date, so approaches have been proposed to facilitate this. Usually, these approaches cannot automatically identify change rules for one-replaced-by-many and many-replaced-by-one methods, and they trade off recall for higher precision using one or more experimentally-evaluated thresholds. We introduce AURA, a novel hybrid approach that combines call dependency and text similarity analyses to overcome these limitations. We implement it in a Java system and compare it on five frameworks with three previous approaches by Dagenais and Robillard, M. Kim et al., and Schäfer et al. The comparison shows that, on average, the recall of AURA is 53.07% higher while its precision is similar, e.g., 0.10% lower.

    [Bibtex]

    @inproceedings{p325-wu,
    author = {Wei Wu and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol and Miryung Kim},
    title = {AURA: a hybrid approach to identify framework evolution},
    booktitle = {ICSE (1)},
    year = {2010},
    pages = {325-334},
    ee = {http://doi.acm.org/10.1145/1806799.1806848},
    crossref = {DBLP:conf/icse/2010-1},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2010/p325-wu.pdf},
    abstract = {Software frameworks and libraries are indispensable to to- day’s software systems. As they evolve, it is often time- consuming for developers to keep their code up-to-date, so approaches have been proposed to facilitate this. Usually, these approaches cannot automatically identify change rules for one-replaced-by-many and many-replaced-by-one meth- ods, and they trade off recall for higher precision using one or more experimentally-evaluated thresholds. We introduce AURA, a novel hybrid approach that combines call depen- dency and text similarity analyses to overcome these limita- tions. We implement it in a Java system and compare it on five frameworks with three previous approaches by Dagenais and Robillard, M. Kim et al., and Sch ̈fer et al. The compar- a ison shows that, on average, the recall of AURA is 53.07 \% higher while its precision is similar, e.g., 0.10 \% lower.},
    }
  • [PDF] G. Bavota, R. Oliveto, A. D. Lucia, G. Antoniol, and Y. Guéhéneuc, “Playing with refactoring: identifying extract class opportunities through game theory,” in Icsm, 2010, pp. 1-5.
    [Abstract]

    In software engineering, developers must often find solutions to problems balancing competing goals, e.g., quality versus cost, time to market versus resources, or cohesion versus coupling. Finding a suitable balance between contrasting goals is often complex, and recommendation systems are useful to support developers and managers in performing such a complex task. We believe that contrasting goals can often be dealt with using game theory techniques. Indeed, game theory is successfully used in other fields, especially in economics, to mathematically propose solutions to strategic situations, in which an individual’s success in making choices depends on the choices of others. To demonstrate the applicability of game theory to software engineering and to understand its pros and cons, we propose an approach based on game theory that recommends extract-class refactoring opportunities. A preliminary evaluation inspired by mutation testing demonstrates the applicability and the benefits of the proposed approach.

    [Bibtex]

    @inproceedings{05609739,
    author = {Gabriele Bavota and Rocco Oliveto and Andrea De Lucia and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {Playing with refactoring: Identifying extract class opportunities through game theory},
    booktitle = {ICSM},
    year = {2010},
    pages = {1-5},
    ee = {http://dx.doi.org/10.1109/ICSM.2010.5609739},
    crossref = {DBLP:conf/icsm/2010},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2010/05609739.pdf},
    abstract = {In software engineering, developers must often find solutions to problems balancing competing goals, e.g., quality versus cost, time to market versus resources, or cohesion versus coupling. Finding a suitable balance between contrasting goals is often complex and recommendation systems are useful to support developers and managers in performing such a complex task. We believe that contrasting goals can be often dealt with game theory techniques. Indeed, game theory is successfully used in other fields, especially in economics, to mathematically propose solutions to strategic situation, in which an individual's success in making choices depends on the choices of others. To demonstrate the applicability of game theory to software engineering and to understand its pros and cons, we propose an approach based on game theory that recommend extract-class refactoring opportunities. A preliminary evaluation inspired by mutation testing demonstrates the applicability and the benefits of the proposed approach.},
    }
  • V. Arnaoudova, L. M. Eshkevari, R. Oliveto, Y. Guéhéneuc, and G. Antoniol, “Physical and conceptual identifier dispersion: measures and relation to fault proneness,” in Icsm, 2010, pp. 1-5.
    [Abstract]

    Poorly-chosen identifiers have been reported in the literature as misleading and increasing the program comprehension effort. Identifiers are composed of terms, which can be dictionary words, acronyms, contractions, or simple strings. We conjecture that the use of identical terms in different contexts may increase the risk of faults. We investigate our conjecture using a measure combining term entropy and term context-coverage to study whether certain terms increase the odds ratios of methods to be fault-prone. We compute term entropy and context-coverage in Rhino v1.4R3 and ArgoUML v0.16, and we show statistically that methods and attributes containing terms with high entropy and context-coverage are more fault-prone.

    [Bibtex]

    @inproceedings{conf/icsm/ArnaoudovaEOGA10,
    author = {Venera Arnaoudova and Laleh Mousavi Eshkevari and Rocco Oliveto and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Physical and conceptual identifier dispersion: Measures and relation to fault proneness},
    booktitle = {ICSM},
    year = {2010},
    pages = {1-5},
    ee = {http://dx.doi.org/10.1109/ICSM.2010.5609748},
    crossref = {DBLP:conf/icsm/2010},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Poorly-chosen identifiers have been reported in the literature as misleading and increasing the program comprehension effort. Identifiers are composed of terms, which can be dictionary words, acronyms, contractions, or simple strings. We conjecture that the use of identical terms in different contexts may increase the risk of faults. We investigate our conjecture using a measure combining term entropy and term context-coverage to study whether certain terms increase the odds ratios of methods to be fault-prone. We compute term entropy and context-coverage in Rhino v1.4R3 and ArgoUML v0.16, and we show statistically that methods and attributes containing terms with high entropy and context-coverage are more fault-prone.},
    }
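    A hedged sketch of the term-entropy part of the measure described above: a term spread evenly across many methods gets high entropy, a term concentrated in one method gets zero. The toy corpus is invented and the context-coverage component is omitted:

    import math

    def term_entropy(term, documents):
        """Entropy of `term`'s occurrence distribution over `documents`
        (each document being, e.g., the list of terms of one method)."""
        counts = [doc.count(term) for doc in documents]
        total = sum(counts)
        if total == 0:
            return 0.0
        probs = [c / total for c in counts if c > 0]
        return -sum(p * math.log2(p) for p in probs)

    if __name__ == "__main__":
        methods = [
            ["get", "name", "user"],
            ["set", "name", "user"],
            ["parse", "xml", "node"],
        ]
        print(term_entropy("name", methods))   # spread over two methods -> 1.0
        print(term_entropy("parse", methods))  # concentrated in one method -> 0.0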
  • S. Kpodjedo, P. Galinier, and G. Antoniol, “Enhancing a tabu algorithm for approximate graph matching by using similarity measures,” in Evocop, 2010, pp. 119-130.
    [Abstract]

    In this paper, we investigate heuristics to solve the Approximate Graph Matching (AGM) problem. We propose a tabu search algorithm which exploits a simple neighborhood but is initialized by a greedy procedure which uses a measure of similarity between the vertices of the two graphs. The algorithm is tested on a large collection of graphs of various sizes (from 300 vertices and up to 3000 vertices) and densities. Computing times range from less than 1 second up to a few minutes. The algorithm consistently obtains very good results, especially on labeled graphs. The results obtained by the tabu algorithm alone (without the greedy procedure) were very poor, illustrating the importance of using vertex similarity during the early steps of the search process.

    [Bibtex]

    @inproceedings{conf/evoW/KpodjedoGA10,
    author = {Segla Kpodjedo and Philippe Galinier and Giuliano Antoniol},
    title = {Enhancing a Tabu Algorithm for Approximate Graph Matching by Using Similarity Measures},
    booktitle = {EvoCOP},
    year = {2010},
    pages = {119-130},
    ee = {http://dx.doi.org/10.1007/978-3-642-12139-5_11},
    crossref = {DBLP:conf/evoW/2010cop},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {In this paper, we investigate heuristics in order to solve the Approximate Graph Matching problem (AGM). We propose a tabu search algorithm which exploits a simple neighborhood but is initialized by a greedy procedure which uses a measure of similarity between the vertices of the two graphs. The algorithm is tested on a large collection of graphs of various sizes (from 300 vertices up to 3000 vertices) and densities. Computing times range from less than 1 second up to a few minutes. The algorithm obtains consistently very good results, especially on labeled graphs. The results obtained by the tabu algorithm alone (without the greedy procedure) were very poor, illustrating the importance of using vertex similarity during the early steps of the search process.},
    }
  • N. Madani, L. Guerrouj, M. D. Penta, Y. Guéhéneuc, and G. Antoniol, “Recognizing words from source code identifiers using speech recognition techniques,” in Csmr, 2010, pp. 68-77.
    [Abstract]

    The existing software engineering literature has empirically shown that a proper choice of identifiers influences software understandability and maintainability. Researchers have noticed that identifiers are one of the most important sources of information about program entities and that the semantics of identifier components guides the cognitive process. Recognizing the words forming identifiers is not an easy task when naming conventions (e.g., Camel Case) are not used or strictly followed and–or when these words have been abbreviated or otherwise transformed. This paper proposes a technique inspired by speech recognition, dynamic time warping, to split identifiers into component words. The proposed technique has been applied to identifiers extracted from two different applications: JHotDraw and Lynx. Results compared with manually-built oracles and with Camel Case split are encouraging. In fact, they show that the technique successfully recognizes words composing identifiers (even when abbreviated) in about 90% of cases and that it performs better than Camel Case. Furthermore, it was even able to spot mistakes in the manually built oracle.

    [Bibtex]

    @inproceedings{conf/csmr/MadaniGPGA10,
    author = {Nioosha Madani and Latifa Guerrouj and Massimiliano Di Penta and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Recognizing Words from Source Code Identifiers Using Speech Recognition Techniques},
    booktitle = {CSMR},
    year = {2010},
    pages = {68-77},
    ee = {http://dx.doi.org/10.1109/CSMR.2010.31},
    crossref = {DBLP:conf/csmr/2010},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {The existing software engineering literature has empirically shown that a proper choice of identifiers influences software understandability and maintainability. Researchers have noticed that identifiers are one of the most important sources of information about program entities and that the semantics of identifier components guides the cognitive process. Recognizing the words forming identifiers is not an easy task when naming conventions (e.g., Camel Case) are not used or strictly followed and--or when these words have been abbreviated or otherwise transformed. This paper proposes a technique inspired by speech recognition, dynamic time warping, to split identifiers into component words. The proposed technique has been applied to identifiers extracted from two different applications: JHotDraw and Lynx. Results compared with manually-built oracles and with Camel Case split are encouraging. In fact, they show that the technique successfully recognizes words composing identifiers (even when abbreviated) in about 90\% of cases and that it performs better than Camel Case. Furthermore, it was even able to spot mistakes in the manually built oracle.},
    }
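    As a rough illustration of the alignment idea (not the paper's actual algorithm, whose costs, dictionary, and splitting strategy differ), the dynamic-programming sketch below scores how well a possibly abbreviated identifier fragment matches a dictionary word by letting word characters be skipped at a small cost:

    def align_cost(fragment, word, skip_cost=1.0, mismatch_cost=2.0):
        """DTW-like alignment cost between an identifier fragment and a word;
        characters of `word` may be skipped, which models abbreviations."""
        big = float("inf")
        dp = [[big] * (len(word) + 1) for _ in range(len(fragment) + 1)]
        dp[0][0] = 0.0
        for j in range(1, len(word) + 1):           # skip leading word characters
            dp[0][j] = dp[0][j - 1] + skip_cost
        for i in range(1, len(fragment) + 1):
            for j in range(1, len(word) + 1):
                sub = 0.0 if fragment[i - 1] == word[j - 1] else mismatch_cost
                dp[i][j] = min(dp[i - 1][j - 1] + sub,    # consume both characters
                               dp[i][j - 1] + skip_cost)  # skip a word character
        return dp[len(fragment)][len(word)]

    dictionary = ["counter", "pointer", "count"]
    print(min(dictionary, key=lambda w: align_cost("cntr", w)))  # prints 'counter'

    A splitter would then try candidate segmentations of an identifier and keep the one whose fragments have the lowest total alignment cost against the dictionary.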

2009

  • [PDF] S. Gueorguiev, M. Harman, and G. Antoniol, “Software project planning for robustness and completion time in the presence of uncertainty using multi objective search based software engineering,” in Gecco, 2009, pp. 1673-1680.
    [Abstract]

    All large-scale projects contain a degree of risk and uncertainty. Software projects are particularly vulnerable to overruns, due to this uncertainty and the inherent difficulty of software project cost estimation. In this paper we introduce a search based approach to software project robustness. The approach is to formulate this problem as a multi objective Search Based Software Engineering problem, in which robustness and completion time are treated as two competing objectives. The paper presents the results of the application of this new approach to four large real-world software projects, using two different models of uncertainty.

    [Bibtex]

    @inproceedings{p1673-gueorguiev,
    author = {Stefan Gueorguiev and Mark Harman and Giuliano Antoniol},
    title = {Software project planning for robustness and completion time in the presence of uncertainty using multi objective search based software engineering},
    booktitle = {GECCO},
    year = {2009},
    pages = {1673-1680},
    ee = {http://doi.acm.org/10.1145/1569901.1570125},
    crossref = {DBLP:conf/gecco/2009g},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2009/p1673-gueorguiev.pdf},
    abstract = {All large--scale projects contain a degree of risk and uncertainty. Software projects are particularly vulnerable to overruns, due to this uncertainty and the inherent difficulty of software project cost estimation. In this paper we introduce a search based approach to software project robustness. The approach is to formulate this problem as a multi objective Search Based Software Engineering problem, in which robustness and completion time are treated as two competing objectives. The paper presents the results of the application of this new approach to four large real--world software projects, using two different models of uncertainty.},
    }
  • [PDF] S. Kpodjedo, F. Ricca, P. Galinier, and G. Antoniol, “Recovering the evolution stable part using an ecgm algorithm: is there a tunnel in mozilla?,” in Csmr, 2009, pp. 179-188.
    [Abstract]

    Analyzing the evolutionary history of the design of Object-Oriented Software is an important and difficult task where matching algorithms play a fundamental role. In this paper, we investigate the applicability of an error-correcting graph matching (ECGM) algorithm to object-oriented software evolution. By means of a case study, we report evidence of ECGM applicability in studying the Mozilla class diagram evolution. We collected 144 Mozilla snapshots over the past six years, reverse-engineered class diagrams and recovered traceability links between subsequent class diagrams. Our algorithm allows us to identify evolving classes that maintain a stable structure of relations (associations, inheritances and aggregations) with other classes and thus likely constitute the backbone of Mozilla.

    [Bibtex]

    @inproceedings{04812751,
    author = {Segla Kpodjedo and Filippo Ricca and Philippe Galinier and Giuliano Antoniol},
    title = {Recovering the Evolution Stable Part Using an ECGM Algorithm: Is There a Tunnel in Mozilla?},
    booktitle = {CSMR},
    year = {2009},
    pages = {179-188},
    ee = {http://dx.doi.org/10.1109/CSMR.2009.24},
    crossref = {DBLP:conf/csmr/2009},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2009/04812751.pdf},
    abstract = {Analyzing the evolutionary history of the design of Object-Oriented Software is an important and difficult task where matching algorithms play a fundamental role. In this paper, we investigate the applicability of an error-correcting graph matching (ECGM) algorithm to object-oriented software evolution. By means of a case study, we report evidence of ECGM applicability in studying the Mozilla class diagram evolution. We collected 144 Mozilla snapshots over the past six years, reverse-engineered class diagrams and recovered traceability links between subsequent class diagrams. Our algorithm allows us to identify evolving classes that maintain a stable structure of relations (associations, inheritances and aggregations) with other classes and thus likely constitute the backbone of Mozilla.},
    }
  • [PDF] S. L. Abebe, S. Haiduc, A. Marcus, P. Tonella, and G. Antoniol, “Analyzing the evolution of the source code vocabulary,” in Csmr, 2009, pp. 189-198.
    [Abstract]

    Source code is a mixed software artifact, containing information for both the compiler and the developers. While programming language grammar dictates how the source code is written, developers have a lot of freedom in writing identifiers and comments. These are intentional in nature and become means of communication between developers. The goal of this paper is to analyze how the source code vocabulary changes during evolution, through an exploratory study of two software systems. Specifically, we collected data to answer a set of questions about the vocabulary evolution, such as: How does the size of the source code vocabulary evolve over time? What do most frequent terms refer to? Are new identifiers introducing new terms? Are there terms shared between different types of identifiers and comments? Are new and deleted terms in a type of identifiers mirrored in other types of identifiers or in comments?

    [Bibtex]

    @inproceedings{04812752,
    author = {Surafel Lemma Abebe and Sonia Haiduc and Andrian Marcus and Paolo Tonella and Giuliano Antoniol},
    title = {Analyzing the Evolution of the Source Code Vocabulary},
    booktitle = {CSMR},
    year = {2009},
    pages = {189-198},
    ee = {http://dx.doi.org/10.1109/CSMR.2009.61},
    crossref = {DBLP:conf/csmr/2009},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2009/04812752.pdf},
    abstract = {Source code is a mixed software artifact, containing information for both the compiler and the developers. While programming language grammar dictates how the source code is written, developers have a lot of freedom in writing identifiers and comments. These are intentional in nature and become means of communication between developers. The goal of this paper is to analyze how the source code vocabulary changes during evolution, through an exploratory study of two software systems. Specifically, we collected data to answer a set of questions about the vocabulary evolution, such as: How does the size of the source code vocabulary evolve over time? What do most frequent terms refer to? Are new identifiers introducing new terms? Are there terms shared between different types of identifiers and comments? Are new and deleted terms in a type of identifiers mirrored in other types of identifiers or in comments?},
    }
  • Z. Awedikian, K. Ayari, and G. Antoniol, “Mc/dc automatic test input data generation,” in Gecco, 2009, pp. 1657-1664.
    [Abstract]

    In regulated domains such as aerospace and in safety-critical domains, software quality assurance is subject to strict regulation such as the RTCA DO-178B standard. Among other conditions, the DO-178B mandates the satisfaction of the modified condition/decision coverage (MC/DC) testing criterion for software where failure conditions may have catastrophic consequences. MC/DC is a white box testing criterion aiming at proving that all conditions involved in a predicate can influence the predicate value in the desired way. In this paper, we propose a novel fitness function inspired by chaining test data generation to efficiently generate test input data satisfying the MC/DC criterion. Preliminary results show the superiority of the novel fitness function, which is able to avoid the plateaus that drive traditional white box fitness functions toward a behavior close to random testing.

    [Bibtex]

    @inproceedings{conf/gecco/AwedikianAA09,
    author = {Zeina Awedikian and Kamel Ayari and Giuliano Antoniol},
    title = {MC/DC automatic test input data generation},
    booktitle = {GECCO},
    year = {2009},
    pages = {1657-1664},
    ee = {http://doi.acm.org/10.1145/1569901.1570123},
    crossref = {DBLP:conf/gecco/2009g},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {In regulated domains such as aerospace and in safety-critical domains, software quality assurance is subject to strict regulation such as the RTCA DO-178B standard. Among other conditions, the DO-178B mandates the satisfaction of the modified condition/decision coverage (MC/DC) testing criterion for software where failure conditions may have catastrophic consequences. MC/DC is a white box testing criterion aiming at proving that all conditions involved in a predicate can influence the predicate value in the desired way. In this paper, we propose a novel fitness function inspired by chaining test data generation to efficiently generate test input data satisfying the MC/DC criterion. Preliminary results show the superiority of the novel fitness function, which is able to avoid the plateaus that drive traditional white box fitness functions toward a behavior close to random testing.},
    }
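    To give a flavour of the kind of fitness such a search minimizes, the sketch below uses the classic branch-distance idea from search-based testing; it is a generic illustration with an assumed constant and a made-up predicate, not the fitness function proposed in the paper:

    K = 1.0  # constant added when a condition does not yet have the desired outcome

    def distance_gt(lhs, rhs, want_true):
        """Branch distance for the condition lhs > rhs."""
        if want_true:
            return 0.0 if lhs > rhs else (rhs - lhs) + K
        return 0.0 if lhs <= rhs else (lhs - rhs) + K

    def mcdc_fitness(a, b, target=(True, False)):
        """Distance of inputs (a, b) from the condition outcomes in `target`
        for the made-up predicate (a > 10) and (b > 5)."""
        return distance_gt(a, 10, target[0]) + distance_gt(b, 5, target[1])

    print(mcdc_fitness(3, 2))    # a > 10 not yet true -> positive distance (8.0)
    print(mcdc_fitness(12, 2))   # both conditions as required -> 0.0

    A hill climber or a genetic algorithm would then perturb the inputs until this fitness reaches zero, i.e. until every condition takes the truth value that the MC/DC test case requires.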
  • F. Khomh, Y. Guéhéneuc, and G. Antoniol, “Playing roles in design patterns: an empirical descriptive and analytic study,” in Icsm, 2009, pp. 83-92.
    [Abstract]

    This work presents a descriptive and analytic study of classes playing zero, one, or two roles in six different design patterns (and combinations thereof). First, we answer three research questions showing that (1) classes playing one or two roles do exist in programs and are not negligible and that there are significant differences among the (2) internal (class metrics) and (3) external (change-proneness) characteristics of classes playing zero, one, or two roles. Second, we revisit a previous work on design patterns and changeability and show that its results were, in a great part, due to classes playing two roles. Third, we exemplify the use of the study results to provide a ranking of the occurrences of the design patterns identified in a program. The ranking allows developers to balance precision and recall.

    [Bibtex]

    @inproceedings{conf/icsm/KhomhGA09,
    author = {Foutse Khomh and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Playing roles in design patterns: An empirical descriptive and analytic study},
    booktitle = {ICSM},
    year = {2009},
    pages = {83-92},
    ee = {http://dx.doi.org/10.1109/ICSM.2009.5306327},
    crossref = {DBLP:conf/icsm/2009},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {This work presents a descriptive and analytic study of classes playing zero, one, or two roles in six different design patterns (and combinations thereof). First, we answer three research questions showing that (1) classes playing one or two roles do exist in programs and are not negligible and that there are significant differences among the (2) internal (class metrics) and (3) external (change-proneness) characteristics of classes playing zero, one, or two roles. Second, we revisit a previous work on design patterns and changeability and show that its results were, in a great part, due to classes playing two roles. Third, we exemplify the use of the study results to provide a ranking of the occurrences of the design patterns identified in a program. The ranking allows developers to balance precision and recall.},
    }
  • G. Antoniol, “Keynote paper: search based software testing for software security: breaking code to make it safer,” in Icst workshops, 2009, pp. 87-100.
    [Bibtex]
    @inproceedings{conf/icst/Antoniol09,
    author = {Giuliano Antoniol},
    title = {Keynote Paper: Search Based Software Testing for Software Security: Breaking Code to Make it Safer},
    booktitle = {ICST Workshops},
    year = {2009},
    pages = {87-100},
    ee = {http://dx.doi.org/10.1109/ICSTW.2009.12, http://doi.ieeecomputersociety.org/10.1109/ICSTW.2009.12},
    crossref = {DBLP:conf/icst/2009w},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, R. Oliveto, and D. Poshyvanyk, “5th international workshop on traceability in emerging forms of software engineering (tefse 2009),” in Icse companion, 2009, pp. 472-473.
    [Bibtex]
    @inproceedings{conf/icse/AntoniolOP09,
    author = {Giuliano Antoniol and Rocco Oliveto and Denys Poshyvanyk},
    title = {5$^{\mbox{th}}$ international workshop on Traceability in Emerging Forms of Software Engineering (TEFSE 2009)},
    booktitle = {ICSE Companion},
    year = {2009},
    pages = {472-473},
    ee = {http://dx.doi.org/10.1109/ICSE-COMPANION.2009.5071068},
    crossref = {DBLP:conf/icse/2009c},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • D. M. Germán, M. D. Penta, Y. Guéhéneuc, and G. Antoniol, “Code siblings: technical and legal implications of copying code between applications,” in Msr, 2009, pp. 81-90.
    [Abstract]

    Source code cloning does not happen within a single system only. It can also occur between one system and another. We use the term code sibling to refer to a code clone that evolves in a different system than the code from which it originates. Code siblings can only occur when the source code copyright owner allows it and when the conditions imposed by such license are not incompatible with the license of the destination system. In some situations copying of source code fragments is allowed—legally—in one direction, but not in the other. In this paper, we use clone detection, license mining and classification, and change history techniques to understand how code siblings—under different licenses—flow in one direction or the other between Linux and two BSD Unixes, FreeBSD and OpenBSD. Our results show that, in most cases, this migration appears to happen according to the terms of the license of the original code being copied, favoring always copying from less restrictive licenses towards more restrictive ones. We also discovered that sometimes code is inserted to the kernels from an outside source.

    [Bibtex]

    @inproceedings{conf/msr/GermanPGA09,
    author = {Daniel M. Germ{\'a}n and Massimiliano Di Penta and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Code siblings: Technical and legal implications of copying code between applications},
    booktitle = {MSR},
    year = {2009},
    pages = {81-90},
    ee = {http://dx.doi.org/10.1109/MSR.2009.5069483},
    crossref = {DBLP:conf/msr/2009},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Source code cloning does not happen within a single system only. It can also occur between one system and another. We use the term code sibling to refer to a code clone that evolves in a different system than the code from which it originates. Code siblings can only occur when the source code copyright owner allows it and when the conditions imposed by such license are not incompatible with the license of the destination system. In some situations copying of source code fragments is allowed---legally---in one direction, but not in the other. In this paper, we use clone detection, license mining and classification, and change history techniques to understand how code siblings---under different licenses---flow in one direction or the other between Linux and two BSD Unixes, FreeBSD and OpenBSD. Our results show that, in most cases, this migration appears to happen according to the terms of the license of the original code being copied, favoring always copying from less restrictive licenses towards more restrictive ones. We also discovered that sometimes code is inserted to the kernels from an outside source.},
    }

2008

  • [PDF] M. D. Penta, L. Cerulo, Y. Guéhéneuc, and G. Antoniol, “An empirical study of the relationships between design pattern roles and class change proneness,” in Icsm, 2008, pp. 217-226.
    [Abstract]

    Analyzing the change-proneness of design patterns and the kinds of changes occurring to classes playing role(s) in some design pattern(s) during software evolution poses the basis for guidelines to help developers who have to choose, apply or maintain design patterns. Building on previous work, this paper shifts the focus from design patterns as wholes to the finer-grain level of design pattern roles. It presents an empirical study to understand whether there are roles that are more change-prone than others and whether there are changes that are more likely to occur to certain roles. It relies on data extracted from the source code repositories of three different systems (JHotDraw, Xerces, and Eclipse-JDT) and from 12 design patterns.

    [Bibtex]

    @inproceedings{04658070,
    author = {Massimiliano Di Penta and Luigi Cerulo and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {An empirical study of the relationships between design pattern roles and class change proneness},
    booktitle = {ICSM},
    year = {2008},
    pages = {217-226},
    ee = {http://dx.doi.org/10.1109/ICSM.2008.4658070},
    crossref = {DBLP:conf/icsm/2008},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2008/04658070.pdf},
    abstract = {Analyzing the change-proneness of design patterns and the kinds of changes occurring to classes playing role(s) in some design pattern(s) during software evolution poses the basis for guidelines to help developers who have to choose, apply or maintain design patterns. Building on previous work, this paper shifts the focus from design patterns as wholes to the finer-grain level of design pattern roles. It presents an empirical study to understand whether there are roles that are more change-prone than others and whether there are changes that are more likely to occur to certain roles. It relies on data extracted from the source code repositories of three different systems (JHotDraw, Xerces, and Eclipse-JDT) and from 12 design patterns.},
    }
  • [PDF] B. Kenmei, G. Antoniol, and M. D. Penta, “Trend analysis and issue prediction in large-scale open source systems,” in Csmr, 2008, pp. 73-82.
    [Abstract]

    Effort to evolve and maintain a software system is likely to vary depending on the amount and frequency of change requests. This paper proposes to model change requests as time series and to rely on the time series mathematical framework to analyze and model them. In particular, this paper focuses on the number of new change requests per KLOC and per unit of time. Time series can have a two-fold application: they can be used to forecast future values and to identify trends. Increasing trends can indicate an increase in customer requests for new features or a decrease in the software system quality. A decreasing trend can indicate application stability and maturity, but also a reduced popularity and adoption. The paper reports case studies over about five years for three large open source applications: Eclipse, Mozilla and JBoss. The case studies show the capability of time series to model change request density and provide empirical evidence of an increasing trend in newly opened change requests in the JBoss application framework.

    [Bibtex]

    @inproceedings{04493302,
    author = {B{\'e}n{\'e}dicte Kenmei and Giuliano Antoniol and Massimiliano Di Penta},
    title = {Trend Analysis and Issue Prediction in Large-Scale Open Source Systems},
    booktitle = {CSMR},
    year = {2008},
    pages = {73-82},
    ee = {http://dx.doi.org/10.1109/CSMR.2008.4493302},
    crossref = {DBLP:conf/csmr/2008},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2008/04493302.pdf},
    abstract = {Effort to evolve and maintain a software system is likely to vary depending on the amount and frequency of change requests. This paper proposes to model change requests as time series and to rely on time series mathematical framework to analyze and model them. In particular, this paper focuses on the number of new change requests per KLOC and per unit of time. Time series can have a two-fold application: they can be used to forecast future values and to identify trends. Increasing trends can indicate an increase in customer requests for new features or a decrease in the software system quality. A decreasing trend can indicate application stability and maturity, but also a reduced popularity and adoption. The paper reports case studies over about five years for three large open source applications: Eclipse, Mozilla and JBoss. The case studies show the capability of time series to model change request density and provide empirical evidence of an increasing trend in newly opened change requests in the JBoss application framework.},
    }
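    As a toy illustration of the underlying idea only (invented numbers, and a plain least-squares slope instead of the statistical time-series machinery used in the paper):

    import numpy as np

    # hypothetical monthly counts of newly opened change requests per KLOC
    monthly_requests = np.array([1.2, 1.1, 1.4, 1.6, 1.5, 1.9, 2.1, 2.0, 2.4, 2.6])
    months = np.arange(len(monthly_requests))

    slope, intercept = np.polyfit(months, monthly_requests, deg=1)
    forecast_next = slope * len(monthly_requests) + intercept

    print(f"trend slope: {slope:.3f} requests/KLOC per month")
    print(f"naive one-step-ahead forecast: {forecast_next:.2f}")
    if slope > 0:
        print("increasing trend: the flow of incoming change requests is growing")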
  • [PDF] J. H. Hayes, G. Antoniol, and Y. Guéhéneuc, “Prereqir: recovering pre-requirements via cluster analysis,” in Wcre, 2008, pp. 165-174.
    [Abstract]

    High-level software artifacts, such as requirements, domain-specific requirements, and so on, are an important source of information that is often neglected during the reverse- and re-engineering processes. We posit that domain specific pre-requirements information (PRI) can be obtained by eliciting the stakeholders’ understanding of generic systems or domains. We discuss the semi-automatic recovery of domain-specific PRI that can then be used during reverse and re-engineering, for example, to recover traceability links or to assess the degree of obsolescence of a system with respect to competing systems and the clients’ expectations. We present a method using partition around medoids and agglomerative clustering for obtaining, structuring, analyzing, and labeling textual PRI from a group of diverse stakeholders. We validate our method using PRI for the development of a generic Web browser provided by 22 different stakeholders. We show that, for a similarity threshold of about 0.36, about 55% of the PRI were common to two or more stakeholders and 42% were outliers. We automatically label the common and outlier PRI (82% correctly labeled), and obtain 74% accuracy for the similarity threshold of 0.36 (78% for a threshold of 0.5). We assess the recall and precision of the method, and compare the labeled PRI to a generic Web browser requirements specification.

    [Bibtex]

    @inproceedings{04656406,
    author = {Jane Huffman Hayes and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {PREREQIR: Recovering Pre-Requirements via Cluster Analysis},
    booktitle = {WCRE},
    year = {2008},
    pages = {165-174},
    ee = {http://dx.doi.org/10.1109/WCRE.2008.36},
    crossref = {DBLP:conf/wcre/2008},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2008/04656406.pdf},
    abstract = {High-level software artifacts, such as requirements, domain-specific requirements, and so on, are an important source of information that is often neglected during the reverse- and re-engineering processes. We posit that domain specific pre-requirements information (PRI) can be obtained by eliciting the stakeholders' understanding of generic systems or domains. We discuss the semi-automatic recovery of domain-specific PRI that can then be used during reverse and re-engineering, for example, to recover traceability links or to assess the degree of obsolescence of a system with respect to competing systems and the clients' expectations. We present a method using partition around medoids and agglomerative clustering for obtaining, structuring, analyzing, and labeling textual PRI from a group of diverse stakeholders. We validate our method using PRI for the development of a generic Web browser provided by 22 different stakeholders. We show that, for a similarity threshold of about 0.36, about 55\% of the PRI were common to two or more stakeholders and 42\% were outliers. We automatically label the common and outlier PRI (82\% correctly labeled), and obtain 74\% accuracy for the similarity threshold of 0.36 (78\% for a threshold of 0.5). We assess the recall and precision of the method, and compare the labeled PRI to a generic Web browser requirements specification.},
    }
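    The clustering step can be sketched roughly as follows; this is a generic TF-IDF plus hierarchical-clustering illustration with invented requirement statements and an arbitrary distance cut, not the PREREQIR pipeline itself (which also uses partition around medoids and a calibrated similarity threshold):

    from collections import defaultdict
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist
    from sklearn.feature_extraction.text import TfidfVectorizer

    # invented pre-requirement statements from different stakeholders
    pri = [
        "The browser shall display HTML pages",
        "Render HTML pages and style sheets in a window",
        "Let the user bookmark interesting pages",
        "Allow the user to bookmark a page for later",
        "Print the current page on paper",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(pri).toarray()
    condensed = pdist(vectors, metric="cosine")            # pairwise cosine distances
    labels = fcluster(linkage(condensed, method="average"),
                      t=0.8, criterion="distance")         # cut the dendrogram

    clusters = defaultdict(list)
    for statement, label in zip(pri, labels):
        clusters[label].append(statement)
    for members in clusters.values():
        kind = "outlier" if len(members) == 1 else "shared"
        print(kind, members)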
  • G. Antoniol, K. Ayari, M. D. Penta, F. Khomh, and Y. Guéhéneuc, “Is it a bug or an enhancement?: a text-based approach to classify change requests,” in Cascon, 2008, p. 23.
    [Abstract]

    Bug tracking systems are valuable assets for managing maintenance activities. They are widely used in open-source projects as well as in the software industry. They collect many different kinds of issues: requests for defect fixing, enhancements, refactoring/restructuring activities and organizational issues. These different kinds of issues are simply labeled as "bug" for lack of a better classification support or of knowledge about the possible kinds. This paper investigates whether the text of the issues posted in bug tracking systems is enough to classify them into corrective maintenance and other kinds of activities. We show that alternating decision trees, naive Bayes classifiers, and logistic regression can be used to accurately distinguish bugs from other kinds of issues. Results from empirical studies performed on issues for Mozilla, Eclipse, and JBoss indicate that issues can be classified with between 77% and 82% of correct decisions.

    [Bibtex]

    @inproceedings{conf/cascon/AntoniolAPKG08,
    author = {Giuliano Antoniol and Kamel Ayari and Massimiliano Di Penta and Foutse Khomh and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {Is it a bug or an enhancement?: a text-based approach to classify change requests},
    booktitle = {CASCON},
    year = {2008},
    pages = {23},
    ee = {http://doi.acm.org/10.1145/1463788.1463819},
    crossref = {DBLP:conf/cascon/2008},
    abstract = {
    Bug tracking systems are valuable assets for managing maintenance activities. They are widely used in open-source projects as well as in the software industry. They collect many different kinds of issues: requests for defect fixing, enhancements, refactoring/restructuring activities and organizational issues. These different kinds of issues are simply labeled as "bug" for lack of a better classification support or of knowledge about the possible kinds.
    This paper investigates whether the text of the issues posted in bug tracking systems is enough to classify them into corrective maintenance and other kinds of activities.
    We show that alternating decision trees, naive Bayes classifiers, and logistic regression can be used to accurately distinguish bugs from other kinds of issues. Results from empirical studies performed on issues for Mozilla, Eclipse, and JBoss indicate that issues can be classified with between 77\% and 82\% of correct decisions.
    },
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
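    A minimal sketch of the general recipe (bag-of-words features plus one of the classifiers mentioned in the abstract); the five issue texts and their labels are invented, and a real replication would of course train on thousands of labelled reports:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # invented, tiny training set
    issues = [
        "NullPointerException when saving the file",
        "Crash on startup after the last update",
        "Please add dark mode to the settings dialog",
        "It would be nice to support keyboard shortcuts",
        "Refactor the parser module for readability",
    ]
    labels = ["bug", "bug", "enhancement", "enhancement", "other"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(issues, labels)

    # classify an unseen report; prints the predicted kind of issue
    print(model.predict(["The editor throws an exception when printing"]))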
  • S. Kpodjedo, F. Ricca, P. Galinier, and G. Antoniol, “Error correcting graph matching application to software evolution,” in Wcre, 2008, pp. 289-293.
    [Abstract]

    Graph representations and graph algorithms are widely adopted to model and resolve problems in many different areas from telecommunications, to bio-informatics, to civil and software engineering. Many software artifacts such as the class diagram can be thought of as graphs and thus, many software evolution problems can be reformulated as a graph matching problem. In this paper, we investigate the applicability of an error-correcting graph matching algorithm to object-oriented software evolution and report results obtained on a small system — the Latazza application — supporting applicability and usefulness of our proposal.

    [Bibtex]

    @inproceedings{conf/wcre/KpodjedoRGA08,
    author = {Segla Kpodjedo and Filippo Ricca and Philippe Galinier and Giuliano Antoniol},
    title = {Error Correcting Graph Matching Application to Software Evolution},
    booktitle = {WCRE},
    year = {2008},
    pages = {289-293},
    ee = {http://dx.doi.org/10.1109/WCRE.2008.48},
    crossref = {DBLP:conf/wcre/2008},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Graph representations and graph algorithms are widely adopted to model and resolve problems in many different areas from telecommunications, to bio-informatics, to civil and software engineering. Many software artifacts such as the class diagram can be thought of as graphs and thus, many software evolution problems can be reformulated as a graph matching problem. In this paper, we investigate the applicability of an error-correcting graph matching algorithm to object-oriented software evolution and report results obtained on a small system --- the Latazza application --- supporting applicability and usefulness of our proposal.},
    }
  • G. Antoniol, J. H. Hayes, Y. Guéhéneuc, and M. D. Penta, “Reuse or rewrite: combining textual, static, and dynamic analyses to assess the cost of keeping a system up-to-date,” in Icsm, 2008, pp. 147-156.
    [Bibtex]
    @inproceedings{conf/icsm/AntoniolHGP08,
    author = {Giuliano Antoniol and Jane Huffman Hayes and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Massimiliano Di Penta},
    title = {Reuse or rewrite: Combining textual, static, and dynamic analyses to assess the cost of keeping a system up-to-date},
    booktitle = {ICSM},
    year = {2008},
    pages = {147-156},
    ee = {http://dx.doi.org/10.1109/ICSM.2008.4658063},
    crossref = {DBLP:conf/icsm/2008},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • M. Eaddy, A. V. Aho, G. Antoniol, and Y. Guéhéneuc, “Cerberus: tracing requirements to source code using information retrieval, dynamic analysis, and program analysis,” in Icpc, 2008, pp. 53-62.
    [Abstract]

    The concern location problem is to identify the source code within a program related to the features, requirements, or other concerns of the program. This problem is central to program development and maintenance. We present a new technique called prune dependency analysis that can be combined with existing techniques to dramatically improve the accuracy of concern location. We developed CERBERUS, a potent hybrid technique for concern location that combines information retrieval, execution tracing, and prune dependency analysis. We used CERBERUS to trace the 360 requirements of RHINO, a 32,134 line Java program that implements the ECMAScript international standard. In our experiment, prune dependency analysis boosted the recall of information retrieval by 155% and execution tracing by 104%. Moreover, we show that our combined technique outperformed the other techniques when run individually or in pairs.

    [Bibtex]

    @inproceedings{conf/iwpc/EaddyAAG08,
    author = {Marc Eaddy and Alfred V. Aho and Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {CERBERUS: Tracing Requirements to Source Code Using Information Retrieval, Dynamic Analysis, and Program Analysis},
    booktitle = {ICPC},
    year = {2008},
    pages = {53-62},
    ee = {http://dx.doi.org/10.1109/ICPC.2008.39},
    crossref = {DBLP:conf/iwpc/2008},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {The concern location problem is to identify the source code within a program related to the features, requirements, or other concerns of the program. This problem is central to program development and maintenance. We present a new technique called prune dependency analysis that can be combined with existing techniques to dramatically improve the accuracy of concern location. We developed CERBERUS, a potent hybrid technique for concern location that combines information retrieval, execution tracing, and prune dependency analysis. We used CERBERUS to trace the 360 requirements of RHINO, a 32,134 line Java program that implements the ECMAScript international standard. In our experiment, prune dependency analysis boosted the recall of information retrieval by 155% and execution tracing by 104%. Moreover, we show that our combined technique outperformed the other techniques when run individually or in pairs.},
    }

2007

  • [PDF] R. Oliveto, G. Antoniol, A. Marcus, and J. H. Hayes, “Software artefact traceability: the never-ending challenge,” in Icsm, 2007, pp. 485-488.
    [Abstract]

    Software artefact traceability is widely recognised as an important factor for the effective development and maintenance of a software system. Unfortunately, the lack of automatic or semi-automatic supports makes the task of maintaining links among software artefacts a tedious and time consuming one. For this reason, often traceability information becomes out of date or it is completely absent during software development. In this working session, we discuss problems and challenges related to various aspects of traceability in software systems.

    [Bibtex]

    @inproceedings{04362664,
    author = {Rocco Oliveto and Giuliano Antoniol and Andrian Marcus and Jane Huffman Hayes},
    title = {Software Artefact Traceability: the Never-Ending Challenge},
    booktitle = {ICSM},
    year = {2007},
    pages = {485-488},
    ee = {http://dx.doi.org/10.1109/ICSM.2007.4362664},
    crossref = {DBLP:conf/icsm/2007},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Software artefact traceability is widely recognised as an important factor for the effective development and maintenance of a software system. Unfortunately, the lack of automatic or semi-automatic supports makes the task of maintaining links among software artefacts a tedious and time consuming one. For this reason, often traceability information becomes out of date or it is completely absent during software development. In this working session, we discuss problems and challenges related to various aspects of traceability in software systems.
    },
    pdf = {2007/04362664.pdf},
    }
  • [PDF] G. Antoniol, “Requiem for software evolution research: a few steps toward the creative age,” in Iwpse, 2007, pp. 1-3.
    [Abstract]

    Nowadays almost every company depends on software technologies to function, the challenge is that the technologies and software applications are constantly changing and adapting to the needs of users. This process of change is risky, since unplanned and undisciplined changes in any software system of realistic size risk degrading the quality of the software and producing unexpected side effects. The need for disciplined, intelligent, cost-effective software change and evolution is an urgent technological challenge in the software engineering field. New technologies, new social and cultural trends, a widespread adoption of open source software, the market globalization and new development environments are spelling the requiem to the traditional way in which software evolution research was carried out. Evolution research must evolve and adapt to the new society needs and trends thus turning challenges into opportunities. This keynote attempts to shed some light on key factors such as new technology transfer opportunities, the need for benchmarks and the three items each and every research program in software evolution should integrate in one way or the other.

    [Bibtex]

    @inproceedings{p1-antoniol,
    author = {Giuliano Antoniol},
    title = {Requiem for software evolution research: a few steps toward the creative age},
    booktitle = {IWPSE},
    year = {2007},
    pages = {1-3},
    ee = {http://doi.acm.org/10.1145/1294948.1294950},
    crossref = {DBLP:conf/iwpse/2007},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Nowadays almost every company depends on software technologies to function, the challenge is that the technologies and software applications are constantly changing and adapting to the needs of users. This process of change is risky, since unplanned and undisciplined changes in any software system of realistic size risk degrading the quality of the software and producing unexpected side effects. The need for disciplined, intelligent, cost-effective software change and evolution is an urgent technological challenge in the software engineering field.
    New technologies, new social and cultural trends, a widespread adoption of open source software, the market globalization and new development environments are spelling the requiem to the traditional way in which software evolution research was carried out. Evolution research must evolve and adapt to the new society needs and trends thus turning challenges into opportunities. This keynote attempts to shed some light on key factors such as new technology transfer opportunities, the need for benchmarks and the three items each and every research program in software evolution should integrate in one way or the other.
    },
    pdf = {2007/p1-antoniol.pdf},
    }
  • [PDF] E. Merlo, D. Letarte, and G. Antoniol, “Sql-injection security evolution analysis in php,” in Wse, 2007, pp. 45-49.
    [Abstract]

    Web sites are often a mixture of static sites and programs that integrate relational databases as a back-end. Software that implements Web sites continuously evolves to meet ever-changing user needs. As Web sites evolve, new versions of programs, interactions and functionalities are added and existing ones are removed or modified. Web sites require configuration and programming attention to assure security, confidentiality, and trustworthiness of the published information. During evolution of Web software, from one version to the next one, security flaws may be introduced, corrected, or ignored. This paper presents an investigation of the evolution of security vulnerabilities as detected by propagating and combining granted authorization levels along an inter-procedural control flow graph (CFG) together with required security levels for DB accesses with respect to SQL-injection attacks. The paper reports results of experiments performed on 31 versions of phpBB, a publicly available bulletin board written in PHP; versions from 1.0.0 (9547 LOC) to 2.0.22 (40663 LOC) have been considered as a case study. Results show that the vulnerability analysis can be used to observe and monitor the evolution of security vulnerabilities in subsequent versions of the same software package. Suggestions for further research are also presented.

    [Bibtex]

    @inproceedings{04380243,
    author = {Ettore Merlo and Dominic Letarte and Giuliano Antoniol},
    title = {SQL-Injection Security Evolution Analysis in PHP},
    booktitle = {WSE},
    year = {2007},
    pages = {45-49},
    ee = {http://dx.doi.org/10.1109/WSE.2007.4380243},
    crossref = {DBLP:conf/wse/2007},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Web sites are often a mixture of static sites and programs that integrate relational databases as a back-end. Software that implements Web sites continuously evolves to meet ever-changing user needs. As Web sites evolve, new versions of programs, interactions and functionalities are added and existing ones are removed or modified. Web sites require configuration and programming attention to assure security, confidentiality, and trustworthiness of the published information. During evolution of Web software, from one version to the next one, security flaws may be introduced, corrected, or ignored. This paper presents an investigation of the evolution of security vulnerabilities as detected by propagating and combining granted authorization levels along an inter-procedural control flow graph (CFG) together with required security levels for DB accesses with respect to SQL-injection attacks. The paper reports results of experiments performed on 31 versions of phpBB, a publicly available bulletin board written in PHP; versions from 1.0.0 (9547 LOC) to 2.0.22 (40663 LOC) have been considered as a case study. Results show that the vulnerability analysis can be used to observe and monitor the evolution of security vulnerabilities in subsequent versions of the same software package. Suggestions for further research are also presented.
    },
    pdf = {2007/04380243.pdf},
    }
  • [PDF] K. Ayari, P. Meshkinfam, G. Antoniol, and M. D. Penta, “Threats on building models from cvs and bugzilla repositories: the mozilla case study,” in Cascon, 2007, pp. 215-228.
    [Abstract]

    Information obtained by merging data extracted from problem reporting systems — such as Bugzilla — and versioning systems — such as Concurrent Version System (CVS) — is widely used in quality assessment approaches. This paper attempts to shed some light on threats and difficulties faced when trying to integrate information extracted from Mozilla CVS and bug repositories. Indeed, the heterogeneity of Mozilla bug reports, often dealing with non-defect issues and lacking traceable information, may undermine the validity of quality assessment approaches relying on repository integration. In the reported Mozilla case study, we observed that available integration heuristics are unable to recover thousands of traceability links. Furthermore, Bugzilla classification mechanisms do not enforce a distinction between different kinds of maintenance activities. Obtained evidence suggests that a large amount of information is lost; we conjecture that to benefit from CVS and problem reporting systems, more systematic issue classification and more reliable traceability mechanisms are needed.

    [Bibtex]

    @inproceedings{p215-ayari,
    author = {Kamel Ayari and Peyman Meshkinfam and Giuliano Antoniol and Massimiliano Di Penta},
    title = {Threats on building models from CVS and Bugzilla repositories: the Mozilla case study},
    booktitle = {CASCON},
    year = {2007},
    pages = {215-228},
    ee = {http://doi.acm.org/10.1145/1321211.1321234},
    crossref = {DBLP:conf/cascon/2007},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2007/p215-ayari.pdf},
    abstract = {Information obtained by merging data extracted from problem reporting systems -- such as Bugzilla -- and versioning systems -- such as Concurrent Version System (CVS) -- is widely used in quality assessment approaches. This paper attempts to shed some light on threats and difficulties faced when trying to integrate information extracted from Mozilla CVS and bug repositories. Indeed, the heterogeneity of Mozilla bug reports, often dealing with non-defect issues and lacking traceable information, may undermine the validity of quality assessment approaches relying on repository integration. In the reported Mozilla case study, we observed that available integration heuristics are unable to recover thousands of traceability links. Furthermore, Bugzilla classification mechanisms do not enforce a distinction between different kinds of maintenance activities. Obtained evidence suggests that a large amount of information is lost; we conjecture that to benefit from CVS and problem reporting systems, more systematic issue classification and more reliable traceability mechanisms are needed.},
    }
  • [PDF] E. Merlo, D. Letarte, and G. Antoniol, “Automated protection of php applications against sql-injection attacks,” in Csmr, 2007, pp. 191-202.
    [Abstract]

    Web sites may be static sites, programs, or databases, and very often a combination of the three integrating relational databases as a back-end. Web sites require care in configuration and programming to assure security, confidentiality, and trustworthiness of the published information. SQL-injection attacks exploit weak validation of textual input used to build database queries. Maliciously crafted input may threaten the confidentiality and the security policies of Web sites relying on a database to store and retrieve information. This paper presents an original approach that combines static analysis, dynamic analysis, and code reengineering to automatically protect applications written in PHP from SQL-injection attacks. The paper also reports preliminary results of experiments performed on an old SQL-injection prone version of phpBB (version 2.0.0, 37193 LOC of PHP version 4.2.2 code). Results show that our approach successfully improved phpBB-2.0.0 resistance to SQL-injection attacks.

    [Bibtex]

    @inproceedings{04145037,
    author = {Ettore Merlo and Dominic Letarte and Giuliano Antoniol},
    title = {Automated Protection of PHP Applications Against SQL-injection Attacks},
    booktitle = {CSMR},
    year = {2007},
    pages = {191-202},
    ee = {http://dx.doi.org/10.1109/CSMR.2007.16, http://doi.ieeecomputersociety.org/10.1109/CSMR.2007.16},
    crossref = {DBLP:conf/csmr/2007},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Web sites may be static sites, programs, or databases, and very often a combination of the three integrating relational databases as a back-end. Web sites require care in configuration and programming to assure security, confidentiality, and trustworthiness of the published information. SQL-injection attacks exploit weak validation of textual input used to build database queries. Maliciously crafted input may threaten the confidentiality and the security policies of Web sites relying on a database to store and retrieve information. This paper presents an original approach that combines static analysis, dynamic analysis, and code reengineering to automatically protect applications written in PHP from SQL-injection attacks. The paper also reports preliminary results of experiments performed on an old SQL-injection prone version of phpBB (version 2.0.0, 37193 LOC of PHP version 4.2.2 code). Results show that our approach successfully improved phpBB-2.0.0 resistance to SQL-injection attacks.
    },
    pdf = {2007/04145037.pdf},
    }
  • K. Ayari, S. Bouktif, and G. Antoniol, “Automatic mutation test input data generation via ant colony,” in Gecco, 2007, pp. 1074-1081.
    [Abstract]

    Fault-based testing is often advocated to overcome limitations of other testing approaches; however it is also recognized as being expensive. On the other hand, evolutionary algorithms have been proved suitable for reducing the cost of data generation in the context of coverage based testing. In this paper, we propose a new evolutionary approach based on ant colony optimization for automatic test input data generation in the context of mutation testing to reduce the cost of such a test strategy. In our approach the ant colony optimization algorithm is enhanced by a probability density estimation technique. We compare our proposal with other evolutionary algorithms, e.g., Genetic Algorithm. Our preliminary results on JAVA testbeds show that our approach performed significantly better than other alternatives.

    [Bibtex]

    @inproceedings{conf/gecco/AyariBA07,
    author = {Kamel Ayari and Salah Bouktif and Giuliano Antoniol},
    title = {Automatic mutation test input data generation via ant colony},
    booktitle = {GECCO},
    year = {2007},
    pages = {1074-1081},
    ee = {http://doi.acm.org/10.1145/1276958.1277172},
    crossref = {DBLP:conf/gecco/2007},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Fault-based testing is often advocated to overcome limitations of other testing approaches; however it is also recognized as being expensive. On the other hand, evolutionary algorithms have been proved suitable for reducing the cost of data generation in the context of coverage based testing. In this paper, we propose a new evolutionary approach based on ant colony optimization for automatic test input data generation in the context of mutation testing to reduce the cost of such a test strategy. In our approach the ant colony optimization algorithm is enhanced by a probability density estimation technique. We compare our proposal with other evolutionary algorithms, e.g., Genetic Algorithm. Our preliminary results on JAVA testbeds show that our approach performed significantly better than other alternatives.},
    }
  • M. D. Penta, M. Harman, G. Antoniol, and F. Qureshi, “The effect of communication overhead on software maintenance project staffing: a search-based approach,” in Icsm, 2007, pp. 315-324.
    [Abstract]

    Brooks’ milestone ‘Mythical Man Month’ established the observation that there is no simple conversion between people and time in large scale software projects. Communication and training overheads yield a subtle and variable relationship between the person-months required for a project and the number of people needed to complete the task within a given time frame. This paper formalises several instantiations of Brooks’ law and uses these to construct project schedule and staffing instances — using a search-based project staffing and scheduling approach — on data from two large real world maintenance projects. The results reveal the impact of different formulations of Brooks’ law on project completion time and on staff distribution across teams, and the influence of other factors such as the presence of dependencies between work packages on the effect of communication overhead.

    [Bibtex]

    @inproceedings{conf/icsm/PentaHAQ07,
    author = {Massimiliano Di Penta and Mark Harman and Giuliano Antoniol and Fahim Qureshi},
    title = {The Effect of Communication Overhead on Software Maintenance Project Staffing: a Search-Based Approach},
    booktitle = {ICSM},
    year = {2007},
    pages = {315-324},
    ee = {http://dx.doi.org/10.1109/ICSM.2007.4362644},
    crossref = {DBLP:conf/icsm/2007},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Brooks' milestone `Mythical Man Month' established the observation that there is no simple conversion between people and time in large scale software projects. Communication and training overheads yield a subtle and variable relationship between the person-months required for a project and the number of people needed to complete the task within a given time frame. This paper formalises several instantiations of Brooks' law and uses these to construct project schedule and staffing instances --- using a search-based project staffing and scheduling approach --- on data from two large real world maintenance projects. The results reveal the impact of different formulations of Brooks' law on project completion time and on staff distribution across teams, and the influence of other factors such as the presence of dependencies between work packages on the effect of communication overhead.},
    }
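    One very simple instantiation of Brooks’ law (purely illustrative; the constants and the form of the overhead term are assumptions of this sketch, not the formulations evaluated in the paper) can be written as follows:

    def completion_time(work_pm, n, overhead_per_pair=0.05):
        """Months needed by a team of n people for work_pm person-months of work,
        assuming each pairwise communication path costs a fixed fraction of a person."""
        effective_people = n - overhead_per_pair * n * (n - 1) / 2
        if effective_people <= 0:
            return float("inf")  # the team drowns in coordination
        return work_pm / effective_people

    for n in (2, 5, 10, 20, 30):
        print(n, round(completion_time(100, n), 1))
    # adding people shortens the schedule only up to a point, then it grows again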
  • G. Antoniol, Y. Guéhéneuc, E. Merlo, and P. Tonella, “Mining the lexicon used by programmers during software evolution,” in Icsm, 2007, pp. 14-23.
    [Bibtex]
    @inproceedings{conf/icsm/AntoniolGMT07,
    author = {Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Ettore Merlo and Paolo Tonella},
    title = {Mining the Lexicon Used by Programmers during Software Evolution},
    booktitle = {ICSM},
    year = {2007},
    pages = {14-23},
    ee = {http://dx.doi.org/10.1109/ICSM.2007.4362614},
    crossref = {DBLP:conf/icsm/2007},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

2006

  • [PDF] S. Bouktif, H. A. Sahraoui, and G. Antoniol, “Simulated annealing for improving software quality prediction,” in Gecco, 2006, pp. 1893-1900.
    [Abstract]

    In this paper, we propose an approach for the combination and adaptation of software quality predictive models. Quality models are decomposed into sets of expertise. The approach can be seen as a search for a valuable set of expertise that when combined form a model with an optimal predictive accuracy. Since, in general, there will be several experts available and each expert will provide his expertise, the problem can be reformulated as an optimization and search problem in a large space of solutions. We present how the general problem of combining quality experts, modeled as Bayesian classifiers, can be tackled via a simulated annealing algorithm customization. The general approach was applied to build an expert predicting object-oriented software stability, a facet of software quality. Our findings demonstrate that, on available data, the composed expert's predictive accuracy outperforms that of the best available expert.

    [Bibtex]

    @inproceedings{p1893-bouktif,
    author = {Salah Bouktif and Houari A. Sahraoui and Giuliano Antoniol},
    title = {Simulated annealing for improving software quality prediction},
    booktitle = {GECCO},
    year = {2006},
    pages = {1893-1900},
    ee = {http://doi.acm.org/10.1145/1143997.1144313},
    crossref = {DBLP:conf/gecco/2006},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2006/p1893-bouktif.pdf},
    abstract = {In this paper, we propose an approach for the combination and adaptation of software quality predictive models. Quality models are decomposed into sets of expertise. The approach can be seen as a search for a valuable set of expertise that when combined form a model with an optimal predictive accuracy. Since, in general, there will be several experts available and each expert will provide his expertise, the problem can be reformulated as an optimization and search problem in a large space of solutions. We present how the general problem of combining quality experts, modeled as Bayesian classifiers, can be tackled via a simulated annealing algorithm customization. The general approach was applied to build an expert predicting object-oriented software stability, a facet of software quality. Our findings demonstrate that, on available data, the composed expert's predictive accuracy outperforms that of the best available expert.},
    }
  • [PDF] S. Bouktif, Y. Guéhéneuc, and G. Antoniol, “Extracting change-patterns from cvs repositories,” in Wcre, 2006, pp. 221-230.
    [Abstract]

    Often, the only sources of information about the evolution of software systems are the systems themselves and their histories. Version control repositories contain information on several thousand files and on millions of changes. We propose an approach based on dynamic time warping to discover change-patterns, which, for example, describe files that change together almost all the time. We define the Synchrony change-pattern to answer the question: given a software system and one file under modification, what other files must be changed? We have applied our approach on PADL, a software system developed in Java, and on Mozilla. Interesting results are achieved even when the discovered groups of co-changing files are compared with those provided by experts.

    [Bibtex]

    @inproceedings{04023992,
    author = {Salah Bouktif and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Extracting Change-patterns from CVS Repositories},
    booktitle = {WCRE},
    year = {2006},
    pages = {221-230},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WCRE.2006.27},
    crossref = {DBLP:conf/wcre/2006},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2006/04023992.pdf},
    abstract = {Often, the only sources of information about the evolution of software systems are the systems themselves and their histories. Version control repositories contain information on several thousand files and on millions of changes. We propose an approach based on dynamic time warping to discover change-patterns, which, for example, describe files that change together almost all the time. We define the Synchrony change-pattern to answer the question: given a software system and one file under modification, what other files must be changed? We have applied our approach on PADL, a software system developed in Java, and on Mozilla. Interesting results are achieved even when the discovered groups of co-changing files are compared with those provided by experts.},
    }
  • [PDF] D. Poshyvanyk, A. Marcus, V. Rajlich, Y. Guéhéneuc, and G. Antoniol, “Combining probabilistic ranking and latent semantic indexing for feature identification,” in Icpc, 2006, pp. 137-148.
    [Abstract]

    The paper recasts the problem of feature location in source code as a decision-making problem in the presence of uncertainty. The main contribution consists in the combination of two existing techniques for feature location in source code. Both techniques provide a set of ranked facts from the software as a result for the feature identification problem. One of the techniques is based on a Scenario Based Probabilistic ranking of events observed while executing a program under given scenarios. The other technique is defined as an information retrieval task, based on the Latent Semantic Indexing of the source code. We show the viability and effectiveness of the combined technique with two case studies. The first case study is a replication of feature identification in Mozilla, which allows us to directly compare the results with previously published data. The other case study is a bug location problem in Mozilla. The results show that the combined technique improves feature identification significantly with respect to each technique used independently.

    [Bibtex]

    @inproceedings{01631116,
    author = {Denys Poshyvanyk and Andrian Marcus and V{\'a}clav Rajlich and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Combining Probabilistic Ranking and Latent Semantic Indexing for Feature Identification},
    booktitle = {ICPC},
    year = {2006},
    pages = {137-148},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICPC.2006.17},
    crossref = {DBLP:conf/iwpc/2006},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2006/01631116.pdf},
    abstract = {The paper recasts the problem of feature location in source code as a decision-making problem in the presence of uncertainty. The main contribution consists in the combination of two existing techniques for feature location in source code. Both techniques provide a set of ranked facts from the software as a result for the feature identification problem. One of the techniques is based on a Scenario Based Probabilistic ranking of events observed while executing a program under given scenarios. The other technique is defined as an information retrieval task, based on the Latent Semantic Indexing of the source code. We show the viability and effectiveness of the combined technique with two case studies. The first case study is a replication of feature identification in Mozilla, which allows us to directly compare the results with previously published data. The other case study is a bug location problem in Mozilla. The results show that the combined technique improves feature identification significantly with respect to each technique used independently.},
    }
  • [PDF] E. Merlo, D. Letarte, and G. Antoniol, “Insider and outsider threat-sensitive sql injection vulnerability analysis in php,” in Wcre, 2006, pp. 147-156.
    [Abstract]

    In general, SQL-injection attacks rely on some weak validation of textual input used to build database queries. Maliciously crafted input may threaten the confidentiality and the security policies of Web sites relying on a database to store and retrieve information. Furthermore, insiders may introduce malicious code in a Web application, code that, when triggered by some specific input, for example, would violate security policies. This paper presents an original approach based on static analysis to automatically detect statements in PHP applications that may be vulnerable to SQL-injections triggered by either malicious input (outsider threats) or malicious code (insider threats). Original flow analysis equations, that propagate and combine security levels along an inter-procedural control flow graph (CFG), are presented. The computation of security levels presents linear execution time and memory complexity.

    [Bibtex]

    @inproceedings{04023985,
    author = {Ettore Merlo and Dominic Letarte and Giuliano Antoniol},
    title = {Insider and Outsider Threat-Sensitive SQL Injection Vulnerability Analysis in PHP},
    booktitle = {WCRE},
    year = {2006},
    pages = {147-156},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WCRE.2006.33},
    crossref = {DBLP:conf/wcre/2006},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    In general, SQL-injection attacks rely on some weak validation of textual input used to build database queries. Maliciously crafted input may threaten the confidentiality and the security policies of Web sites relying on a database to store and retrieve information. Furthermore, insiders may introduce malicious code in a Web application, code that, when triggered by some specific input, for example, would violate security policies. This paper presents an original approach based on static analysis to automatically detect statements in PHP applications that may be vulnerable to SQL-injections triggered by either malicious input (outsider threats) or malicious code (insider threats). Original flow analysis equations, that propagate and combine security levels along an inter-procedural control flow graph (CFG), are presented. The computation of security levels presents linear execution time and memory complexity.
    },
    pdf = {2006/04023985.pdf},
    }
  • [PDF] S. Bouktif, G. Antoniol, E. Merlo, and M. Neteler, “A novel approach to optimize clone refactoring activity,” in Gecco, 2006, pp. 1885-1892.
    [Abstract]

    Software evolution and software quality are ever changing phenomena. As software evolves, evolution impacts software quality. On the other hand, software quality needs may drive software evolution strategies. This paper presents an approach to schedule quality improvement under constraints and priorities. The general problem of scheduling quality improvement has been instantiated into the concrete problem of planning duplicated code removal in a geographical information system developed in C throughout the last 20 years. Priorities and constraints arise from the development team and from the adopted development process. The development team's long-term goal is to get rid of duplicated code, improve software structure, decrease coupling, and improve cohesion. We present our problem formulation and the adopted approach, including a model of clone removal effort, and preliminary results obtained on a real world application.

    [Bibtex]

    @inproceedings{p1885-bouktif,
    author = {Salah Bouktif and Giuliano Antoniol and Ettore Merlo and Markus Neteler},
    title = {A novel approach to optimize clone refactoring activity},
    booktitle = {GECCO},
    year = {2006},
    pages = {1885-1892},
    ee = {http://doi.acm.org/10.1145/1143997.1144312},
    crossref = {DBLP:conf/gecco/2006},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2006/p1885-bouktif.pdf},
    abstract = {Software evolution and software quality are ever changing phenomena. As software evolves, evolution impacts software quality. On the other hand, software quality needs may drive software evolution strategies. This paper presents an approach to schedule quality improvement under constraints and priorities. The general problem of scheduling quality improvement has been instantiated into the concrete problem of planning duplicated code removal in a geographical information system developed in C throughout the last 20 years. Priorities and constraints arise from the development team and from the adopted development process. The development team's long-term goal is to get rid of duplicated code, improve software structure, decrease coupling, and improve cohesion. We present our problem formulation and the adopted approach, including a model of clone removal effort, and preliminary results obtained on a real world application.},
    }
  • [PDF] S. Bouktif, G. Antoniol, and E. Merlo, “A feedback based quality assessment to support open source software evolution: the grass case study,” in Icsm, 2006, pp. 155-165.
    [Abstract]

    Managing the software evolution for large open source software is a major challenge. Some factors that make software hard to maintain are geographically distributed development teams, frequent and rapid turnover of volunteers, absence of a formal means, and lack of documentation and explicit project planning. In this paper we propose remote and continuous analysis of open source software to monitor evolution using available resources such as CVS code repository, commitment log files and exchanged mail. Evolution monitoring relies on three principal services. The first service analyzes and monitors the increase in complexity and the decline in quality; the second supports distributed developers by sending them a feedback report after each contribution; the third allows developers to gain insight into the "big picture" of software by providing a dashboard of project evolution. Besides the description of provided services, the paper presents a prototype environment for continuous analysis of the evolution of GRASS, an open source software system.

    [Bibtex]

    @inproceedings{04021333,
    author = {Salah Bouktif and Giuliano Antoniol and Ettore Merlo},
    title = {A Feedback Based Quality Assessment to Support Open Source Software Evolution: the GRASS Case Study},
    booktitle = {ICSM},
    year = {2006},
    pages = {155-165},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICSM.2006.5},
    crossref = {DBLP:conf/icsm/2006},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Managing the software evolution for large open source software is a major challenge. Some factors that make software hard to maintain are geographically distributed development teams, frequent and rapid turnover of volunteers, absence of a formal means, and lack of documentation and explicit project planning. In this paper we propose remote and continuous analysis of open source software to monitor evolution using available resources such as CVS code repository, commitment log files and exchanged mail. Evolution monitoring relies on three principal services. The first service analyzes and monitors the increase in complexity and the decline in quality; the second supports distributed developers by sending them a feedback report after each contribution; the third allows developers to gain insight into the "big picture" of software by providing a dashboard of project evolution. Besides the description of provided services, the paper presents a prototype environment for continuous analysis of the evolution of GRASS, an open source software system.
    },
    pdf = {2006/04021333.pdf},
    }
  • M. Salah, S. Mancoridis, G. Antoniol, and M. D. Penta, “Scenario-driven dynamic analysis for comprehending large software systems,” in Csmr, 2006, pp. 71-80.
    [Abstract]

    Understanding large software systems is simplified when a combination of techniques for static and dynamic analysis is employed. Effective dynamic analysis requires that execution traces be generated by executing scenarios that are representative of the system’s typical usage. This paper presents an approach that uses dynamic analysis to extract views of a software system at different levels, namely (1) use cases views, (2) module interaction views, and (3) class interaction views. The proposed views can be used to help maintainers locate features to be changed. The proposed approach is evaluated against a large software system, the Mozilla Web browser.

    [Bibtex]

    @inproceedings{conf/csmr/SalahMAP06,
    author = {Maher Salah and Spiros Mancoridis and Giuliano Antoniol and Massimiliano Di Penta},
    title = {Scenario-Driven Dynamic Analysis for Comprehending Large Software Systems},
    booktitle = {CSMR},
    year = {2006},
    pages = {71-80},
    ee = {http://dx.doi.org/10.1109/CSMR.2006.47, http://doi.ieeecomputersociety.org/10.1109/CSMR.2006.47},
    crossref = {DBLP:conf/csmr/2006},
    abstract = {
    Understanding large software systems is simplified when a combination of techniques for static and dynamic analysis is employed. Effective dynamic analysis requires that execution traces be generated by executing scenarios that are representative of the system's typical usage. This paper presents an approach that uses dynamic analysis to extract views of a software system at different levels, namely (1) use cases views, (2) module interaction views, and (3) class interaction views. The proposed views can be used to help maintainers locate features to be changed. The proposed approach is evaluated against a large software system, the Mozilla Web browser.
    },
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
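
A minimal sketch (in Python) of the kind of dynamic time warping (DTW) distance that could be used to compare the change histories of two files, in the spirit of the WCRE 2006 change-pattern entry above; the sequences, the absolute-difference local cost and the interpretation below are illustrative assumptions, not the exact setup of that paper.

    # Classic O(n*m) DTW between two per-period change-count sequences.
    def dtw_distance(a, b):
        inf = float("inf")
        n, m = len(a), len(b)
        d = [[inf] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])  # local cost between time points
                d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
        return d[n][m]

    if __name__ == "__main__":
        # Weekly change counts of two files; a low DTW distance suggests they
        # tend to change together (a candidate co-change pattern).
        file_a = [0, 3, 1, 0, 4, 2, 0]
        file_b = [0, 2, 2, 0, 5, 1, 0]
        print(dtw_distance(file_a, file_b))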

2005

  • [PDF] G. Antoniol, M. Ceccarelli, and A. Petrosino, “Microarray image addressing based on the radon transform,” in Icip (1), 2005, pp. 13-16.
    [Abstract]

    A fundamental step of microarray image analysis is the detection of the grid structure for the accurate localization of each spot, representing the state of a given gene in a particular experimental condition. This step is known as gridding or microarray addressing. Most of the available microarray gridding approaches require human intervention; for example, to specify landmarks, some points in the spot grid, or even to precisely locate individual spots. Automating this part of the process can allow high throughput analysis (Yang, Y, et al, 2002). This paper is aimed at the development of fully automated procedures for the problem of automatic microarray gridding. Indeed, many of the automatic gridding approaches are based on two phases, the first aimed at the generation of a hypothesis consisting of a regular interpolating grid, whereas the second performs an adaptation of the hypothesis. Here we show that the first step can efficiently be accomplished by using the Radon transform, whereas the second step could be modeled by an iterative posterior maximization procedure (Antoniol, G and Ceccarelli, M, 2004).

    [Bibtex]

    @inproceedings{01529675,
    author = {Giuliano Antoniol and Michele Ceccarelli and Alfredo Petrosino},
    title = {Microarray image addressing based on the Radon transform},
    booktitle = {ICIP (1)},
    year = {2005},
    pages = {13-16},
    ee = {http://dx.doi.org/10.1109/ICIP.2005.1529675},
    crossref = {DBLP:conf/icip/2005},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    A fundamental step of microarray image analysis is the detection of the grid structure for the accurate localization of each spot, representing the state of a given gene in a particular experimental condition. This step is known as gridding or microarray addressing. Most of the available microarray gridding approaches require human intervention; for example, to specify landmarks, some points in the spot grid, or even to precisely locate individual spots. Automating this part of the process can allow high throughput analysis (Yang, Y, et al, 2002). This paper is aimed at the development of fully automated procedures for the problem of automatic microarray gridding. Indeed, many of the automatic gridding approaches are based on two phases, the first aimed at the generation of a hypothesis consisting of a regular interpolating grid, whereas the second performs an adaptation of the hypothesis. Here we show that the first step can efficiently be accomplished by using the Radon transform, whereas the second step could be modeled by an iterative posterior maximization procedure (Antoniol, G and Ceccarelli, M, 2004).
    },
    pdf = {2005/01529675.pdf},
    }
  • [PDF] M. Salah, S. Mancoridis, G. Antoniol, and M. D. Penta, “Towards employing use-cases and dynamic analysis to comprehend mozilla,” in Icsm, 2005, pp. 639-642.
    [Abstract]

    This paper presents an approach for comprehending large software systems using views that are created by subjecting the software systems to dynamic analysis under various use-case scenarios. Two sets of views are built from the runtime data: (1) graphs that capture the parts of the software’s architecture that pertain to the use-cases; and (2) metrics that measure the intricacy of the software and the similarity between the software’s use-cases. The Mozilla Web browser was chosen as the subject software system in our case study due to its size, intricacy, and ability to expose the challenges of analyzing large systems.

    [Bibtex]

    @inproceedings{01510163,
    author = {Maher Salah and Spiros Mancoridis and Giuliano Antoniol and Massimiliano Di Penta},
    title = {Towards Employing Use-Cases and Dynamic Analysis to Comprehend Mozilla},
    booktitle = {ICSM},
    year = {2005},
    pages = {639-642},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICSM.2005.94},
    crossref = {DBLP:conf/icsm/2005},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    This paper presents an approach for comprehending large software systems using views that are created by subjecting the software systems to dynamic analysis under various use-case scenarios. Two sets of views are built from the runtime data: (1) graphs that capture the parts of the software's architecture that pertain to the use-cases; and (2) metrics that measure the intricacy of the software and the similarity between the software's use-cases. The Mozilla Web browser was chosen as the subject software system in our case study due to its size, intricacy, and ability to expose the challenges of analyzing large systems.
    },
    pdf = {2005/01510163.pdf},
    }
  • Y. Guéhéneuc and G. Antoniol, “Report on the 1st international workshop on design pattern theory and practice,” in Step, 2005, pp. 193-195.
    [Bibtex]
    @inproceedings{conf/step/GueheneucA05,
    author = {Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc and Giuliano Antoniol},
    title = {Report on the 1st International Workshop on Design Pattern Theory and Practice},
    booktitle = {STEP},
    year = {2005},
    pages = {193-195},
    ee = {http://doi.ieeecomputersociety.org/10.1109/STEP.2005.20},
    crossref = {DBLP:conf/step/2005},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol and Y. Guéhéneuc, “Feature identification: a novel approach and a case study,” in Icsm, 2005, pp. 357-366.
    [Abstract]

    Feature identification is a well-known technique to identify subsets of a program source code activated when exercising a functionality. Several approaches have been proposed to identify features. We present an approach to feature identification and comparison for large object-oriented multi-threaded programs using both static and dynamic data. We use processor emulation, knowledge filtering, and probabilistic ranking to overcome the difficulties of collecting dynamic data, i.e., imprecision and noise. We use model transformations to compare and to visualise identified features. We compare our approach with a naive approach and a concept analysis-based approach using a case study on a real-life large object-oriented multi-threaded program, Mozilla, to show the advantages of our approach. We also use the case study to compare processor emulation with statistical profiling.

    [Bibtex]

    @inproceedings{conf/icsm/AntoniolG05,
    author = {Giuliano Antoniol and Yann-Ga{\"e}l Gu{\'e}h{\'e}neuc},
    title = {Feature Identification: A Novel Approach and a Case Study},
    booktitle = {ICSM},
    year = {2005},
    pages = {357-366},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICSM.2005.48},
    crossref = {DBLP:conf/icsm/2005},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Feature identification is a well-known technique to identify subsets of a program source code activated when exercising a functionality. Several approaches have been proposed to identify features. We present an approach to feature identification and comparison for large object-oriented multi-threaded programs using both static and dynamic data. We use processor emulation, knowledge filtering, and probabilistic ranking to overcome the difficulties of collecting dynamic data, i.e., imprecision and noise. We use model transformations to compare and to visualise identified features. We compare our approach with a naive approach and a concept analysis-based approach using a case study on a real-life large object-oriented multi-threaded program, Mozilla, to show the advantages of our approach. We also use the case study to compare processor emulation with statistical profiling.},
    }
  • C. D. Grosso, G. Antoniol, M. D. Penta, P. Galinier, and E. Merlo, “Improving network applications security: a new heuristic to generate stress testing data,” in Gecco, 2005, pp. 1037-1043.
    [Abstract]

    Buffer overflows cause serious problems in different categories of software systems. For example, if present in network or security applications, they can be exploited to gain unauthorized grant or access to the system. In embedded systems, such as avionics or automotive systems, they can be the cause of serious accidents. This paper proposes to combine static analysis and program slicing with evolutionary testing, to detect buffer overflow threats. Static analysis identifies vulnerable statements, while slicing and data dependency analysis identify the relationship between these statements and program or function inputs, thus reducing the search space. To guide the search towards discovering buffer overflows, in this work we define three multi-objective fitness functions and compare them on two open-source systems. These functions account for terms such as the statement coverage, the coverage of vulnerable statements, the distance from buffer boundaries and the coverage of unconstrained nodes of the control flow graph.

    [Bibtex]

    @inproceedings{GrossoAPGM05,
    author = {Concettina Del Grosso and Giuliano Antoniol and Massimiliano Di Penta and Philippe Galinier and Ettore Merlo},
    title = {Improving network applications security: a new heuristic to generate stress testing data},
    booktitle = {GECCO},
    year = {2005},
    pages = {1037-1043},
    ee = {http://doi.acm.org/10.1145/1068009.1068185},
    crossref = {DBLP:conf/gecco/2005},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Buffer overflows cause serious problems in different categories of software systems. For example, if present in network or security applications, they can be exploited to gain unauthorized grant or access to the system. In embedded systems, such as avionics or automotive systems, they can be the cause of serious accidents. This paper proposes to combine static analysis and program slicing with evolutionary testing, to detect buffer overflow threats. Static analysis identifies vulnerable statements, while slicing and data dependency analysis identify the relationship between these statements and program or function inputs, thus reducing the search space. To guide the search towards discovering buffer overflows, in this work we define three multi-objective fitness functions and compare them on two open-source systems. These functions account for terms such as the statement coverage, the coverage of vulnerable statements, the distance from buffer boundaries and the coverage of unconstrained nodes of the control flow graph.},
    }
  • G. Antoniol, M. D. Penta, and M. Harman, “Search-based techniques applied to optimization of project planning for a massive maintenance project,” in Icsm, 2005, pp. 240-249.
    [Bibtex]
    @inproceedings{conf/icsm/AntoniolPH05,
    author = {Giuliano Antoniol and Massimiliano Di Penta and Mark Harman},
    title = {Search-Based Techniques Applied to Optimization of Project Planning for a Massive Maintenance Project},
    booktitle = {ICSM},
    year = {2005},
    pages = {240-249},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICSM.2005.79},
    crossref = {DBLP:conf/icsm/2005},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, V. F. Rollo, and G. Venturi, “Detecting groups of co-changing files in cvs repositories,” in Iwpse, 2005, pp. 23-32.
    [Bibtex]
    @inproceedings{conf/iwpse/AntoniolRV05,
    author = {Giuliano Antoniol and Vincenzo Fabio Rollo and Gabriele Venturi},
    title = {Detecting groups of co-changing files in CVS repositories},
    booktitle = {IWPSE},
    year = {2005},
    pages = {23-32},
    ee = {http://doi.ieeecomputersociety.org/10.1109/IWPSE.2005.11},
    crossref = {DBLP:conf/iwpse/2005},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • J. I. Maletic, G. Antoniol, J. Cleland-Huang, and J. H. Hayes, “3rd international workshop on traceability in emerging forms of software engineering (tefse 2005),” in Ase, 2005, p. 462.
    [Bibtex]
    @inproceedings{conf/kbse/MaleticACH05,
    author = {Jonathan I. Maletic and Giuliano Antoniol and Jane Cleland-Huang and Jane Huffman Hayes},
    title = {3rd international workshop on traceability in emerging forms of software engineering (TEFSE 2005)},
    booktitle = {ASE},
    year = {2005},
    pages = {462},
    ee = {http://doi.acm.org/10.1145/1101908.1102002},
    crossref = {DBLP:conf/kbse/2005},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
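
A minimal sketch (in Python with NumPy) of the grid-hypothesis idea in the ICIP 2005 microarray addressing entry above: for an image whose spot rows and columns are axis-aligned, the Radon projections at 0 and 90 degrees reduce to column and row sums, and their local maxima suggest candidate grid lines. The synthetic image and the peak-picking rule are illustrative assumptions, not the paper's procedure.

    import numpy as np

    def axis_projections(img):
        """Radon projections at 0 and 90 degrees for an axis-aligned image."""
        return img.sum(axis=0), img.sum(axis=1)  # per-column and per-row intensity

    def peaks(profile):
        """Indices that are strict local maxima of a 1-D projection profile."""
        return [i for i in range(1, len(profile) - 1)
                if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]]

    if __name__ == "__main__":
        img = np.zeros((60, 60))
        for r in range(10, 60, 12):      # synthetic one-pixel 'spots' on a regular grid
            for c in range(10, 60, 12):
                img[r, c] = 1.0
        cols, rows = axis_projections(img)
        print("column grid lines:", peaks(cols))
        print("row grid lines:", peaks(rows))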

2004

  • [PDF] G. Antoniol, M. D. Penta, and E. Merlo, “An automatic approach to identify class evolution discontinuities,” in Iwpse, 2004, pp. 31-40.
    [Abstract]

    When a software system evolves, features are added, removed and changed. Moreover, refactoring activities are periodically performed to improve the software internal structure. A class may be replaced by another, two classes can be merged, or a class may be split in two others. As a consequence, it may not be possible to trace software features between one release and another. When studying software evolution, we should be able to trace a class lifetime even when it disappears because it is replaced by a similar one, split or merged. Such a capability is also essential to perform impact analysis. This work proposes an automatic approach, inspired by vector space information retrieval, to identify class evolution discontinuities and, therefore, cases of possible refactoring. The approach has been applied to identify refactorings performed over 40 releases of a Java open source domain name server. Almost all the refactorings found were actually performed in the analyzed system, thus indicating the helpfulness of the approach and of the developed tool.

    [Bibtex]

    @inproceedings{01334766,
    author = {Giuliano Antoniol and Massimiliano Di Penta and Ettore Merlo},
    title = {An Automatic Approach to identify Class Evolution Discontinuities},
    booktitle = {IWPSE},
    year = {2004},
    pages = {31-40},
    ee = {http://doi.ieeecomputersociety.org/10.1109/IWPSE.2004.1334766},
    crossref = {DBLP:conf/iwpse/2004},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    When a software system evolves, features are added, removed and changed. Moreover, refactoring activities are periodically performed to improve the software internal structure. A class may be replaced by another, two classes can be merged, or a class may be split in two others. As a consequence, it may not be possible to trace software features between one release and another. When studying software evolution, we should be able to trace a class lifetime even when it disappears because it is replaced by a similar one, split or merged. Such a capability is also essential to perform impact analysis. This work proposes an automatic approach, inspired by vector space information retrieval, to identify class evolution discontinuities and, therefore, cases of possible refactoring. The approach has been applied to identify refactorings performed over 40 releases of a Java open source domain name server. Almost all the refactorings found were actually performed in the analyzed system, thus indicating the helpfulness of the approach and of the developed tool.
    },
    pdf = {2004/01334766.pdf},
    }
  • [PDF] G. Antoniol and M. Ceccarelli, “A markov random field approach to microarray image gridding,” in Icpr (3), 2004, pp. 550-553.
    [Abstract]

    This paper reports a novel approach for the problem of automatic gridding in microarray images. The solution is modeled as a Bayesian random field with a Gibbs prior possibly containing first order cliques (1-clique). Contrary to previously published contributions, this paper does not assume second order cliques; instead, it relies on a two-step procedure to locate microarray spots. First, a set of guide spots is used to interpolate a reference grid. The final grid is then produced by an a-posteriori maximization, which takes into account the reference rectangular grid, and local deformations. The algorithm is completely automatic and no human intervention is required, the only critical parameter being the range of the radius of the guide spots.

    [Bibtex]

    @inproceedings{01334588,
    author = {Giuliano Antoniol and Michele Ceccarelli},
    title = {A Markov Random Field Approach to Microarray Image Gridding},
    booktitle = {ICPR (3)},
    year = {2004},
    pages = {550-553},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICPR.2004.50},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    This paper reports a novel approach for the problem of automatic gridding in microarray images. The solution is modeled as a Bayesian random field with a Gibbs prior possibly containing first order cliques (1-clique). Contrary to previously published contributions, this paper does not assume second order cliques; instead, it relies on a two-step procedure to locate microarray spots. First, a set of guide spots is used to interpolate a reference grid. The final grid is then produced by an a-posteriori maximization, which takes into account the reference rectangular grid, and local deformations. The algorithm is completely automatic and no human intervention is required, the only critical parameter being the range of the radius of the guide spots.
    },
    pdf = {2004/01334588.pdf},
    }
  • [PDF] G. Antoniol, M. Ceccarelli, P. Petrillo, and A. Petrosino, “An ica approach to unsupervised change detection in multispectral images,” in Wirn, 2004, pp. 299-311.
    [Abstract]

    Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. The paper proposes a data dependent change detection approach based on textural features extracted by the Independent Component Analysis (ICA) model. The properties of ICA allow the creation of energy features for computing multispectral and multitemporal difference images to be classified. Our experiments on remote sensing images show that the proposed method can efficiently and effectively classify temporal discontinuities corresponding to changed areas over the observed scenes.

    [Bibtex]

    @inproceedings{chp3A1010072F140203432635,
    author = {Giuliano Antoniol and Michele Ceccarelli and P. Petrillo and Alfredo Petrosino},
    title = {An ICA Approach to Unsupervised Change Detection in Multispectral Images},
    booktitle = {WIRN},
    year = {2004},
    pages = {299-311},
    ee = {http://dx.doi.org/10.1007/1-4020-3432-6_35},
    crossref = {DBLP:conf/wirn/2004},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. The paper proposes a data dependent change detection approach based on textural features extracted by the Independent Component Analysis (ICA) model. The properties of ICA allow the creation of energy features for computing multispectral and multitemporal difference images to be classified. Our experiments on remote sensing images show that the proposed method can efficiently and effectively classify temporal discontinuities corresponding to changed areas over the observed scenes.},
    pdf = {2004/chp3A1010072F140203432635.pdf},
    }
  • G. Antoniol, M. D. Penta, and M. Harman, “Search-based techniques for optimizing software project resource allocation,” in Gecco (2), 2004, pp. 1425-1426.
    [Abstract]

    We present a search–based approach for planning resource allocation in large software projects, which aims to find an optimal or near optimal order in which to allocate work packages to programming teams, in order to minimize the project duration. The approach is validated by an empirical study of a large, commercial Y2K massive maintenance project, comparing random scheduling, hill climbing, simulated annealing and genetic algorithms, applied to two different problem encodings. Results show that a genome encoding the work package ordering, and a fitness function obtained by queuing simulation constitute the best choice, both in terms of quality of results and number of fitness evaluations required to achieve them.

    [Bibtex]

    @inproceedings{conf/gecco/AntoniolPH04,
    author = {Giuliano Antoniol and Massimiliano Di Penta and Mark Harman},
    title = {Search-Based Techniques for Optimizing Software Project Resource Allocation},
    booktitle = {GECCO (2)},
    year = {2004},
    pages = {1425-1426},
    ee = {http://dx.doi.org/10.1007/978-3-540-24855-2_162},
    crossref = {DBLP:conf/gecco/2004-2},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {We present a search--based approach for planning resource allocation in large software projects, which aims to find an optimal or near optimal order in which to allocate work packages to programming teams, in order to minimize the project duration. The approach is validated by an empirical study of a large, commercial Y2K massive maintenance project, comparing random scheduling, hill climbing, simulated annealing and genetic algorithms, applied to two different problem encodings. Results show that a genome encoding the work package ordering, and a fitness function obtained by queuing simulation constitute the best choice, both in terms of quality of results and number of fitness evaluations required to achieve them.},
    }
  • G. Antoniol, M. D. Penta, and M. Zazzara, “Understanding web applications through dynamic analysis,” in Iwpc, 2004, pp. 120-131.
    [Bibtex]
    @inproceedings{conf/iwpc/AntoniolPZ04,
    author = {Giuliano Antoniol and Massimiliano Di Penta and Michele Zazzara},
    title = {Understanding Web Applications through Dynamic Analysis},
    booktitle = {IWPC},
    year = {2004},
    pages = {120-131},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WPC.2004.1311054},
    crossref = {DBLP:conf/iwpc/2004},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, M. D. Penta, and M. Harman, “A robust search-based approach to project management in the presence of abandonment, rework, error and uncertainty,” in Ieee metrics, 2004, pp. 172-183.
    [Bibtex]
    @inproceedings{conf/metrics/AntoniolPH04,
    author = {Giuliano Antoniol and Massimiliano Di Penta and Mark Harman},
    title = {A Robust Search-Based Approach to Project Management in the Presence of Abandonment, Rework, Error and Uncertainty},
    booktitle = {IEEE METRICS},
    year = {2004},
    pages = {172-183},
    ee = {http://doi.ieeecomputersociety.org/10.1109/METRIC.2004.1357901},
    crossref = {DBLP:conf/metrics/2004},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • E. Merlo, G. Antoniol, M. D. Penta, and V. F. Rollo, “Linear complexity object-oriented similarity for clone detection and software evolution analyses,” in Icsm, 2004, pp. 412-416.
    [Bibtex]
    @inproceedings{conf/icsm/MerloAPR04,
    author = {Ettore Merlo and Giuliano Antoniol and Massimiliano Di Penta and Vincenzo Fabio Rollo},
    title = {Linear Complexity Object-Oriented Similarity for Clone Detection and Software Evolution Analyses},
    booktitle = {ICSM},
    year = {2004},
    pages = {412-416},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICSM.2004.1357826},
    crossref = {DBLP:conf/icsm/2004},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol and M. D. Penta, “A distributed architecture for dynamic analyses on user-profile data,” in Csmr, 2004, pp. 319-328.
    [Abstract]

    Combining static and dynamic information is highly relevant in many reverse engineering, program comprehension and maintenance tasks. Dynamic analysis is particularly effective when information is collected during a long period of time in a real user environment. This, however, poses several challenges. First and foremost, it is necessary to model the extraction of any relevant dynamic information from execution traces, thus avoiding the collection of a large amount of unmanageable data. Second, we need a distributed architecture that allows collecting and compressing such information from geographically distributed users. We propose a probabilistic model for representing dynamic information, as well as a web-service based distributed architecture for its collection and compression. The new architecture has been instantiated to collect interprocedural program execution traces up to a selectable level of calling context sensitivity. The paper details the role and responsibilities of the architecture components, as well as performance and compression ratios achieved on a set of C and Java programs.

    [Bibtex]

    @inproceedings{conf/csmr/AntoniolP04,
    author = {Giuliano Antoniol and Massimiliano Di Penta},
    title = {A Distributed Architecture for Dynamic Analyses on User-Profile Data},
    booktitle = {CSMR},
    year = {2004},
    pages = {319-328},
    ee = {http://dx.doi.org/10.1109/CSMR.2004.1281434, http://doi.ieeecomputersociety.org/10.1109/CSMR.2004.1281434},
    crossref = {DBLP:conf/csmr/2004},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Combining static and dynamic information is highly relevant in many reverse engineering, program comprehension and maintenance tasks. Dynamic analysis is particularly effective when information is collected during a long period of time in a real user environment. This, however, poses several challenges. First and foremost, it is necessary to model the extraction of any relevant dynamic information from execution traces, thus avoiding the collection of a large amount of unmanageable data. Second, we need a distributed architecture that allows collecting and compressing such information from geographically distributed users. We propose a probabilistic model for representing dynamic information, as well as a web-service based distributed architecture for its collection and compression. The new architecture has been instantiated to collect interprocedural program execution traces up to a selectable level of calling context sensitivity. The paper details the role and responsibilities of the architecture components, as well as performance and compression ratios achieved on a set of C and Java programs.},
    }
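
A minimal sketch (in Python) of the fitness evaluation suggested by the GECCO 2004 resource-allocation entry above: a candidate solution is an ordering of work packages, and its fitness is the completion time (makespan) obtained by a simple queuing simulation that assigns each package, in order, to the team that becomes free first. The package durations and team count are illustrative assumptions, not data from the studied project.

    import heapq

    def makespan(ordering, durations, n_teams):
        """Simulate n_teams servers consuming work packages in the given order."""
        free_at = [0.0] * n_teams       # time at which each team becomes free
        heapq.heapify(free_at)
        finish = 0.0
        for wp in ordering:
            t = heapq.heappop(free_at)  # earliest available team takes the next package
            t += durations[wp]
            finish = max(finish, t)
            heapq.heappush(free_at, t)
        return finish

    if __name__ == "__main__":
        durations = {"wp1": 5, "wp2": 3, "wp3": 8, "wp4": 2, "wp5": 6}
        order = ["wp3", "wp1", "wp5", "wp2", "wp4"]
        # A search algorithm (hill climbing, GA, ...) would mutate `order`
        # and keep the permutation with the smallest makespan.
        print(makespan(order, durations, n_teams=2))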

2003

  • [PDF] G. Antoniol, M. D. Penta, G. Masone, and U. Villano, “Xogastan: xml-oriented gcc ast analysis and transformations,” in Scam, 2003, pp. 173-182.
    [Abstract]

    Software maintenance, program analysis and transformation tools almost always rely on static source code analysis as the first and fundamental step to gather information. In the past, two different strategies have been adopted to develop tool suites. There are tools encompassing or implementing the source parse step, where the parser is internal to the toolkit, developed and maintained with it. A different approach builds tools on top of external, already available, components such as compilers that output the abstract syntax tree, or make it available via an API. We present an approach and a tool, XOgastan, developed exploiting the gcc/g++ ability to save a representation of the intermediate abstract syntax tree into a file. XOgastan translates the gcc/g++ format into a graph exchange language representation, thus taking advantage of the high number of currently available XML tools for the subsequent analysis phases. The tool is illustrated and its design is discussed, showing its architecture and the main implementation choices.

    [Bibtex]

    @inproceedings{01238043,
    author = {Giuliano Antoniol and Massimiliano Di Penta and Gianluca Masone and Umberto Villano},
    title = {XOgastan: XML-Oriented gcc AST Analysis and Transformations},
    booktitle = {SCAM},
    year = {2003},
    pages = {173-182},
    ee = {http://doi.ieeecomputersociety.org/10.1109/SCAM.2003.1238043},
    crossref = {DBLP:conf/scam/2003},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Software maintenance, program analysis and transformation tools almost always rely on static source code analysis as the first and fundamental step to gather information. In the past, two different strategies have been adopted to develop tool suites. There are tools encompassing or implementing the source parse step, where the parser is internal to the toolkit, developed and maintained with it. A different approach builds tools on top of external, already available, components such as compilers that output the abstract syntax tree, or make it available via an API. We present an approach and a tool, XOgastan, developed exploiting the gcc/g++ ability to save a representation of the intermediate abstract syntax tree into a file. XOgastan translates the gcc/g++ format into a graph exchange language representation, thus taking advantage of the high number of currently available XML tools for the subsequent analysis phases. The tool is illustrated and its design is discussed, showing its architecture and the main implementation choices.
    },
    pdf = {2003/01238043.pdf},
    }
  • [PDF] E. Merlo, G. Antoniol, and P. Brunelle, “Fast flow analysis to compute fuzzy estimates of risk levels,” in Csmr, 2003, p. 351-.
    [Abstract]

    In the context of software quality assessment, this paper proposes original flow analyses which propagate numerical estimates of blocking risks along an inter-procedural control flow graph (CFG) and which combine these estimates along the different CFG paths using fuzzy logic operations. Two specialized analyses can be further defined in terms of definite and possible flow analysis. The definite analysis computes the minimum blocking risk levels that statements may encounter on every path, while the possible analysis computes the highest blocking risk levels encountered by statements on at least one path. This paper presents original flow equations to compute the definite and possible blocking risk levels for statements in source code. The described fix-point algorithm presents a linear execution time and memory complexity and it is also fast in practice. The experimental context used to validate the presented approach is described and results are reported and discussed for eight publicly available systems written in C whose total size is about 300 KLOC. Results show that the analyses can be used to compute, identify, and compare definite and possible blocking risks in software systems. Furthermore, programs which are known to be synchronized, like "samba", show a relatively high level of blocking risks. On the other hand, the approach allows identifying even low levels of blocking risks such as those presented by programs like "gawk".

    [Bibtex]

    @inproceedings{01192443,
    author = {Ettore Merlo and Giuliano Antoniol and Pierre-Luc Brunelle},
    title = {Fast Flow Analysis to Compute Fuzzy Estimates of Risk Levels},
    booktitle = {CSMR},
    year = {2003},
    pages = {351-},
    ee = {http://dx.doi.org/10.1109/CSMR.2003.1192443, http://doi.ieeecomputersociety.org/10.1109/CSMR.2003.1192443},
    crossref = {DBLP:conf/csmr/2003},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    In the context of software quality assessment, this paper proposes original flow analyses which propagate numerical estimates of blocking risks along an inter-procedural control flow graph (CFG) and which combine these estimates along the different CFG paths using fuzzy logic operations. Two specialized analyses can be further defined in terms of definite and possible flow analysis. The definite analysis computes the minimum blocking risk levels that statements may encounter on every path, while the possible analysis computes the highest blocking risk levels encountered by statements on at least one path. This paper presents original flow equations to compute the definite and possible blocking risk levels for statements in source code. The described fix-point algorithm presents a linear execution time and memory complexity and it is also fast in practice. The experimental context used to validate the presented approach is described and results are reported and discussed for eight publicly available systems written in C whose total size is about 300 KLOC. Results show that the analyses can be used to compute, identify, and compare definite and possible blocking risks in software systems. Furthermore, programs which are known to be synchronized, like "samba", show a relatively high level of blocking risks. On the other hand, the approach allows identifying even low levels of blocking risks such as those presented by programs like "gawk".
    },
    pdf = {2003/01192443.pdf},
    }
  • [PDF] P. Brunelle, E. Merlo, and G. Antoniol, “Investigating java type analyses for the receiver-classes testing criterion,” in Issre, 2003, pp. 419-429.
    [Abstract]

    This paper investigates the precision of three linear-complexity type analyses for Java software: Class Hierarchy Analysis (CHA), Rapid Type Analysis (RTA) and Variable Type Analysis (VTA). Precision is measured relative to class targets. Class targets results are useful in the context of the receiver-classes criterion, which is an object-oriented testing strategy that aims to exercise every possible class binding of the receiver object reference at each dynamic call site. In this context, using a more precise analysis decreases the number of infeasible bindings to cover, thus it reduces the time spent on conceiving test data sets. This paper also introduces two novel variations to VTA, called the iteration and intersection variants. We present experimental results about the precision of CHA, RTA and VTA on a set of 17 Java programs, corresponding to a total of 600 kLOC of source code. Results show that, on average, RTA suggests 13% fewer bindings than CHA, standard VTA suggests 23% fewer bindings than CHA, and VTA with the two variations together suggests 32% fewer bindings than CHA.

    [Bibtex]

    @inproceedings{01251063,
    author = {Pierre-Luc Brunelle and Ettore Merlo and Giuliano Antoniol},
    title = {Investigating Java Type Analyses for the Receiver-Classes Testing Criterion},
    booktitle = {ISSRE},
    year = {2003},
    pages = {419-429},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ISSRE.2003.1251063},
    crossref = {DBLP:conf/issre/2003},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    This paper investigates the precision of three linear-complexity type analyses for Java software: Class Hierarchy Analysis (CHA), Rapid Type Analysis (RTA) and Variable Type Analysis (VTA). Precision is measured relative to class targets. Class targets results are useful in the context of the receiver-classes criterion, which is an object-oriented testing strategy that aims to exercise every possible class binding of the receiver object reference at each dynamic call site. In this context, using a more precise analysis decreases the number of infeasible bindings to cover, thus it reduces the time spent on conceiving test data sets. This paper also introduces two novel variations to VTA, called the iteration and intersection variants. We present experimental results about the precision of CHA, RTA and VTA on a set of 17 Java programs, corresponding to a total of 600 kLOC of source code. Results show that, on average, RTA suggests 13\% fewer bindings than CHA, standard VTA suggests 23\% fewer bindings than CHA, and VTA with the two variations together suggests 32\% fewer bindings than CHA.
    },
    pdf = {2003/01251063.pdf},
    }
  • G. Antoniol, M. Ceccarelli, A. Maratea, and F. Russo, “Classification of digital terrain models through fuzzy clustering: an application,” in Wilf, 2003, pp. 174-182.
    [Bibtex]
    @inproceedings{conf/wilf/AntoniolCMR03,
    author = {Giuliano Antoniol and Michele Ceccarelli and Antonio Maratea and F. Russo},
    title = {Classification of Digital Terrain Models Through Fuzzy Clustering: An Application},
    booktitle = {WILF},
    year = {2003},
    pages = {174-182},
    ee = {http://dx.doi.org/10.1007/10983652_22},
    crossref = {DBLP:conf/wilf/2003},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, M. D. Penta, and E. Merlo, “Yaab (yet another ast browser): using ocl to navigate asts,” in Iwpc, 2003, p. 13-.
    [Bibtex]
    @inproceedings{conf/iwpc/AntoniolPM03,
    author = {Giuliano Antoniol and Massimiliano Di Penta and Ettore Merlo},
    title = {YAAB (Yet Another AST Browser): Using OCL to Navigate ASTs},
    booktitle = {IWPC},
    year = {2003},
    pages = {13-},
    ee = {http://computer.org/proceedings/iwpc/1883/18830013abs.htm},
    crossref = {DBLP:conf/iwpc/2003},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, M. Ceccarelli, V. F. Rollo, W. Longo, T. Nutile, M. Ciullo, E. Colonna, A. Calabria, M. Astore, A. Lembo, P. Toriello, and G. M. Persico, “Browsing large pedigrees to study of the isolated populations in the "parco nazionale del cilento e vallo di diano",” in Wirn, 2003, pp. 258-268.
    [Bibtex]
    @inproceedings{conf/wirn/AntoniolCRLNCCCALTP03,
    author = {Giuliano Antoniol and Michele Ceccarelli and Vincenzo Fabio Rollo and Wanda Longo and Teresa Nutile and Marina Ciullo and Enza Colonna and Antonietta Calabria and Maria Astore and Anna Lembo and Paola Toriello and M. Grazia Persico},
    title = {Browsing Large Pedigrees to Study of the Isolated Populations in the "Parco Nazionale del Cilento e Vallo di Diano"},
    booktitle = {WIRN},
    year = {2003},
    pages = {258-268},
    ee = {http://dx.doi.org/10.1007/978-3-540-45216-4_29},
    crossref = {DBLP:conf/wirn/2003},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, M. D. Penta, and M. Neteler, “Moving to smaller libraries via clustering and genetic algorithms,” in Csmr, 2003, pp. 307-316.
    [Bibtex]
    @inproceedings{conf/csmr/AntoniolPN03,
    author = {Giuliano Antoniol and Massimiliano Di Penta and Markus Neteler},
    title = {Moving to Smaller Libraries via Clustering and Genetic Algorithms},
    booktitle = {CSMR},
    year = {2003},
    pages = {307-316},
    ee = {http://dx.doi.org/10.1109/CSMR.2003.1192439, http://doi.ieeecomputersociety.org/10.1109/CSMR.2003.1192439},
    crossref = {DBLP:conf/csmr/2003},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol and M. D. Penta, “Library miniaturization using static and dynamic information,” in Icsm, 2003, p. 235-.
    [Bibtex]
    @inproceedings{conf/icsm/AntoniolP03,
    author = {Giuliano Antoniol and Massimiliano Di Penta},
    title = {Library Miniaturization Using Static and Dynamic Information},
    booktitle = {ICSM},
    year = {2003},
    pages = {235-},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICSM.2003.1235426},
    crossref = {DBLP:conf/icsm/2003},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
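
A toy sketch (in Python) of fix-point propagation in the spirit of the CSMR 2003 fuzzy risk-level entry above: risk estimates in [0, 1] attached to statements are propagated along a control flow graph and combined with fuzzy operators (max along a path, min or max across paths for the definite and possible analyses). The graph, the seed risks and the equations below are illustrative assumptions, not the paper's original flow equations.

    def propagate(cfg, risk, across_paths):
        """Iterate to a fixed point; `across_paths` is min (definite) or max (possible)."""
        val = {n: 0.0 for n in cfg}
        changed = True
        while changed:
            changed = False
            for n in cfg:
                preds = [p for p in cfg if n in cfg[p]]
                incoming = across_paths(val[p] for p in preds) if preds else 0.0
                new = max(incoming, risk.get(n, 0.0))  # risk accumulates along a path
                if new != val[n]:
                    val[n], changed = new, True
        return val

    if __name__ == "__main__":
        cfg = {"entry": ["a", "b"], "a": ["join"], "b": ["join"], "join": []}
        risk = {"a": 0.8, "b": 0.2}   # e.g., a risky call on branch `a`
        print("possible:", propagate(cfg, risk, max))
        print("definite:", propagate(cfg, risk, min))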

2002

  • [PDF] E. Merlo, M. Dagenais, P. Bachand, J. S. Sormani, S. Gradara, and G. Antoniol, “Investigating large software system evolution: the linux kernel,” in Compsac, 2002, pp. 421-426.
    [Abstract]

    Large multi-platform, multi-million lines of code software systems evolve to cope with new platforms or to meet users' ever changing needs. While there have been several studies focused on the similarity of code fragments or modules, few studies have addressed the need to monitor the overall system evolution. Meanwhile, the decision to evolve or to refactor a large software system needs to be supported by high level information representing the system overall picture, abstracting from unnecessary details. This paper proposes to extend the concept of similarity of code fragments to quantify similarities at the release/system level. Similarities are captured by four software metrics representative of the commonalities and differences within and among software artifacts. To show the feasibility of characterizing large software systems with the new metrics, 365 releases of the Linux kernel were analyzed. The metrics, the experimental results, as well as the lessons learned are presented in the paper.

    [Bibtex]

    @inproceedings{01045038,
    author = {Ettore Merlo and Michel Dagenais and P. Bachand and J. S. Sormani and Sara Gradara and Giuliano Antoniol},
    title = {Investigating Large Software System Evolution: The Linux Kernel},
    booktitle = {COMPSAC},
    year = {2002},
    pages = {421-426},
    ee = {http://doi.ieeecomputersociety.org/10.1109/CMPSAC.2002.1045038},
    crossref = {DBLP:conf/compsac/2002},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {2002/01045038.pdf},
    abstract = {Large multi-platform, multi-million lines of code software systems evolve to cope with new platforms or to meet users' ever changing needs. While there have been several studies focused on the similarity of code fragments or modules, few studies have addressed the need to monitor the overall system evolution. Meanwhile, the decision to evolve or to refactor a large software system needs to be supported by high level information representing the system overall picture, abstracting from unnecessary details. This paper proposes to extend the concept of similarity of code fragments to quantify similarities at the release/system level. Similarities are captured by four software metrics representative of the commonalities and differences within and among software artifacts. To show the feasibility of characterizing large software systems with the new metrics, 365 releases of the Linux kernel were analyzed. The metrics, the experimental results, as well as the lessons learned are presented in the paper.},
    }
  • [PDF] G. Antoniol, L. C. Briand, M. D. Penta, and Y. Labiche, “A case study using the round-trip strategy for state-based class testing,” in Issre, 2002, pp. 269-279.
    [Abstract]

    A number of strategies have been proposed for state-based class testing. An important proposal made by Chow, that was subsequently adapted by Binder, consists in deriving test sequences covering all round-trip paths in a finite state machine (FSMs). Based on a number of (rather strong) assumptions, and for traditional FSMs, it can be demonstrated that all operation and transfer errors in the implementation can be uncovered. Through experimentation, this paper investigates this strategy when used in the context of UML statecharts. Based on a set of mutation operators proposed for object-oriented code we seed a significant number of faults in an implementation of a specific container class. We then investigate the effectiveness of four test teams at uncovering faults, based on the round-trip path strategy, and analyze the faults that seem to be difficult to detect. Our main conclusion is that the round-trip path strategy is reasonably effective at detecting faults (87% average as opposed to 69% for size-equivalent, random test cases) but that a significant number of faults can only exhibit a high detection probability by augmenting the round-trip strategy with a traditional black-box strategy such as category-partition testing. This increases the number of test cases to run —and therefore the cost of testing— and a cost-benefit analysis weighting the increase of testing effort and the likely gain in fault detection is necessary.

    [Bibtex]

    @inproceedings{01173268,
    author = {Giuliano Antoniol and Lionel C. Briand and Massimiliano Di Penta and Yvan Labiche},
    title = {A Case Study Using the Round-Trip Strategy for State-Based Class Testing},
    booktitle = {ISSRE},
    year = {2002},
    pages = {269-279},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ISSRE.2002.1173268},
    crossref = {DBLP:conf/issre/2002},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {A number of strategies have been proposed for state-based class testing. An important proposal made by Chow, that was subsequently adapted by Binder, consists in deriving test sequences covering all round-trip paths in a finite state machine (FSMs). Based on a number of (rather strong) assumptions, and for traditional FSMs, it can be demonstrated that all operation and transfer errors in the implementation can be uncovered. Through experimentation, this paper investigates this strategy when used in the context of UML statecharts. Based on a set of mutation operators proposed for object-oriented code we seed a significant number of faults in an implementation of a specific container class. We then investigate the effectiveness of four test teams at uncovering faults, based on the round-trip path strategy, and analyze the faults that seem to be difficult to detect. Our main conclusion is that the round-trip path strategy is reasonably effective at detecting faults (87\% average as opposed to 69\% for size-equivalent, random test cases) but that a significant number of faults can only exhibit a high detection probability by augmenting the round-trip strategy with a traditional black-box strategy such as category-partition testing. This increases the number of test cases to run —and therefore the cost of testing— and a cost-benefit analysis weighting the increase of testing effort and the likely gain in fault detection is necessary.},
    pdf = {2002/01173268.pdf},
    }
  • M. D. Penta, M. Neteler, G. Antoniol, and E. Merlo, “Knowledge-based library re-factoring for an open source project,” in Wcre, 2002, pp. 319-328.
    [Bibtex]
    @inproceedings{conf/wcre/PentaNAM02,
    author = {Massimiliano Di Penta and Markus Neteler and Giuliano Antoniol and Ettore Merlo},
    title = {Knowledge-Based Library Re-Factoring for an Open Source Project},
    booktitle = {WCRE},
    year = {2002},
    pages = {319-328},
    ee = {http://computer.org/proceedings/wcre/1799/17990319abs.htm},
    crossref = {DBLP:conf/wcre/2002},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • M. D. Penta, S. Gradara, and G. Antoniol, “Traceability recovery in rad software systems,” in Iwpc, 2002, pp. 207-218.
    [Bibtex]
    @inproceedings{PentaGA02,
    author = {Massimiliano Di Penta and Sara Gradara and Giuliano Antoniol},
    title = {Traceability Recovery in RAD Software Systems},
    booktitle = {IWPC},
    year = {2002},
    pages = {207-218},
    ee = {http://computer.org/proceedings/iwpc/1495/14950207abs.htm},
    crossref = {DBLP:conf/iwpc/2002},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

2001

  • [PDF] G. Antoniol, U. Villano, M. D. Penta, G. Casazza, and E. Merlo, “Identifying clones in the linux kernel,” in Scam, 2001, pp. 92-99.
    [Abstract]

    Large multi-platform software systems are likely to encompass hardware-dependent code or sub-systems. However, analyzing multi-platform source code is challenging, due to the variety of supported configurations. Often, the system was originally developed for a single platform, and then new target platforms were added. This practice promotes the presence of duplicated code, also called "cloned" code. The paper presents the clone percentage of a multi-platform-multi-million lines of code, Linux kernel version 2.4.0, detected with a metric-based approach. After a brief description of the procedure followed for code analysis and clone identification, the obtained results are commented upon

    [Bibtex]

    @inproceedings{00972670,
    author = {Giuliano Antoniol and Umberto Villano and Massimiliano Di Penta and Gerardo Casazza and Ettore Merlo},
    title = {Identifying Clones in the Linux Kernel},
    booktitle = {SCAM},
    year = {2001},
    pages = {92-99},
    ee = {http://doi.ieeecomputersociety.org/10.1109/SCAM.2001.10003},
    crossref = {DBLP:conf/scam/2001},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    Large multi-platform software systems are likely to encompass hardware-dependent code or sub-systems. However, analyzing multi-platform source code is challenging, due to the variety of supported configurations. Often, the system was originally developed for a single platform, and then new target platforms were added. This practice promotes the presence of duplicated code, also called "cloned" code. The paper presents the clone percentage of a multi-platform-multi-million lines of code, Linux kernel version 2.4.0, detected with a metric-based approach. After a brief description of the procedure followed for code analysis and clone identification, the obtained results are commented upon
    },
    pdf = {2001/00972670.pdf},
    }
  • [PDF] G. Antoniol, G. Casazza, G. D. A. Lucca, M. D. Penta, and E. Merlo, “Predicting web site access: an application of time series,” in Wse, 2001, pp. 57-61.
    [Abstract]

    The Internet and Web pervasiveness are changing the landscape of several different areas ranging from information gathering/managing and commerce to software development. This paper presents a case study where time series were adopted to forecast future Web site access. In order to measure the applicability of time series to the prediction of Web site accesses, an experimental activity was performed. The log-access file of an academic Web site (http://www.ing.unisannio.it) was analyzed and its data used as test set. The analyzed Web site contains general information about the Faculty of Engineering of University of Sannio at Benevento (Italy). Preliminary results were encouraging: the average number of connections per week could be predicted with an acceptable error.

    [Bibtex]

    @inproceedings{00988786,
    author = {Giuliano Antoniol and Gerardo Casazza and Giuseppe A. Di Lucca and Massimiliano Di Penta and Ettore Merlo},
    title = {Predicting Web Site Access: An Application of Time Series},
    booktitle = {WSE},
    year = {2001},
    pages = {57-61},
    ee = {http://doi.ieeecomputersociety.org/10.1109/WSE.2001.988786},
    crossref = {DBLP:conf/wse/2001},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {
    The Internet and Web pervasiveness are changing the landscape of several different areas ranging from information gathering/managing and commerce to software development. This paper presents a case study where time series were adopted to forecast future Web site access. In order to measure the applicability of time series to the prediction of Web site accesses, an experimental activity was performed. The log-access file of an academic Web site (http://www.ing.unisannio.it) was analyzed and its data used as test set. The analyzed Web site contains general information about the Faculty of Engineering of University of Sannio at Benevento (Italy). Preliminary results were encouraging: the average number of connections per week could be predicted with an acceptable error.
    },
    pdf = {2001/00988786.pdf},
    }
  • B. Malenfant, G. Antoniol, E. Merlo, and M. Dagenais, “Flow analysis to detect blocked statements,” in Icsm, 2001, p. 62-.
    [Abstract]

    In the context of software quality assessment, the paper proposes two new kinds of data which can be extracted from source code. The first, definitely blocked statements, can never be executed because preceding code prevents the execution of the program. The other data, called possibly blocked statements, may be blocked by blocking code. The paper presents original flow equations to compute definitely and possibly blocked statements in source code. The experimental context is described and results are shown and discussed. Suggestions for further research are also presented.

    [Bibtex]

    @inproceedings{conf/icsm/MalenfantAMD01,
    author = {Bruno Malenfant and Giuliano Antoniol and Ettore Merlo and Michel Dagenais},
    title = {Flow Analysis to Detect Blocked Statements},
    booktitle = {ICSM},
    year = {2001},
    pages = {62-},
    ee = {http://computer.org/proceedings/icsm/1189/11890062abs.htm},
    abstract = {
    In the context of software quality assessment, the paper proposes two new kinds of data which can be extracted from source code. The first, definitely blocked statements, can never be executed because preceding code prevents the execution of the program. The other data, called possibly blocked statements, may be blocked by blocking code. The paper presents original flow equations to compute definitely and possibly blocked statements in source code. The experimental context is described and results are shown and discussed. Suggestions for further research are also presented.
    },
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • M. D. Penta, G. Casazza, G. Antoniol, and E. Merlo, “Modeling web maintenance centers through queue models,” in Csmr, 2001, pp. 131-138.
    [Abstract]

    The Internet and WEB pervasiveness are changing the landscape of several different areas ranging from information gathering/managing and commerce to software development, maintenance and evolution. Traditionally phone-centric services such as ordering of goods, maintenance/repair intervention requests and bug/defect reporting are moving towards WEB-centric solutions. This paper proposes the adoption of queue theory to support the design, staffing, management and assessment of WEB-centric service centers. Data from a mailing list archiving a mixture of corrective maintenance and information requests were used to mimic a service center. Queue theory was adopted to model the relation between the number of servants and the performance level. Empirical evidence revealed that by adding an express lane and a dispatcher, service time variability is greatly reduced and more complex business rules may be implemented. Moreover, express lane customers experience a reduction of service time even in the presence of a significant percentage of requests erroneously routed by the dispatcher.

    [Bibtex]

    @inproceedings{conf/csmr/PentaCAM01,
    author = {Massimiliano Di Penta and Gerardo Casazza and Giuliano Antoniol and Ettore Merlo},
    title = {Modeling Web Maintenance Centers through Queue Models},
    booktitle = {CSMR},
    year = {2001},
    pages = {131-138},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {The Internet and WEB pervasiveness are changing the landscape of several different areas ranging from information gathering/managing and commerce to software development, maintenance and evolution. Traditionally phone-centric services such as ordering of goods, maintenance/repair intervention requests and bug/defect reporting are moving towards WEB-centric solutions. This paper proposes the adoption of queue theory to support the design, staffing, management and assessment of WEB-centric service centers. Data from a mailing list archiving a mixture of corrective maintenance and information requests were used to mimic a service center. Queue theory was adopted to model the relation between the number of servants and the performance level. Empirical evidence revealed that by adding an express lane and a dispatcher, service time variability is greatly reduced and more complex business rules may be implemented. Moreover, express lane customers experience a reduction of service time even in the presence of a significant percentage of requests erroneously routed by the dispatcher.},
    }
  • G. Antoniol, G. Casazza, G. D. A. Lucca, M. D. Penta, and F. Rago, “A queue theory-based approach to staff software maintenance centers,” in Icsm, 2001, pp. 510-519.
    [Abstract]

    The Internet and WEB pervasiveness are changing the landscape of several different areas ranging from information gathering/managing and commerce to software development, maintenance and evolution. Software companies having a geographically distributed structure or geographically distributed customers are adopting information communication technologies to cooperate. Communicating and exchanging knowledge between different company branches and with customers creates de facto a virtual software factory. This paper proposes to adopt queue theory to deal with an economically relevant category of problems: the staffing, the process management and the service level evaluation of massive maintenance projects in a virtual software factory.

    [Bibtex]

    @inproceedings{conf/icsm/AntoniolCLPR01,
    author = {Giuliano Antoniol and Gerardo Casazza and Giuseppe A. Di Lucca and Massimiliano Di Penta and Francesco Rago},
    title = {A Queue Theory-Based Approach to Staff Software Maintenance Centers},
    booktitle = {ICSM},
    year = {2001},
    pages = {510-519},
    ee = {http://computer.org/proceedings/icsm/1189/11890510abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {The Internet and WEB pervasiveness are changing the landscape of several different areas ranging from information gathering/managing and commerce to software development, maintenance and evolution. Software companies having a geographically distributed structure or geographically distributed customers are adopting information communication technologies to cooperate. Communicating and exchanging knowledge between different company branches and with customers creates de facto a virtual software factory. This paper proposes to adopt queue theory to deal with an economically relevant category of problems: the staffing, the process management and the service level evaluation of massive maintenance projects in a virtual software factory.},
    }
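    The kind of staffing analysis this entry refers to can be illustrated with a textbook M/M/c (Erlang C) computation. The sketch below is purely illustrative: the arrival rate, service rate and servant counts are invented numbers and the helper functions are ours, not the model calibrated in the paper.
    from math import factorial

    def erlang_c(arrival_rate, service_rate, servers):
        """Probability that a request must wait in an M/M/c queue (Erlang C)."""
        a = arrival_rate / service_rate          # offered load
        rho = a / servers                        # utilization, must stay below 1
        if rho >= 1:
            raise ValueError("unstable queue: utilization >= 1")
        top = a ** servers / (factorial(servers) * (1 - rho))
        bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
        return top / bottom

    def mean_wait(arrival_rate, service_rate, servers):
        """Expected time a request waits before being served."""
        return erlang_c(arrival_rate, service_rate, servers) / (servers * service_rate - arrival_rate)

    # Invented example: 12 requests/day arrive, each servant handles 5 requests/day.
    for c in range(3, 7):
        print(f"{c} servants -> expected wait {mean_wait(12, 5, c):.3f} days")
    Sweeping the number of servants in this way is the kind of what-if question a maintenance-center manager would ask of such a model.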
  • G. D. A. Lucca, M. D. Penta, G. Antoniol, and G. Casazza, “An approach for reverse engineering of web-based application,” in Wcre, 2001, pp. 231-240.
    [Bibtex]
    @inproceedings{conf/wcre/LuccaPAC01,
    author = {Giuseppe A. Di Lucca and Massimiliano Di Penta and Giuliano Antoniol and Gerardo Casazza},
    title = {An Approach for Reverse Engineering of Web-Based Application},
    booktitle = {WCRE},
    year = {2001},
    pages = {231-240},
    ee = {http://computer.org/proceedings/wcre/1303/13030231abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, G. Casazza, M. D. Penta, and E. Merlo, “Modeling clones evolution through time series,” in Icsm, 2001, pp. 273-280.
    [Abstract]

    The actual effort to evolve and maintain a software system is likely to vary depending on the amount of clones (i.e. duplicated or slightly different code fragments) present in the system. This paper presents a method for monitoring and predicting clones evolution across subsequent versions of a software system. Clones are firstly identified using a metric-based approach, then they are modeled in terms of time series identifying predictive models. The proposed method has been validated with an experimental activity performed on 27 subsequent versions of mSQL, a medium-size software system written in C. The time span period of the analyzed mSQL releases covers four years from May 1995 (mSQL 1.0.6) to May 1999 (mSQL 2.0.10). For any given software release the identified model was able to predict the clone percentage of the subsequent release with an average error below 4%. A higher prediction error was observed only in correspondence of major system redesign.

    [Bibtex]

    @inproceedings{conf/icsm/AntoniolCPM01,
    author = {Giuliano Antoniol and Gerardo Casazza and Massimiliano Di Penta and Ettore Merlo},
    title = {Modeling Clones Evolution through Time Series},
    booktitle = {ICSM},
    year = {2001},
    pages = {273-280},
    ee = {http://computer.org/proceedings/icsm/1189/11890273abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {The actual effort to evolve and maintain a software system is likely to vary depending on the amount of clones (i.e. duplicated or slightly different code fragments) present in the system. This paper presents a method for monitoring and predicting clones evolution across subsequent versions of a software system. Clones are firstly identified using a metric-based approach, then they are modeled in terms of time series identifying predictive models. The proposed method has been validated with an experimental activity performed on 27 subsequent versions of mSQL, a medium-size software system written in C. The time span period of the analyzed mSQL releases covers four years from May 1995 (mSQL 1.0.6) to May 1999 (mSQL 2.0.10). For any given software release the identified model was able to predict the clone percentage of the subsequent release with an average error below 4 \%. A higher prediction error was observed only in correspondence of major system redesign.},
    }
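    As a toy illustration of the time-series idea behind the entry above, the fragment below fits a first-order autoregressive model to a clone-percentage series and forecasts the next release. The series values, the AR(1) choice and the plain NumPy formulation are assumptions made for the example; the models and data used in the paper differ.
    import numpy as np

    # Invented clone-percentage series across successive releases.
    clone_pct = np.array([18.2, 18.5, 18.9, 19.4, 19.3, 19.8, 20.1, 20.6])

    # Fit an AR(1) model  x[t] = a + b * x[t-1]  by ordinary least squares.
    X = np.column_stack([np.ones(len(clone_pct) - 1), clone_pct[:-1]])
    y = clone_pct[1:]
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]

    next_release = a + b * clone_pct[-1]
    residuals = y - X @ np.array([a, b])
    print(f"forecast for next release: {next_release:.2f}% "
          f"(in-sample mean absolute error {np.mean(np.abs(residuals)):.2f})")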
  • G. Antoniol, M. D. Penta, G. Casazza, and E. Merlo, “A method to re-organize legacy systems via concept analysis,” in Iwpc, 2001, pp. 281-292.
    [Bibtex]
    @inproceedings{conf/iwpc/AntoniolDCM01,
    author = {Giuliano Antoniol and Massimiliano Di Penta and Gerardo Casazza and Ettore Merlo},
    title = {A Method to Re-Organize Legacy Systems via Concept Analysis},
    booktitle = {IWPC},
    year = {2001},
    pages = {281-292},
    ee = {http://computer.org/proceedings/iwpc/1131/11310281abs.htm},
    crossref = {DBLP:conf/iwpc/2001},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

2000

  • G. Antoniol, G. Canfora, A. D. Lucia, G. Casazza, and E. Merlo, “Tracing object-oriented code into functional requirements,” in Iwpc, 2000, pp. 79-86.
    [Bibtex]
    @inproceedings{conf/iwpc/AntoniolCLCM00,
    author = {Giuliano Antoniol and Gerardo Canfora and Andrea De Lucia and Gerardo Casazza and Ettore Merlo},
    title = {Tracing Object-Oriented Code into Functional Requirements},
    booktitle = {IWPC},
    year = {2000},
    pages = {79-86},
    ee = {http://computer.org/proceedings/iwpc/0656/06560079abs.htm},
    crossref = {DBLP:conf/iwpc/2000},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, G. Casazza, and E. Merlo, “Identification of lower-level artifacts,” in Iwpc, 2000, p. 253.
    [Bibtex]
    @inproceedings{conf/iwpc/AntoniolCM00,
    author = {Giuliano Antoniol and Gerardo Casazza and Ettore Merlo},
    title = {Identification of Lower-Level Artifacts},
    booktitle = {IWPC},
    year = {2000},
    pages = {253},
    ee = {http://computer.org/proceedings/iwpc/0656/06560253abs.htm},
    crossref = {DBLP:conf/iwpc/2000},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, G. Canfora, G. Casazza, and A. D. Lucia, “Information retrieval models for recovering traceability links between code and documentation,” in Icsm, 2000, p. 40-.
    [Abstract]

    The research described in this paper is concerned with the application of information retrieval to software maintenance and in particular to the problem of recovering traceability links between the source code of a system and its free text documentation. We introduce a method based on the general idea of vector space information retrieval and apply it in two case studies to trace C++ source code onto manual pages and Java code onto functional requirements. The case studies discussed in this paper replicate the studies presented in previous works where a probabilistic information retrieval model was applied. We compare the results of vector space and probabilistic models and formulate hypotheses to explain the differences.

    [Bibtex]

    @inproceedings{conf/icsm/AntoniolCCL00,
    author = {Giuliano Antoniol and Gerardo Canfora and Gerardo Casazza and Andrea De Lucia},
    title = {Information Retrieval Models for Recovering Traceability Links between Code and Documentation},
    booktitle = {ICSM},
    year = {2000},
    pages = {40-},
    ee = {http://computer.org/proceedings/icsm/0753/07530040abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {The research described in this paper is concerned with the application of information retrieval to software maintenance and in particular to the problem of recovering traceability links between the source code of a system and its free text documentation. We introduce a method based on the general idea of vector space information retrieval and apply it in two case studies to trace C++ source code onto manual pages and Java code onto functional requirements. The case studies discussed in this paper replicate the studies presented in previous works where a probabilistic information retrieval model was applied. We compare the results of vector space and probabilistic models and formulate hypotheses to explain the differences.},
    }
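    The vector space model mentioned in the abstract above amounts to representing each artifact as a term vector and ranking documents by cosine similarity. The sketch below, using TF-IDF weighting from scikit-learn, is a minimal illustration with invented identifiers and section titles; it is not the tooling or the exact weighting scheme used in the paper.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Identifiers and comments extracted from two code units (toy strings).
    code_units = {
        "AccountManager.cpp": "account manager open close balance transfer owner",
        "ReportPrinter.cpp":  "report printer page format print header footer",
    }
    # Free-text documentation sections (also toy strings).
    doc_sections = {
        "Managing accounts": "the user can open an account, close it and transfer funds",
        "Printing reports":  "reports are formatted into pages with a header and a footer",
    }

    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(doc_sections.values())

    # Rank documentation sections for each code unit by cosine similarity.
    for name, text in code_units.items():
        scores = cosine_similarity(vectorizer.transform([text]), doc_matrix)[0]
        best_section, best_score = max(zip(doc_sections, scores), key=lambda p: p[1])
        print(f"{name} -> {best_section} (similarity {best_score:.2f})")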
  • G. Antoniol, G. Casazza, and A. Cimitile, “Traceability recovery by modeling programmer behavior,” in Wcre, 2000, pp. 240-247.
    [Bibtex]
    @inproceedings{conf/wcre/AntoniolCC00,
    author = {Giuliano Antoniol and Gerardo Casazza and Aniello Cimitile},
    title = {Traceability Recovery by Modeling Programmer Behavior},
    booktitle = {WCRE},
    year = {2000},
    pages = {240-247},
    ee = {http://computer.org/proceedings/wcre/0881/08810240abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, G. Casazza, A. Cimitile, and M. Tortorella, “An approach to limit the wynot problem,” in Icsm, 2000, pp. 207-215.
    [Abstract]

    Software evolution in a cooperative environment where a pool of maintainers/developers contribute to the overall system changes is challenging due to several factors such as the poor communication among individuals and the high number of produced changes. Conflicting or contradictory changes, unforeseen or unexpected dependencies may result in a non-working system. We propose a strategy aimed to reduce the risk of conflicting changes in a maintenance cooperative environment. To evaluate the feasibility of our approach and to attempt to estimate the size of the code to be scrutinized per single changed line, we developed a number of tools and tested our approach on 30 releases of the DDD software system. The preliminary results are encouraging: potentially impacted LOCS per single changed LOC is on the average less than 4.

    [Bibtex]

    @inproceedings{conf/icsm/AntoniolCCT00,
    author = {Giuliano Antoniol and Gerardo Casazza and Aniello Cimitile and Maria Tortorella},
    title = {An Approach to Limit the Wynot Problem},
    booktitle = {ICSM},
    year = {2000},
    pages = {207-215},
    ee = {http://computer.org/proceedings/icsm/0753/07530207abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Software evolution in a cooperative environment where a pool of maintainers/developers contribute to the overall system changes is challenging due to several factors such as the poor communication among individuals and the high number of produced changes. Conflicting or contradictory changes, unforeseen or unexpected dependencies may result in a non-working system. We propose a strategy aimed to reduce the risk of conflicting changes in a maintenance cooperative environment. To evaluate the feasibility of our approach and to attempt to estimate the size of the code to be scrutinized per single changed line, we developed a number of tools and tested our approach on 30 releases of the DDD software system. The preliminary results are encouraging: potentially impacted LOCS per single changed LOC is on the average less than 4.},
    }
  • G. Antoniol, G. Canfora, G. Casazza, and A. D. Lucia, “Identifying the starting impact set of a maintenance request: a case study,” in Csmr, 2000, pp. 227-230.
    [Bibtex]
    @inproceedings{conf/csmr/AntoniolCCL00,
    author = {Giuliano Antoniol and Gerardo Canfora and Gerardo Casazza and Andrea De Lucia},
    title = {Identifying the Starting Impact Set of a Maintenance Request: A Case Study},
    booktitle = {CSMR},
    year = {2000},
    pages = {227-230},
    ee = {http://www.computer.org/proceedings/csmr/0546/05460227abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

1999

  • [PDF] G. Antoniol, F. Calzolari, and P. Tonella, “Impact of function pointers on the call graph,” in Csmr, 1999, pp. 51-61.
    [Abstract]

    Maintenance activities are made more difficult when pointers are heavily used in source code: the programmer needs to build a mental model of memory locations and of the way they are accessed by means of pointers in order to comprehend the functionalities of the system. Although several points-to analysis algorithms have been proposed in literature to provide information about memory locations referenced by pointers, there are no quantitative evaluations of the impact of pointers on the overall program understanding activities. Program comprehension activities are usually supported by tools providing suitable views of the source program. One of the most widely used code views is the Call Graph, a graph representing calls between functions in the given program. Unfortunately, when pointers and especially function pointers are heavily used in the code, the extracted call graph is highly inaccurate and thus of little usage if a points-to analysis is not preliminarily performed. In this paper we will address the problem of evaluating the impact of pointer analysis on the Call Graph. The results obtained on a set of real world programs provide a quantitative evaluation and show the key role of pointer analysis in Call Graph construction.

    [Bibtex]

    @inproceedings{00756682,
    author = {Giuliano Antoniol and F. Calzolari and Paolo Tonella},
    title = {Impact of Function Pointers on the Call Graph},
    booktitle = {CSMR},
    year = {1999},
    pages = {51-61},
    ee = {http://dx.doi.org/10.1109/CSMR.1999.756682, http://doi.ieeecomputersociety.org/10.1109/CSMR.1999.756682},
    crossref = {DBLP:conf/csmr/1999},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {1999/00756682.pdf},
    abstract = {Maintenance activities are made more difficult when pointers are heavily used in source code: the programmer needs to build a mental model of memory locations and of the way they are accessed by means of pointers in order to comprehend the functionalities of the system. Although several points-to analysis algorithms have been proposed in literature to provide information about memory locations referenced by pointers, there are no quantitative evaluations of the impact of pointers on the overall program understanding activities. Program comprehension activities are usually supported by tools providing suitable views of the source program. One of the most widely used code views is the Call Graph, a graph representing calls between functions in the given program. Unfortunately, when pointers and especially function pointers are heavily used in the code, the extracted call graph is highly inaccurate and thus of little usage if a points-to analysis is not preliminarily performed. In this paper we will address the problem of evaluating the impact of pointer analysis on the Call Graph. The results obtained on a set of real world programs provide a quantitative evaluation and show the key role of pointer analysis in Call Graph construction.},
    }
  • [PDF] G. Antoniol, G. Canfora, and A. D. Lucia, “Estimating the size of changes for evolving object oriented systems: a case study,” in Ieee metrics, 1999, p. 250-.
    [Abstract]

    Size related measures have traditionally been the basis for effort estimation models to predict costs of software activities along the entire software product life cycle. Object-Oriented (OO) systems are developed and evolve by adding/removing new classes and modifying existing entities. We propose an approach to predict the size of changes of evolving OO systems based on the analysis of the classes impacted by a change request. Our approach can be used both in iterative development processes or during software maintenance. A first empirical evaluation of the proposed approach has been obtained by applying our tools to the post-release evolution of OO software systems available on the net. The systems were analyzed and models to predict added/modified LOCs from added/modified classes were statistically validated. In the paper preliminary results of the above outlined evaluation is presented.

    [Bibtex]

    @inproceedings{00809746,
    author = {Giuliano Antoniol and Gerardo Canfora and Andrea De Lucia},
    title = {Estimating the Size of Changes for Evolving Object Oriented Systems: A Case Study},
    booktitle = {IEEE METRICS},
    year = {1999},
    pages = {250-},
    ee = {http://doi.ieeecomputersociety.org/10.1109/METRIC.1999.809746},
    crossref = {DBLP:conf/metrics/1999},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {1999/00809746.pdf},
    abstract = {Size related measures have traditionally been the basis for effort estimation models to predict costs of software activities along the entire software product life cycle. Object-Oriented (OO) systems are developed and evolve by adding/removing new classes and modifying existing entities. We propose an approach to predict the size of changes of evolving OO systems based on the analysis of the classes impacted by a change request. Our approach can be used both in iterative development processes or during software maintenance. A first empirical evaluation of the proposed approach has been obtained by applying our tools to the post-release evolution of OO software systems available on the net. The systems were analyzed and models to predict added/modified LOCs from added/modified classes were statistically validated. In the paper preliminary results of the above outlined evaluation is presented.},
    }
  • [PDF] E. Merlo and G. Antoniol, “A static measure of a subset of intra-procedural data flow testing coverage based on node coverage,” in Cascon, 1999, p. 7.
    [Abstract]

    In the past years a number of research works which have been mostly based on pre and post dominator analysis have been presented about finding subsets of nodes and edges (called unrestricted subsets) such that their traversal during execution (if feasible) exercises respectively all feasible nodes and edges in a Control Flow Graph. This paper presents an approach to statically measure a subset of intra-procedural data flow (all uses) coverage obtained by exercising an unrestricted subset of nodes during testing. This measure indicates the possible degree of data flow testing obtainable while using a weaker test coverage criteria. The approach has been implemented in C++ on a PC under Linux and results obtained from the analysis of Gnu find tool which is about 16 KLOC of C-language source code are presented together with discussions and conclusions.

    [Bibtex]

    @inproceedings{p7-merlo,
    author = {Ettore Merlo and Giuliano Antoniol},
    title = {A static measure of a subset of intra-procedural data flow testing coverage based on node coverage},
    booktitle = {CASCON},
    year = {1999},
    pages = {7},
    ee = {http://doi.acm.org/10.1145/781995.782002},
    crossref = {DBLP:conf/cascon/1999},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {1999/p7-merlo.pdf},
    abstract = {In the past years a number of research works which have been mostly based on pre and post dominator analysis have been presented about finding subsets of nodes and edges (called unrestricted subsets) such that their traversal during execution (if feasible) exercises respectively all feasible nodes and edges in a Control Flow Graph. This paper presents an approach to statically measure a subset of intra-procedural data flow (all uses) coverage obtained by exercising an unrestricted subset of nodes during testing. This measure indicates the possible degree of data flow testing obtainable while using a weaker test coverage criteria. The approach has been implemented in C++ on a PC under Linux and results obtained from the analysis of Gnu find tool which is about 16 KLOC of C-language source code are presented together with discussions and conclusions.},
    }
  • P. Tonella and G. Antoniol, “Object-oriented design pattern inference,” in Icsm, 1999, p. 230-.
    [Abstract]

    When designing a new application experienced software engineers usually try to employ solutions that proved successful in previous projects. Such reuse of code organizations is seldom made explicit. Nevertheless it represents important information about the system that can be extremely valuable in the maintenance phase by documenting the design choices underlying the implementation. In addition having it available it can be reused whenever a similar problem is encountered. In this paper an approach is proposed to the inference of recurrent design patterns directly from the code or the design. No assumption is made on the availability of any pattern library and the concept analysis algorithm adapted for this purpose is able to infer the presence of class groups which instantiate a common repeated pattern. In fact concept analysis provides sets of objects sharing attributes which in the case of object oriented design patterns become class members or inter-class relations. The approach was applied to a C++ application for which the structural relations among classes led to the extraction of a set of structural design patterns which could be enriched with non structural information about class members and method invocations. The resulting patterns could be interpreted as meaningful organizations aimed at solving general problems which have several instances in the analyzed application.

    [Bibtex]

    @inproceedings{conf/icsm/TonellaA99,
    author = {Paolo Tonella and Giuliano Antoniol},
    title = {Object-Oriented Design Pattern Inference},
    booktitle = {ICSM},
    year = {1999},
    pages = {230-},
    ee = {http://computer.org/proceedings/icsm/0016/00160230abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {When designing a new application experienced software engineers usually try to employ solutions that proved successful in previous projects. Such reuse of code organizations is seldom made explicit. Nevertheless it represents important information about the system that can be extremely valuable in the maintenance phase by documenting the design choices underlying the implementation. In addition having it available it can be reused whenever a similar problem is encountered. In this paper an approach is proposed to the inference of recurrent design patterns directly from the code or the design. No assumption is made on the availability of any pattern library and the concept analysis algorithm adapted for this purpose is able to infer the presence of class groups which instantiate a common repeated pattern. In fact concept analysis provides sets of objects sharing attributes which in the case of object oriented design patterns become class members or inter-class relations. The approach was applied to a C++ application for which the structural relations among classes led to the extraction of a set of structural design patterns which could be enriched with non structural information about class members and method invocations. The resulting patterns could be interpreted as meaningful organizations aimed at solving general problems which have several instances in the analyzed application.},
    }
  • G. Antoniol, A. Potrich, P. Tonella, and R. Fiutem, “Evolving object oriented design to improve code traceability,” in Iwpc, 1999, p. 151-.
    [Bibtex]
    @inproceedings{conf/iwpc/AntoniolPTF99,
    author = {Giuliano Antoniol and Alessandra Potrich and Paolo Tonella and Roberto Fiutem},
    title = {Evolving Object Oriented Design to Improve Code Traceability},
    booktitle = {IWPC},
    year = {1999},
    pages = {151-},
    ee = {http://computer.org/proceedings/iwpc/0179/01790151abs.htm},
    crossref = {DBLP:conf/iwpc/1999},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • S. Lapierre, E. Merlo, G. Savard, G. Antoniol, R. Fiutem, and P. Tonella, “Automatic unit test data generation using mixed-integer linear programming and execution trees,” in Icsm, 1999, pp. 189-198.
    [Abstract]

    This paper presents an approach to automatic unit test data generation for branch coverage using mixed-integer linear programming, execution trees and symbolic execution. This approach can be useful to both general testing and regression testing after software maintenance and reengineering activities. Several strategies, including original algorithms to move towards practical test data generation, have been investigated in this paper. Methods include: the analysis of minimum path-length partial execution trees for unconstrained arcs, thus increasing the generation performance and reducing the difficulties originated by infeasible paths; the reduction of the difficulties originated by non-linear path conditions by considering alternative linear paths; the reduction of the number of test cases which are needed to achieve the desired coverage, based on the concept of unconstrained arcs in a control flow graph; the extension of symbolic execution to deal with dynamic memory allocation and deallocation, pointers and pointers to functions. Execution trees are symbolically executed to produce Extended Path Constraints which are then partially mapped by an original algorithm into linear problems whose solutions correspond to the test data to be used as input to cover program branches. Partially mapping this problem into a linear optimization problem avoids infeasible and non-linear path problems if a feasible linear alternate path exists in the same execution tree. The presented approach has been implemented in C++ and tested on C-language programs on a Pentium/Linux system. Preliminary results are encouraging and show that a high percentage of the program branches can be covered by the test data automatically produced. The approach is flexible to branch selection criteria coming from general testing as well as regression testing.

    [Bibtex]

    @inproceedings{conf/icsm/LapierreMSAFT99,
    author = {S{\'e}bastien Lapierre and Ettore Merlo and Gilles Savard and Giuliano Antoniol and Roberto Fiutem and Paolo Tonella},
    title = {Automatic Unit Test Data Generation Using Mixed-Integer Linear Programming and Execution Trees},
    booktitle = {ICSM},
    year = {1999},
    pages = {189-198},
    ee = {http://computer.org/proceedings/icsm/0016/00160189abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {This paper presents an approach to automatic unit test data generation for branch coverage using mixed-integer linear programming, execution trees and symbolic execution. This approach can be useful to both general testing and regression testing after software maintenance and reengineering activities. Several strategies, including original algorithms to move towards practical test data generation, have been investigated in this paper. Methods include: the analysis of minimum path-length partial execution trees for unconstrained arcs, thus increasing the generation performance and reducing the difficulties originated by infeasible paths; the reduction of the difficulties originated by non-linear path conditions by considering alternative linear paths; the reduction of the number of test cases which are needed to achieve the desired coverage, based on the concept of unconstrained arcs in a control flow graph; the extension of symbolic execution to deal with dynamic memory allocation and deallocation, pointers and pointers to functions. Execution trees are symbolically executed to produce Extended Path Constraints which are then partially mapped by an original algorithm into linear problems whose solutions correspond to the test data to be used as input to cover program branches. Partially mapping this problem into a linear optimization problem avoids infeasible and non-linear path problems if a feasible linear alternate path exists in the same execution tree. The presented approach has been implemented in C++ and tested on C-language programs on a Pentium/Linux system. Preliminary results are encouraging and show that a high percentage of the program branches can be covered by the test data automatically produced. The approach is flexible to branch selection criteria coming from general testing as well as regression testing.},
    }
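    As a much-reduced illustration of the linear-programming step described above, the toy below hands a single linear path condition to a solver and reads back an input that covers the corresponding branch. The constraints, bounds and use of SciPy are invented for the example; the paper's approach additionally builds execution trees, performs symbolic execution and handles integrality.
    from scipy.optimize import linprog

    # Path condition for a hypothetical branch:  x + y <= 10  and  x - y >= 2,
    # with 0 <= x, y <= 100.  Any feasible point will do, so the objective is zero.
    res = linprog(c=[0, 0],
                  A_ub=[[1, 1],    #  x + y <= 10
                        [-1, 1]],  # -x + y <= -2   (i.e. x - y >= 2)
                  b_ub=[10, -2],
                  bounds=[(0, 100), (0, 100)])

    if res.success:
        x, y = res.x
        print(f"test input covering the branch: x={x:.0f}, y={y:.0f}")
    else:
        print("path condition infeasible under the linear relaxation")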
  • G. Antoniol, G. Canfora, A. D. Lucia, and E. Merlo, “Recovering code to documentation links in oo systems,” in Wcre, 1999, pp. 136-144.
    [Abstract]

    Software system documentation is almost always expressed informally in natural language and free text. Examples include requirement specifications design documents manual pages system development journals error logs and related maintenance reports. We propose an approach to establish and maintain traceability links between the source code and free text documents. A premise of our work is that programmers use meaningful names for program's items such as functions variables types classes and methods. We believe that the application-domain knowledge that programmers process when writing the code is often captured by the mnemonics for identifiers; therefore the analysis of these mnemonics can help to associate high level concepts with program concepts and vice-versa. In this paper the approach is applied to software written in an object-oriented language namely C++ to trace classes to manual sections.

    [Bibtex]

    @inproceedings{conf/wcre/AntoniolCLM99,
    author = {Giuliano Antoniol and Gerardo Canfora and Andrea De Lucia and Ettore Merlo},
    title = {Recovering Code to Documentation Links in OO Systems},
    booktitle = {WCRE},
    year = {1999},
    pages = {136-144},
    ee = {http://computer.org/proceedings/wcre/0303/03030136abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Software system documentation is almost always expressed informally in natural language and free text. Examples include requirement specifications design documents manual pages system development journals error logs and related maintenance reports. We propose an approach to establish and maintain traceability links between the source code and free text documents. A premise of our work is that programmers use meaningful names for program\'s items such as functions variables types classes and methods. We believe that the application-domain knowledge that programmers process when writing the code is often captured by the mnemonics for identifiers; therefore the analysis of these mnemonics can help to associate high level concepts with program concepts and vice-versa. In this paper the approach is applied to software written in an object-oriented language namely C++ to trace classes to manual sections.},
    }
  • G. Antoniol, G. Canfora, and A. D. Lucia, “Maintaining traceability during object-oriented software evolution: a case study,” in Icsm, 1999, pp. 211-219.
    [Bibtex]
    @inproceedings{conf/icsm/AntoniolCL99,
    author = {Giuliano Antoniol and Gerardo Canfora and Andrea De Lucia},
    title = {Maintaining Traceability During Object-Oriented Software Evolution: A Case Study},
    booktitle = {ICSM},
    year = {1999},
    pages = {211-219},
    ee = {http://computer.org/proceedings/icsm/0016/00160211abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

1998

  • [PDF] G. Antoniol, R. Fiutem, and L. Cristoforetti, “Using metrics to identify design patterns in object-oriented software,” in Ieee metrics, 1998, p. 23-.
    [Abstract]

    Object-Oriented design patterns are an emergent technology: they are reusable micro-architectures high level building blocks. This paper presents a conservative approach based on a multi-stage reduction strategy using OO software metrics and structural properties to extract structural design patterns from OO design or code. Code and design are mapped into an intermediate representation called Abstract Object Language to maintain independence from the programming language and the adopted CASE tools. To assess the effectiveness of the pattern recovery process a portable environment written in Java remotely accessible by means of any WEB browser has been developed. Based on this environment experimental results obtained on public domain and industrial software are discussed in the paper.

    [Bibtex]

    @inproceedings{00731224,
    author = {Giuliano Antoniol and Roberto Fiutem and L. Cristoforetti},
    title = {Using Metrics to Identify Design Patterns in Object-Oriented Software},
    booktitle = {IEEE METRICS},
    year = {1998},
    pages = {23-},
    ee = {http://doi.ieeecomputersociety.org/10.1109/METRIC.1998.731224},
    crossref = {DBLP:conf/metrics/1998},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {1998/00731224.pdf},
    abstract = {Object-Oriented design patterns are an emergent technology: they are reusable micro-architectures high level building blocks. This paper presents a conservative approach based on a multi-stage reduction strategy using OO software metrics and structural properties to extract structural design patterns from OO design or code. Code and design are mapped into an intermediate representation called Abstract Object Language to maintain independence from the programming language and the adopted CASE tools. To assess the effectiveness of the pattern recovery process a portable environment written in Java remotely accessible by means of any WEB browser has been developed. Based on this environment experimental results obtained on public domain and industrial software are discussed in the paper.},
    }
  • R. Fiutem and G. Antoniol, “Identifying design-code inconsistencies in object-oriented software: a case study,” in Icsm, 1998, p. 94-.
    [Abstract]

    Traceability is a key issue to ensure consistency among software artifacts of subsequent phases of the development cycle. However, few works have addressed the theme of tracing object oriented design into its software. This paper presents an approach to check the compliance of OO design with respect to source code. The process works on design artefacts expressed in OMT notation and accepts C++ source code. It recovers an “as is” design from the code, compares the recovered design with the actual design, and helps the user to deal with inconsistency by pointing out regions of code which do not match with design. The recovery process exploits regular expression and edit distance to bridge the gap between code and design. Results as well as considerations related to presentation issues are reported in the paper.

    [Bibtex]

    @inproceedings{conf/icsm/FiutemA98,
    author = {Roberto Fiutem and Giuliano Antoniol},
    title = {Identifying Design-Code Inconsistencies in Object-Oriented Software: A Case Study},
    booktitle = {ICSM},
    year = {1998},
    pages = {94-},
    ee = {http://computer.org/proceedings/icsm/8779/87790094abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Traceability is a key issue to ensure consistency among software artifacts of subsequent phases of the development cycle. However, few works have addressed the theme of tracing object oriented design into its software. This paper presents an approach to check the compliance of OO design with respect to source code. The process works on design artefacts expressed in OMT notation and accepts C++ source code. It recovers an ``as is'' design from the code, compares the recovered design with the actual design, and helps the user to deal with inconsistency by pointing out regions of code which do not match with design. The recovery process exploits regular expression and edit distance to bridge the gap between code and design. Results as well as considerations related to presentation issues are reported in the paper.},
    }
  • G. Caldiera, G. Antoniol, R. Fiutem, and C. J. Lokan, “Definition and experimental evaluation of function points for object-oriented systems,” in Ieee metrics, 1998, p. 167-.
    [Abstract]

    We present a method for estimating the size, and consequently effort and duration, of object oriented software development projects. Different estimates may be made in different phases of the development process, according to the available information. We define an adaptation of traditional function points, called Object Oriented Function Points, to enable the measurement of object oriented analysis and design specifications. Tools have been constructed to automate the counting method. The novel aspect of our method is its flexibility. An organisation can experiment with different counting policies, to find the most accurate predictors of size, effort, etc. in its environment. The method and preliminary results of its application in an industrial environment are presented and discussed.

    [Bibtex]

    @inproceedings{conf/metrics/CaldieraAFL98,
    author = {Gianluigi Caldiera and Giuliano Antoniol and Roberto Fiutem and Christopher J. Lokan},
    title = {Definition and Experimental Evaluation of Function Points for Object-Oriented Systems},
    booktitle = {IEEE METRICS},
    year = {1998},
    pages = {167-},
    ee = {http://doi.ieeecomputersociety.org/10.1109/METRIC.1998.731242},
    crossref = {DBLP:conf/metrics/1998},
    abstract = {
    We present a method for estimating the size, and consequently effort and duration, of object oriented software development projects. Different estimates may be made in different phases of the development process, according to the available information. We define an adaptation of traditional function points, called Object Oriented Function Points, to enable the measurement of object oriented analysis and design specifications. Tools have been constructed to automate the counting method. The novel aspect of our method is its flexibility. An organisation can experiment with different counting policies, to find the most accurate predictors of size, effort, etc. in its environment. The method and preliminary results of its application in an industrial environment are presented and discussed.
    },
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • F. Calzolari, P. Tonella, and G. Antoniol, “Modeling maintenance effort by means of dynamic systems,” in Csmr, 1998, pp. 150-156.
    [Abstract]

    The dynamic evolution of ecological systems in which predators and preys compete for surviving has been investigated by applying suitable mathematical models. Dynamic systems theory provides a useful way to model interspecie competition and thus the evolution of predators and preys populations. This kind of mathematical framework has been shown to be well suited to describe evolution of economical systems as well, where instead of predators and preys there are consumers and resources. This paper suggests how dynamic systems could be usefully applied to the maintenance context, namely to model the dynamic evolution of maintenance effort. When maintainers start trying to recognize and correct code defects, while the number of residual defects decreases, the effort spent to find out any new defect has an initial increase followed by a decline, in a similar way as preys and predators populations do. The feasibility of this approach is supported by the experimental data about a 67-month maintenance task of a software project and its successive releases.

    [Bibtex]

    @inproceedings{conf/csmr/CalzolariTA98,
    author = {F. Calzolari and Paolo Tonella and Giuliano Antoniol},
    title = {Modeling Maintenance Effort by Means of Dynamic Systems},
    booktitle = {CSMR},
    year = {1998},
    pages = {150-156},
    ee = {http://dx.doi.org/10.1109/CSMR.1998.665787, http://doi.ieeecomputersociety.org/10.1109/CSMR.1998.665787},
    crossref = {DBLP:conf/csmr/1998},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {The dynamic evolution of ecological systems in which predators and preys compete for surviving has been investigated by applying suitable mathematical models. Dynamic systems theory provides a useful way to model interspecie competition and thus the evolution of predators and preys populations. This kind of mathematical framework has been shown to be well suited to describe evolution of economical systems as well, where instead of predators and preys there are consumers and resources. This paper suggests how dynamic systems could be usefully applied to the maintenance context, namely to model the dynamic evolution of maintenance effort. When maintainers start trying to recognize and correct code defects, while the number of residual defects decreases, the effort spent to find out any new defect has an initial increase followed by a decline, in a similar way as preys and predators populations do. The feasibility of this approach is supported by the experimental data about a 67-month maintenance task of a software project and its successive releases.},
    }
  • G. Antoniol, F. Calzolari, L. Cristoforetti, R. Fiutem, and G. Caldiera, “Adapting function points to object-oriented information systems,” in Caise, 1998, pp. 59-76.
    [Bibtex]
    @inproceedings{conf/caise/AntoniolCCFC98,
    author = {Giuliano Antoniol and F. Calzolari and L. Cristoforetti and Roberto Fiutem and Gianluigi Caldiera},
    title = {Adapting Function Points to Object-Oriented Information Systems},
    booktitle = {CAiSE},
    year = {1998},
    pages = {59-76},
    ee = {http://dx.doi.org/10.1007/BFb0054219},
    crossref = {DBLP:conf/caise/1998},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • F. Calzolari, P. Tonella, and G. Antoniol, “Dynamic model for maintenance and testing effort,” in Icsm, 1998, pp. 104-112.
    [Abstract]

    The dynamic evolution of ecological systems in which predators and prey compete for survival has been investigated by applying suitable mathematical models. Dynamic systems theory provides a useful way to model interspecies competition and thus the evolution of predator and prey populations. This kind of mathematical framework has been shown to be well suited to describing the evolution of economic systems as well, where consumers and resources take the place of predators and prey. Maintenance and testing activities absorb the largest share of the total life-cycle cost of software. Such economic relevance strongly suggests investigating the maintenance and testing processes in order to find new models that allow software engineers to better estimate, plan, and manage costs and activities. In this paper we show how dynamic systems theory can be usefully applied in the maintenance and testing context, namely to model the dynamic evolution of the effort. As programmers start to recognize and correct code defects, and the number of residual defects decreases, the effort spent to find each new defect first increases and then declines, much as prey and predator populations do.

    [Bibtex]

    @inproceedings{conf/icsm/CalzolariTA98,
    author = {F. Calzolari and Paolo Tonella and Giuliano Antoniol},
    title = {Dynamic Model for Maintenance and Testing Effort},
    booktitle = {ICSM},
    year = {1998},
    pages = {104-112},
    ee = {http://computer.org/proceedings/icsm/8779/87790104abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {The dynamic evolution of ecological systems in which predators and prey compete for survival has been investigated by applying suitable mathematical models. Dynamic systems theory provides a useful way to model interspecies competition and thus the evolution of predator and prey populations. This kind of mathematical framework has been shown to be well suited to describing the evolution of economic systems as well, where consumers and resources take the place of predators and prey. Maintenance and testing activities absorb the largest share of the total life-cycle cost of software. Such economic relevance strongly suggests investigating the maintenance and testing processes in order to find new models that allow software engineers to better estimate, plan, and manage costs and activities. In this paper we show how dynamic systems theory can be usefully applied in the maintenance and testing context, namely to model the dynamic evolution of the effort. As programmers start to recognize and correct code defects, and the number of residual defects decreases, the effort spent to find each new defect first increases and then declines, much as prey and predator populations do.},
    }
  • G. Antoniol, R. Fiutem, and L. Cristoforetti, “Design pattern recovery in object-oriented software,” in Iwpc, 1998, p. 153-.
    [Bibtex]
    @inproceedings{conf/iwpc/AntoniolFC98,
    author = {Giuliano Antoniol and Roberto Fiutem and L. Cristoforetti},
    title = {Design Pattern Recovery in Object-Oriented Software},
    booktitle = {IWPC},
    year = {1998},
    pages = {153-},
    ee = {http://dlib2.computer.org/conferen/iwpc/8560/pdf/85600153.pdf},
    crossref = {DBLP:conf/iwpc/1998},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
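
The CSMR and ICSM 1998 entries above model maintenance and testing effort with predator-prey style dynamic systems. Their abstracts do not reproduce the equations; purely as a point of reference, the classical Lotka-Volterra system they evoke can be written as below, where reading x as the residual defects (the prey) and y as the defect-finding effort (the predator) is only an illustrative mapping, not necessarily the one used in the papers:

    \frac{dx}{dt} = \alpha x - \beta x y   % prey: residual defects
    \frac{dy}{dt} = \delta x y - \gamma y  % predator: defect-finding effort

With positive coefficients, y first grows while x is plentiful and then declines as x is driven down, which is the qualitative effort curve both abstracts describe.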

1997

  • [PDF] G. Antoniol, R. Fiutem, G. Lutteri, P. Tonella, S. Zanfei, and E. Merlo, “Program understanding and maintenance with the canto environment,” in Icsm, 1997, p. 72-.
    [Abstract]

    During maintenance activities, the availability of integrated conceptual views that present software at different levels of abstraction, from software architecture down to control- and data-flow relations at code level, is fundamental to understanding and modifying legacy systems. This paper presents CANTO, a comprehensive program understanding and maintenance environment that integrates fine-grained information with architectural views extracted from source code, giving the user control over what is computed by the analyses. The capabilities and usefulness of CANTO are illustrated with reference to a real understanding and maintenance task.

    [Bibtex]

    @inproceedings{05726937,
    author = {Giuliano Antoniol and Roberto Fiutem and G. Lutteri and Paolo Tonella and S. Zanfei and Ettore Merlo},
    title = {Program Understanding and Maintenance with the CANTO Environment},
    booktitle = {ICSM},
    year = {1997},
    pages = {72-},
    ee = {http://doi.ieeecomputersociety.org/10.1109/ICSM.1997.624233},
    crossref = {DBLP:conf/icsm/1997},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {1997/05726937.pdf},
    abstract = {During maintenance activities, the availability of integrated conceptual views that present software at different levels of abstraction, from software architecture down to control- and data-flow relations at code level, is fundamental to understanding and modifying legacy systems. This paper presents CANTO, a comprehensive program understanding and maintenance environment that integrates fine-grained information with architectural views extracted from source code, giving the user control over what is computed by the analyses. The capabilities and usefulness of CANTO are illustrated with reference to a real understanding and maintenance task.},
    }
  • [PDF] P. Tonella, G. Antoniol, R. Fiutem, and E. Merlo, “Flow insensitive c++ pointers and polymorphism analysis and its application to slicing,” in Icse, 1997, pp. 433-443.
    [Abstract]

    Large software systems are difficult to understand and maintain. Code analysis tools can provide programmers with different views of the software, which may help their understanding activity. To be applicable to real programs written in modern programming languages, these tools need to handle pointers efficiently. In the case of C++ analysis, object-oriented peculiarities (e.g., polymorphism) have to be accounted for as well. We propose a flow-insensitive, context-insensitive points-to analysis capable of dealing with the features of object-oriented code. It is extremely promising because of the positive trade-off between complexity and accuracy. The integration of the points-to results with other analyses, such as reaching definitions and slicing, is also discussed in the context of our program understanding environment.

    [Bibtex]

    @inproceedings{p433-tonella,
    author = {Paolo Tonella and Giuliano Antoniol and Roberto Fiutem and Ettore Merlo},
    title = {Flow Insensitive C++ Pointers and Polymorphism Analysis and its Application to Slicing},
    booktitle = {ICSE},
    year = {1997},
    pages = {433-443},
    ee = {http://doi.acm.org/10.1145/253228.253371},
    crossref = {DBLP:conf/icse/1997},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    pdf = {1997/p433-tonella.pdf},
    abstract = {Large software systems are difficult to understand and maintain. Code analysis tools can provide programmers with different views of the software, which may help their understanding activity. To be applicable to real programs written in modern programming languages, these tools need to handle pointers efficiently. In the case of C++ analysis, object-oriented peculiarities (e.g., polymorphism) have to be accounted for as well. We propose a flow-insensitive, context-insensitive points-to analysis capable of dealing with the features of object-oriented code. It is extremely promising because of the positive trade-off between complexity and accuracy. The integration of the points-to results with other analyses, such as reaching definitions and slicing, is also discussed in the context of our program understanding environment.},
    }
  • P. Tonella, G. Antoniol, R. Fiutem, and E. Merlo, “Points-to analysis for program understanding,” in Wpc, 1997, p. 90-.
    [Abstract]

    Program understanding activities are more difficult for programs written in languages (such as C) that make heavy use of pointers for data structure manipulation, because the programmer needs to build a mental model of the memory use and of the pointers to its locations. Pointers also pose additional problems for the tools supporting program understanding, since they introduce additional dependences that have to be accounted for. This paper extends the flow-insensitive, context-insensitive points-to analysis algorithm proposed by Steensgaard to cover arbitrary combinations of pointer dereferences, array subscripts, and field selections. It exhibits interesting properties, among which are scalability, resulting from its low complexity, and good performance. The results of the analysis are valuable by themselves, as their graphical display represents the points-to links between locations. They are also integrated with other program understanding techniques, such as call graph construction, slicing, plan recognition, and architectural recovery. The use of this algorithm in the framework of the program understanding environment CANTO is discussed.

    [Bibtex]

    @inproceedings{conf/iwpc/TonellaAFM97,
    author = {Paolo Tonella and Giuliano Antoniol and Roberto Fiutem and Ettore Merlo},
    title = {Points-to Analysis for Program Understanding},
    booktitle = {WPC},
    year = {1997},
    pages = {90-},
    ee = {http://computer.org/proceedings/wpc/7993/79930090abs.htm},
    crossref = {DBLP:conf/iwpc/1997},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {Program understanding activities are more difficult for programs written in languages (such as C) that make heavy use of pointers for data structure manipulation, because the programmer needs to build a mental model of the memory use and of the pointers to its locations. Pointers also pose additional problems for the tools supporting program understanding, since they introduce additional dependences that have to be accounted for. This paper extends the flow-insensitive, context-insensitive points-to analysis algorithm proposed by Steensgaard to cover arbitrary combinations of pointer dereferences, array subscripts, and field selections. It exhibits interesting properties, among which are scalability, resulting from its low complexity, and good performance. The results of the analysis are valuable by themselves, as their graphical display represents the points-to links between locations. They are also integrated with other program understanding techniques, such as call graph construction, slicing, plan recognition, and architectural recovery. The use of this algorithm in the framework of the program understanding environment CANTO is discussed.},
    }
  • P. Tonella, G. Antoniol, R. Fiutem, and E. Merlo, “Variable precision reaching definitions analysis for software maintenance,” in Csmr, 1997, pp. 60-67.
    [Abstract]

    A flow analyzer can be very helpful in the process of program understanding by providing the programmer with different views of the code. As the documentation is often incomplete or inconsistent, it is extremely useful to extract the information a programmer may need directly from the code. Program understanding activities are interactive, so program analysis tools may be asked for quick answers by the maintainer. Therefore, control over the trade-off between accuracy and efficiency should be given to the user. This paper presents an approach to interprocedural reaching-definitions flow analysis based on three levels of precision, depending on the sensitivity to the calling context and to the control flow. A lower precision degree produces an overestimate of the data dependences in a program. The result is nevertheless conservative (all dependences which hold are surely reported) and definitely faster than the more accurate counterparts. A tool supporting reaching-definition analysis in the three variants has been developed. The results on a test suite show that three orders of magnitude can be gained in execution time by the less accurate analysis, but 57.4% extra dependences are added on average. The intermediate variant is much more precise (1.6% extra dependences) but gains less in time (one order of magnitude).

    [Bibtex]

    @inproceedings{conf/csmr/TonellaAFM97,
    author = {Paolo Tonella and Giuliano Antoniol and Roberto Fiutem and Ettore Merlo},
    title = {Variable Precision Reaching Definitions Analysis for Software Maintenance},
    booktitle = {CSMR},
    year = {1997},
    pages = {60-67},
    ee = {http://dx.doi.org/10.1109/CSMR.1997.583007, http://doi.ieeecomputersociety.org/10.1109/CSMR.1997.583007},
    crossref = {DBLP:conf/csmr/1997},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {A flow analyzer can be very helpful in the process of program understanding by providing the programmer with different views of the code. As the documentation is often incomplete or inconsistent, it is extremely useful to extract the information a programmer may need directly from the code. Program understanding activities are interactive, so program analysis tools may be asked for quick answers by the maintainer. Therefore, control over the trade-off between accuracy and efficiency should be given to the user. This paper presents an approach to interprocedural reaching-definitions flow analysis based on three levels of precision, depending on the sensitivity to the calling context and to the control flow. A lower precision degree produces an overestimate of the data dependences in a program. The result is nevertheless conservative (all dependences which hold are surely reported) and definitely faster than the more accurate counterparts. A tool supporting reaching-definition analysis in the three variants has been developed. The results on a test suite show that three orders of magnitude can be gained in execution time by the less accurate analysis, but 57.4\% extra dependences are added on average. The intermediate variant is much more precise (1.6\% extra dependences) but gains less in time (one order of magnitude).},
    }
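
The 1997 ICSE and WPC entries above build on Steensgaard-style flow-insensitive, context-insensitive points-to analysis. As a rough illustration of the underlying idea only (unification of abstract locations via union-find), and not of the papers' actual C/C++ algorithms, a minimal Python sketch could look as follows; the class and function names, and the restriction to the two statement forms p = &x and p = q, are assumptions made for the example:

    # Minimal sketch of unification-based (Steensgaard-style) points-to analysis.
    class Var:
        def __init__(self, name):
            self.name = name
            self.parent = self      # union-find parent
            self.pointee = None     # single abstract location this class may point to

    def find(v):
        while v.parent is not v:
            v.parent = v.parent.parent  # path halving
            v = v.parent
        return v

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra is rb:
            return ra
        rb.parent = ra
        # keep the merged class consistent by also unifying its pointees
        if ra.pointee is not None and rb.pointee is not None:
            union(ra.pointee, rb.pointee)
        elif rb.pointee is not None:
            ra.pointee = rb.pointee
        return ra

    def address_of(p, x):           # statement: p = &x
        r = find(p)
        if r.pointee is None:
            r.pointee = x
        else:
            union(r.pointee, x)

    def copy(p, q):                 # statement: p = q
        rp, rq = find(p), find(q)
        if rp.pointee is not None and rq.pointee is not None:
            union(rp.pointee, rq.pointee)
        elif rq.pointee is not None:
            rp.pointee = rq.pointee
        elif rp.pointee is not None:
            rq.pointee = rp.pointee

    # p = &x; q = &y; p = q  ==>  x and y end up in the same points-to class
    x, y, p, q = Var("x"), Var("y"), Var("p"), Var("q")
    address_of(p, x); address_of(q, y); copy(p, q)
    print(find(x) is find(y))       # True

Unification is what gives the near-linear complexity the WPC abstract alludes to: every statement is processed once, at the price of merging the points-to sets of aliased locations.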

1996

  • P. Tonella, R. Fiutem, G. Antoniol, and E. Merlo, “Augmenting pattern-based architectural recovery with flow analysis: mosaic - a case study,” in Wcre, 1996, pp. 198-207.
    [Bibtex]
    @inproceedings{conf/wcre/TonellaFAM96,
    author = {Paolo Tonella and Roberto Fiutem and Giuliano Antoniol and Ettore Merlo},
    title = {Augmenting Pattern-Based Architectural Recovery with Flow Analysis: Mosaic - A Case Study},
    booktitle = {WCRE},
    year = {1996},
    pages = {198-207},
    ee = {http://computer.org/proceedings/wcre/7674/76740198abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • R. Fiutem, P. Tonella, G. Antoniol, and E. Merlo, “A cliche-based environment to support architectural reverse engineering,” in Wcre, 1996, pp. 277-286.
    [Bibtex]
    @inproceedings{conf/wcre/FiutemTAM96,
    author = {Roberto Fiutem and Paolo Tonella and Giuliano Antoniol and Ettore Merlo},
    title = {A Cliche-Based Environment to Support Architectural Reverse Engineering},
    booktitle = {WCRE},
    year = {1996},
    pages = {277-286},
    ee = {http://computer.org/proceedings/wcre/7674/76740277abs.htm},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • R. Fiutem, E. Merlo, G. Antoniol, and P. Tonella, “Understanding the architecture of software systems,” in Wpc, 1996, p. 187-.
    [Bibtex]
    @inproceedings{conf/iwpc/FiutemMAT96,
    author = {Roberto Fiutem and Ettore Merlo and Giuliano Antoniol and Paolo Tonella},
    title = {Understanding the architecture of software systems},
    booktitle = {WPC},
    year = {1996},
    pages = {187-},
    ee = {http://computer.org/proceedings/wpc/7283/72830187abs.htm},
    crossref = {DBLP:conf/iwpc/1996},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • R. Fiutem, P. Tonella, G. Antoniol, and E. Merlo, “A cliche’-based environment to support architectural reverse engineering,” in Icsm, 1996, pp. 319-328.
    [Bibtex]
    @inproceedings{conf/icsm/FiutemTAM96,
    author = {Roberto Fiutem and Paolo Tonella and Giuliano Antoniol and Ettore Merlo},
    title = {A Cliche'-Based Environment to Support Architectural Reverse Engineering},
    booktitle = {ICSM},
    year = {1996},
    pages = {319-328},
    ee = {http://computer.org/proceedings/icsm/7677/76770319abs.htm},
    crossref = {DBLP:conf/icsm/1996},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

1995

  • G. Antoniol, R. Fiutem, E. Merlo, and P. Tonella, “Application and user interface migration from basic to visual c++,” in Icsm, 1995, p. 76-.
    [Abstract]

    In this paper an approach to reengineering BASIC PC legacy code into modern graphical systems is proposed. BASIC has historically been one of the first languages available on PCs. Based on it, small and medium-sized companies have, over time, developed systems that represent valuable company assets to be preserved. Our goal is the automatic migration from the BASIC character-oriented user interface to a graphical environment which includes a GUI builder and compiles event-driven C/C++ code. For this purpose, a conceptual representation in terms of abstract graphical objects and callbacks was inferred from the original code, and a translator from BASIC to C was developed. Moreover, the GUI builder internal representation was generated so that the user interface can be interactively fine-tuned by the programmer. We present and discuss BASIC peculiarities, with preliminary results on code translation. To explain our approach to user interface migration, an example is used throughout the text.

    [Bibtex]

    @inproceedings{conf/icsm/AntoniolFMT95,
    author = {Giuliano Antoniol and Roberto Fiutem and Ettore Merlo and Paolo Tonella},
    title = {Application and user interface migration from BASIC to Visual C++},
    booktitle = {ICSM},
    year = {1995},
    pages = {76-},
    ee = {http://computer.org/proceedings/icsm/7141/71410076abs.htm},
    crossref = {DBLP:conf/icsm/1995},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    abstract = {In this paper an approach to reengineering BASIC PC legacy code into modern graphical systems is proposed. BASIC has historically been one of the first languages available on PCs. Based on it, small and medium-sized companies have, over time, developed systems that represent valuable company assets to be preserved. Our goal is the automatic migration from the BASIC character-oriented user interface to a graphical environment which includes a GUI builder and compiles event-driven C/C++ code. For this purpose, a conceptual representation in terms of abstract graphical objects and callbacks was inferred from the original code, and a translator from BASIC to C was developed. Moreover, the GUI builder internal representation was generated so that the user interface can be interactively fine-tuned by the programmer. We present and discuss BASIC peculiarities, with preliminary results on code translation. To explain our approach to user interface migration, an example is used throughout the text.},
    }

1994

  • G. Antoniol, F. Brugnara, M. Cettolo, and M. Federico, “Language model estimations and representations for real-time continuous speech recognition,” in Icslp, 1994.
    [Abstract]

    This paper compares different ways of estimating bigram language models and of representing them in a finite state network used by a beam-search based, continuous speech, and speaker independent HMM recognizer. Attention is focused on the n-gram interpolation scheme for which seven models are considered. Among them, the Stacked estimated linear interpolated model favourably compares with the best known ones. Further, two different static representations of the search space are investigated: “linear” and “tree-based”. Results show that the latter topology is better suited to the beam-search algorithm. Moreover, this representation can be reduced by a network optimization technique, which allows the dynamic size of the recognition process to be decreased by 60%. Extensive recognition experiments on a 10,000-word dictation task with four speakers are described in which an average word accuracy of 93% is achieved with real-time response.

    [Bibtex]

    @inproceedings{conf/interspeech/AntoniolBCF94,
    author = {Giuliano Antoniol and Fabio Brugnara and Mauro Cettolo and Marcello Federico},
    title = {Language model estimations and representations for real-time continuous speech recognition},
    booktitle = {ICSLP},
    year = {1994},
    ee = {http://www.isca-speech.org/archive/icslp_1994/i94_0859.html},
    crossref = {DBLP:conf/interspeech/1994},
    abstract = {
    This paper compares different ways of estimating bigram language models and of representing them in a finite state network used by a beam-search based, continuous speech, and speaker independent HMM recognizer. Attention is focused on the n-gram interpolation scheme for which seven models are considered. Among them, the Stacked estimated linear interpolated model favourably compares with the best known ones. Further, two different static representations of the search space are investigated: “linear” and “tree-based”. Results show that the latter topology is better suited to the beam-search algorithm. Moreover, this representation can be reduced by a network optimization technique, which allows the dynamic size of the recognition process to be decreased by 60\%. Extensive recognition experiments on a 10,000-word dictation task with four speakers are described in which an average word accuracy of 93\% is achieved with real-time response.
    },
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • B. Angelini, G. Antoniol, F. Brugnara, M. Cettolo, M. Federico, R. Fiutem, and G. Lazzari, “Radiological reporting by speech recognition: the a.re.s. system,” in Icslp, 1994.
    [Bibtex]
    @inproceedings{conf/interspeech/AngeliniABCFFL94,
    author = {Bianca Angelini and Giuliano Antoniol and Fabio Brugnara and Mauro Cettolo and Marcello Federico and Roberto Fiutem and Gianni Lazzari},
    title = {Radiological reporting by speech recognition: the a.re.s. system},
    booktitle = {ICSLP},
    year = {1994},
    ee = {http://www.isca-speech.org/archive/icslp_1994/i94_1267.html},
    crossref = {DBLP:conf/interspeech/1994},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
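
The ICSLP 1994 entry on language model estimation above compares several bigram estimation and interpolation schemes; none of them is reproduced here, but a minimal linearly interpolated bigram estimator sketched in Python conveys the basic idea. The toy corpus, the fixed interpolation weight, and the add-one unigram smoothing are assumptions of the example, not details taken from the paper:

    # Toy linearly interpolated bigram language model (illustration only;
    # not one of the seven estimation schemes compared in the paper).
    from collections import Counter

    def train_interpolated_bigram(corpus, lam=0.7):
        """corpus: list of token lists; returns P(w | prev) = lam*P_bigram + (1-lam)*P_unigram."""
        unigrams = Counter(w for sent in corpus for w in sent)
        bigrams = Counter(pair for sent in corpus for pair in zip(sent, sent[1:]))
        total = sum(unigrams.values())
        vocab = len(unigrams)

        def prob(prev, w):
            p_uni = (unigrams[w] + 1) / (total + vocab)   # add-one smoothed unigram
            p_bi = bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0
            return lam * p_bi + (1 - lam) * p_uni         # linear interpolation

        return prob

    corpus = [["the", "report", "is", "ready"], ["the", "report", "follows"]]
    p = train_interpolated_bigram(corpus)
    print(p("the", "report"))   # high: "report" always follows "the" in the toy corpus

In a recognizer such a model would then be compiled into the finite state (linear or tree-based) network that the paper evaluates for beam search.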

1993

  • G. Antoniol, M. Cettolo, and M. Federico, “Techniques for robust recognition in restricted domains,” in Eurospeech, 1993.
    [Abstract]

    This paper describes an Automatic Speech Understanding (ASU) system used in a human-robot interface for the remote control of a mobile robot. The intended application is that of an operator issuing telecontrol commands to one or more robots from a remote workstation. ASU is supposed to be performed with spontaneous continuous speech under quasi real-time conditions. Training and testing of the system were based on speech data collected by means of Wizard of Oz simulations. Two kinds of robustness factors are introduced: the first is a recognition error-tolerant approach to semantic interpretation; the second is based on a technique for evaluating the reliability of the ASU system output with respect to the input utterance. Preliminary results are 90.9% of correct semantic interpretations, and 89.1% of correct detection of out-of-domain sentences at the cost of rejecting 16.4% of correct in-domain sentences.

    [Bibtex]

    @inproceedings{conf/interspeech/AntoniolCF93,
    author = {Giuliano Antoniol and Mauro Cettolo and Marcello Federico},
    title = {Techniques for robust recognition in restricted domains},
    booktitle = {EUROSPEECH},
    year = {1993},
    ee = {http://www.isca-speech.org/archive/eurospeech_1993/e93_2219.html},
    crossref = {DBLP:conf/interspeech/1993},
    abstract = {
    This paper describes an Automatic Speech Understanding (ASU) system used in a human-robot interface for the remote control of a mobile robot. The intended application is that of an operator issuing telecontrol commands to one or more robots from a remote workstation. ASU is supposed to be performed with spontaneous continuous speech under quasi real-time conditions. Training and testing of the system were based on speech data collected by means of Wizard of Oz simulations. Two kinds of robustness factors are introduced: the first is a recognition error-tolerant approach to semantic interpretation; the second is based on a technique for evaluating the reliability of the ASU system output with respect to the input utterance. Preliminary results are 90.9\% of correct semantic interpretations, and 89.1\% of correct detection of out-of-domain sentences at the cost of rejecting 16.4\% of correct in-domain sentences.
    },
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, R. Fiutem, R. Flor, and G. Lazzari, “Radiological reporting based on voice recognition,” in Ewhci, 1993, pp. 242-253.
    [Bibtex]
    @inproceedings{conf/ewhci/AntoniolFFL93,
    author = {Giuliano Antoniol and Roberto Fiutem and R. Flor and Gianni Lazzari},
    title = {Radiological Reporting Based on Voice Recognition},
    booktitle = {EWHCI},
    year = {1993},
    pages = {242-253},
    ee = {http://dx.doi.org/10.1007/3-540-57433-6_53},
    crossref = {DBLP:conf/ewhci/1993},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }

1991

  • G. Antoniol, F. Brugnara, and D. Giuliani, “Admissible strategies for acoustic matching with a large vocabulary,” in Eurospeech, 1991.
    [Bibtex]
    @inproceedings{conf/interspeech/AntoniolBG91,
    author = {Giuliano Antoniol and Fabio Brugnara and Diego Giuliani},
    title = {Admissible strategies for acoustic matching with a large vocabulary},
    booktitle = {EUROSPEECH},
    year = {1991},
    ee = {http://www.isca-speech.org/archive/eurospeech_1991/e91_0589.html},
    crossref = {DBLP:conf/interspeech/1991},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }
  • G. Antoniol, F. Brugnara, F. Dalla Palma, G. Lazzari, and E. Moser, “A.Re.S.: an interface for automatic reporting by speech,” in Eurospeech, 1991.
    [Bibtex]
    @inproceedings{conf/interspeech/AntoniolBPLM91,
    author = {Giuliano Antoniol and Fabio Brugnara and F. Dalla Palma and Gianni Lazzari and E. Moser},
    title = {A.Re.S.: an interface for automatic reporting by speech},
    booktitle = {EUROSPEECH},
    year = {1991},
    ee = {http://www.isca-speech.org/archive/eurospeech_1991/e91_0973.html},
    crossref = {DBLP:conf/interspeech/1991},
    bibsource = {DBLP, http://dblp.uni-trier.de},
    }