1st India Software Engineering Conference
Feb 19-22, 2008




  • Testing Evolving Software: Current Practice and Future Promise - Mary Jean Harrold - Dr. Harrold is the ADVANCE Professor of Computing in the College of Computing at the Georgia Institute of Technology. She is general chair of the ACM SIGSOFT Symposium on the Foundations of Software Engineering (2008).

    Testing is the most common way to increase confidence in the correctness and reliability of software. Studies report that testing consumes about half the cost of software development. Studies also show that maintenance can consume up to 80% of the cost of the entire software lifecycle, and much of that cost is devoted to testing. Rapidly changing software and computing environments present many challenges for effective and efficient testing in practice. Past research in testing of evolving software has resulted in techniques that attempt to automate or partially automate the process. Although few of these techniques have been successfully transferred to practice, existing techniques show promise for use in industry. By combining program analysis, machine learning, and visualization techniques, we can expect significant improvements in the process of testing evolving software, reducing cost and improving quality.

    Mary Jean Harrold is the ADVANCE Professor of Computing at the Georgia Institute of Technology. She performs research in analysis and testing of large, evolving software; fault localization and failure identification using statistical analysis, machine learning, and visualization; monitoring of deployed software to improve quality; and software self-awareness through real-time assessment and response. Professor Harrold received an NSF NYI Award and was named an ACM Fellow. She serves on the editorial board of ACM TOSEM, on the Board of Directors of the Computing Research Association (CRA), and on the CRA Committee on the Status of Women in Computing (CRA-W). She received her Ph.D. from the University of Pittsburgh.

  • Learning from Software - Andreas Zeller - Professor Zeller holds the software engineering chair at Saarland University, Department of Informatics, Saarbrücken, Germany. He is the author of the book "Why Programs Fail", which won the Software Development Jolt Productivity Award.

    During software development and maintenance, programmers conduct several activities—tracking bug reports, changing the software, discussing features, or running tests. As more and more of these activities are organized using tools, they leave data behind that is automatically accessible in software archives such as change or bug databases. By data mining these archives, one can leverage the resulting patterns and rules to increase program quality and programmer productivity.

    Analyzing software engineering data is, of course, a standard practice in empirical software engineering. What is new, though, is that we can now automate current empirical approaches. This leads to automated assistance in all development decisions for programmers and managers alike: “For this task, you should collaborate with Joe, because it will likely require risky work on the ‘Mailbox’ class.”

    Andreas Zeller is a computer science professor at Saarland University. He researches large programs and their history, and has developed a number of methods to determine the causes of program failures, both in open-source programs and in industrial contexts at IBM, Microsoft, SAP, and others. His book "Why Programs Fail" received the Software Development Magazine Productivity Award in 2006.
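The archive mining described in this talk can be illustrated with a small sketch. The commit data below is hypothetical, and real mining runs over change and bug databases with far richer rule mining, but the core co-change idea looks roughly like this:

```python
from collections import Counter
from itertools import combinations

# Hypothetical commit history: each commit is the set of files it touched.
commits = [
    {"Mailbox.java", "Folder.java"},
    {"Mailbox.java", "Folder.java", "Message.java"},
    {"Parser.java"},
    {"Mailbox.java", "Folder.java"},
    {"Folder.java", "Message.java"},
]

# Count how often each pair of files changes together.
co_changes = Counter()
for files in commits:
    for pair in combinations(sorted(files), 2):
        co_changes[pair] += 1

# Frequently co-changing files are candidates for rules such as
# "developers who changed Mailbox also changed Folder".
for (a, b), n in co_changes.most_common(3):
    print(f"{a} <-> {b}: {n} co-changes")
```

From counts like these, a tool can suggest likely collaborators and risky files for a given task, which is the kind of automated assistance the talk envisions.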

Special ICSE/FSE session talks

  • Mining Specifications of Malicious Behavior - Mihai Christodorescu (IBM Research), Somesh Jha (University of Wisconsin), Christopher Kruegel (Technical University)

    Malware detectors require a specification of malicious behavior. Typically, these specifications are manually constructed by investigating known malware. We present an automatic technique to overcome this laborious manual process. Our technique derives such a specification by comparing the execution behavior of a known malware against the execution behaviors of a set of benign programs. In other words, we mine the malicious behavior present in a known malware that is not present in a set of benign programs. The output of our algorithm can be used by malware detectors to detect malware variants. Since our algorithm provides a succinct description of malicious behavior present in a malware, it can also be used by security analysts for understanding the malware. We have implemented a prototype based on our algorithm and tested it on several malware programs. Experimental results obtained from our prototype indicate that our algorithm is effective in extracting malicious behaviors that can be used to detect malware variants.
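As a rough illustration of the mining step (not the authors' actual algorithm, which works over richer behavior representations than plain event sequences), consider hypothetical system-call traces: behaviors that occur in the malware trace but in no benign trace form the mined specification.

```python
# Toy sketch: traces are hypothetical sequences of system-call names,
# and a "behavior" is an n-gram of consecutive events.

def behaviors(trace, n=2):
    """All n-grams of consecutive events in a trace."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

malware_trace = ["open", "read", "connect", "send", "write_registry"]
benign_traces = [
    ["open", "read", "close"],
    ["open", "read", "connect", "recv"],
]

# Behaviors seen in any benign run are assumed harmless;
# what remains in the malware trace is the mined specification.
benign_behaviors = set().union(*(behaviors(t) for t in benign_traces))
malicious_spec = behaviors(malware_trace) - benign_behaviors

print(sorted(malicious_spec))
```

A detector could then flag programs whose runs exhibit the mined behaviors, catching variants that share the behavior even if their code differs.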

  • Predicting Faults from Cached History - Sunghun Kim (Massachusetts Institute of Technology), Thomas Zimmermann (Saarland University), E. James Whitehead Jr. (University of California at Santa Cruz), Andreas Zeller (Saarland University)

    We analyze the version history of seven software systems to predict the most fault-prone entities and files. The basic assumption is that faults do not occur in isolation, but rather in bursts of several related faults. Therefore, we cache locations that are likely to have faults: starting from the location of a known (fixed) fault, we cache the location itself, any locations changed together with the fault, recently added locations, and recently changed locations. By consulting the cache at the moment a fault is fixed, a developer can detect likely fault-prone locations. This is useful for prioritizing verification and validation resources on the most fault-prone files or entities. In our evaluation of seven open source projects with more than 200,000 revisions, the cache selects 10% of the source code files; these files account for 73%-95% of the faults, a significant advance beyond the state of the art.
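A minimal sketch of the caching idea, using hypothetical file names and a fixed cache size (the paper's algorithm additionally pre-loads large and recently added entities and tunes the cache size and replacement policy):

```python
from collections import deque

CACHE_SIZE = 3
cache = deque(maxlen=CACHE_SIZE)  # LRU-style: oldest entries fall out

def touch(entity):
    """Move an entity to the most-recent end of the cache."""
    if entity in cache:
        cache.remove(entity)
    cache.append(entity)

def on_change(changed_files):
    """Recently changed locations are considered fault-prone."""
    for f in changed_files:
        touch(f)

def on_fault_fixed(faulty_file, co_changed_files):
    """When a fault is fixed, cache the faulty location and the
    locations changed together with it (locality of faults)."""
    touch(faulty_file)
    for f in co_changed_files:
        touch(f)

on_change(["util.c"])
on_fault_fixed("parser.c", co_changed_files=["lexer.c"])
# A developer consults the cache to prioritize V&V effort:
print(list(cache))
```

The cache contents at any moment are the files most recently implicated by faults and changes, which is what gets prioritized for verification.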

  • Globally Distributed Software Development Project Performance: An Empirical Analysis - Narayan Ramasubbu, Rajesh Krishna Balan (Singapore Management University)

    Software firms are increasingly distributing their software development effort across multiple locations. In this paper we present the results of a two-year field study that investigated the effects of dispersion on the productivity and quality of distributed software development. We first develop a model of distributed software development. We then use the model, along with our empirically observed data, to understand the consequences of dispersion on software project performance. Our analysis reveals that, even in high process maturity environments, a) dispersion significantly reduces development productivity and negatively affects conformance quality, and b) these negative effects of dispersion can be significantly mitigated through the deployment of structured software engineering processes.

  • Tracking Code Clones in Evolving Software - Ekwa Duala-Ekoko, Martin P. Robillard (McGill University)

    Code clones are generally considered harmful in software development, and the predominant approach is to try to eliminate them through refactoring. However, recent research has provided evidence that it may not always be practical, feasible, or cost-effective to eliminate certain clone groups. We propose a technique for tracking clones in evolving software. Our technique relies on the concept of abstract clone region descriptors (CRDs), which describe clone regions within methods in a robust way that is independent of the exact text of the clone region or its location in a file. We present our definition of CRDs, and describe a complete clone tracking system capable of producing CRDs from the output of a clone detection tool, notifying developers of modifications to clone regions, and supporting the simultaneous editing of clone regions. We report on two experiments and a case study conducted to assess the performance and usefulness of our approach.
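To give a feel for why such descriptors survive edits, here is a hypothetical sketch of what a location-independent descriptor might record; the field names are illustrative and not the paper's exact definition:

```python
from dataclasses import dataclass

# Illustrative only: identify a clone region by its syntactic context
# (enclosing class, method, and block nesting) rather than by line
# numbers or exact text, so the descriptor survives edits that move
# or reformat the region.
@dataclass(frozen=True)
class CloneRegionDescriptor:
    file: str               # containing file
    class_name: str         # enclosing class
    method_signature: str   # enclosing method
    block_path: tuple       # chain of enclosing blocks, e.g. ("for", "if")
    anchor: str             # distinguishing text, e.g. a loop condition

crd = CloneRegionDescriptor(
    file="Mailbox.java",
    class_name="Mailbox",
    method_signature="void deliver(Message m)",
    block_path=("for", "if"),
    anchor="m.isUrgent()",
)

# Inserting code above the method shifts every line number in the file,
# yet leaves this descriptor, and hence the tracked region, unchanged.
print(crd)
```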