IEEE Transactions on Evolutionary Computation Special Issue on
Benchmarking Sampling-Based Optimization Heuristics: Methodology and Software (BENCH)
The submission deadline has been extended to September 15, 2021.
The official call is available as a PDF on the TEVC journal web page; its content is largely identical to this page. In any case, please do not hesitate to get in touch if you have any questions about the scope of the special issue, the submission process, or the review process.
Benchmarking plays an important role in the study of evolutionary computation methods and other optimization algorithms. Among other benefits, benchmarking helps us analyze the strengths and weaknesses of different techniques -- knowledge that can be used to design more efficient optimization approaches. Core to benchmarking is a well-designed experimental setup, which ranges from the selection of algorithms, problem instances, and performance metrics, through efficient experimentation, to a sound evaluation of the benchmark data. To assist researchers and users of evolutionary computation methods, a number of tools addressing these various aspects of benchmarking are available. However, most of them are developed in isolation, without integration with existing software in mind. This hinders knowledge transfer between different research groups and between academic and industrial practitioners of evolutionary computation methods.
The goal of this special issue is to provide an overview of state-of-the-art software packages, methods, and data sets that facilitate sound benchmarking of evolutionary algorithms and other optimization techniques. Such an overview of today's benchmarking landscape will reveal new synergies, helping the community converge towards greater compatibility between tools, better reproducibility and replicability of our research, more efficient use of resources, and, ultimately, higher standards in our benchmarking practices.
We welcome submissions on the following topics:
- generation, selection, and analysis of problems and problem instances
- benchmark-driven algorithm design, selection, and analysis
- experimental design
- benchmark data collections and their organization
- performance analysis and visualization, including statistical evaluation
- holistic performance analysis in the context of real-world optimization problems (e.g., algorithm robustness, problem class coverage, implementation complexity)
- other aspects of benchmarking optimization algorithms
We are particularly interested in submissions that present methods, software, and data collections that are applicable beyond a specific use case and that relate to problem classes of wider impact and interest.
Making code, software, and data available in an open-source format is strongly encouraged, but not formally required. Such artifacts must be well documented and made available for public download or, in special cases (in particular for real-world problems), upon request.
We are particularly interested in submissions that discuss how the contributed techniques fit into the existing benchmarking landscape; this includes a discussion of interfaces with other existing software and/or the challenges of implementing such interfaces.
No restriction is placed on the type of optimization problem analyzed; contributions on constrained/unconstrained, noisy/noise-free, static/dynamic, single-/multi-objective, combinatorial/continuous/mixed-integer problems, etc. are equally welcome.
Algorithm comparisons are as much within the scope as methods to analyze problem characteristics or any other component of algorithm benchmarking. We also welcome submissions that compare sampling-based heuristics with other optimization techniques.
NOT within the scope of this special issue are reports that focus mostly on technical aspects, such as descriptions of software architecture or user manuals. That is, submissions should focus on the contribution that the proposed method, data collection, or software makes towards better benchmarking practices in our community.
Manuscripts should be prepared according to the ``Information for Authors'' section of the journal, and submissions should be made through the journal submission website by selecting the Manuscript Type ``BENCH Special Issue Papers'' and clearly adding ``Benchmarking Special Issue Paper'' to the comments to the Editor-in-Chief.
Submission of a manuscript implies that it is the authors' original unpublished work and is not being submitted for possible publication elsewhere.
- Submission opens: January 1, 2021
- Submission deadline: September 15, 2021
- Tentative publication date: Summer 2022
- Thomas Bäck, Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands
- Carola Doerr, CNRS researcher at LIP6, Sorbonne Université, Paris, France
- Bernhard Sendhoff, Honda Research Institute Japan Co., Ltd., Japan
- Thomas Stützle, IRIDIA laboratory, Université libre de Bruxelles (ULB), Belgium