How to deliver pizza faster than ever before using ML (or, Learning-Augmented Algorithms for Online TSP on the Line)
Recently, a research area called Learning-Augmented Algorithms has emerged. The idea is to use possibly erroneous predictions about the input of (mainly) online problems to circumvent the difficulties associated with the uncertainty therein. One wishes for good performance with perfect predictions (consistency), respectable performance with arbitrarily bad predictions (robustness) and some sort of continuous degradation of performance between the two extremes (smoothness). We focus on the following problem. Imagine yourself in the place of a pizza delivery person with n pizza slices which you are to deliver to n hungry people along some street. These people may call you from any point along this street and at any time. You can give a pizza slice to a person after they have called you by moving to their position. However, your motorbike is limited to unit speed. Your goal is to be done with all the deliveries as soon as possible (we don't care about individual waiting times), where 'done' might (closed variant) or might not (open variant) require that you finally return to your point of departure. The abstract version of this problem is called Online TSP on the Line. Performance is measured by the competitive ratio (ratio of completion times) of our algorithm against an optimal offline algorithm (which knows the requests in advance). We augment the setting with predictions about the locations of the requests and obtain consistent, smooth and robust algorithms for both the closed and the open variant. For the open variant, we also define a more extensive prediction model which additionally specifies a request 'best served last', and derive an improved algorithm for this case. For all settings, we also show lower bounds on the consistency (guaranteed competitive ratio with perfect predictions) of any algorithm, the one for the closed variant being tight.
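For reference, a minimal formalization of the three performance notions above (the notation is mine, standard in the learning-augmented literature, and not taken from the talk): writing \(\mathrm{ALG}(\sigma)\) and \(\mathrm{OPT}(\sigma)\) for the completion times of the online and the optimal offline algorithm on request sequence \(\sigma\), and \(\eta\) for the prediction error, one asks for a guarantee of the form

\[
\mathrm{ALG}(\sigma) \;\le\; \min\{\, f(\eta),\; \beta \,\} \cdot \mathrm{OPT}(\sigma) \quad \text{for all } \sigma,
\]

where \(f(0) = \alpha\) is the consistency, \(\beta\) is the robustness, and smoothness asks that \(f\) grow continuously with \(\eta\).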
A Novel Prediction Setup for Online Speed-Scaling
Given the rapid rise in energy demand by data centers and computing systems in general, it is fundamental to incorporate energy considerations when designing (scheduling) algorithms. Machine learning can be a useful approach in practice by predicting the future load of the system based on, for example, historical data. However, the effectiveness of such an approach highly depends on the quality of the predictions and can be quite far from optimal when predictions are sub-par. On the other hand, while providing a worst-case guarantee, classical online algorithms can be pessimistic for large classes of inputs arising in practice. This paper, in the spirit of the new area of machine-learning-augmented algorithms, attempts to obtain the best of both worlds for the classical, deadline-based, online speed-scaling problem: based on the introduction of a novel prediction setup, we develop algorithms that (i) obtain provably low energy consumption in the presence of adequate predictions, (ii) are robust against inadequate predictions, and (iii) are smooth, i.e., their performance degrades gradually as the prediction error increases.
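As background (the abstract does not spell the model out, so this is my addition): in the classical deadline-based speed-scaling model of Yao, Demers and Shenker, each job \(j\) arrives at a release time \(r_j\) with work \(w_j\) and deadline \(d_j\); running the processor at speed \(s\) consumes power \(s^{\alpha}\) for some constant \(\alpha > 1\), and the goal is

\[
\min \int_0^{\infty} s(t)^{\alpha}\,dt \quad \text{subject to every job } j \text{ receiving } w_j \text{ units of work within } [r_j, d_j].
\]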
Pricing for Matching
We study the problem of setting prices for items so as to maximize the number of successful transactions under a specific customer model. The model consists of a given graph whose vertices are the items and whose edges are the customers. Every customer wants to buy exactly two specific items and has a budget of 1 Euro. The customer arrival order is adversarial, hence the objective value is the size of a minimum maximal matching in the graph. By setting prices you can remove edges from the graph: an edge disappears if the total price of its endpoints exceeds 1 Euro. The problem consists in choosing prices so as to maximize the objective value. I will present some results on this problem.
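To make the setup concrete, here is a tiny brute-force sketch of the objective on a toy instance (my illustration, not from the talk; the instance and all names are mine, and the enumeration is only feasible for very small graphs):

    # Minimal sketch of the pricing objective (illustrative only).
    from itertools import combinations

    def surviving_edges(edges, price):
        # A customer (edge) survives iff both desired items fit in 1 Euro.
        return [(u, v) for (u, v) in edges if price[u] + price[v] <= 1]

    def is_maximal_matching(edges, matching):
        matched = {v for e in matching for v in e}
        return all(u in matched or v in matched for (u, v) in edges)

    def min_maximal_matching_size(edges):
        # Adversarial arrival order yields the minimum maximal matching.
        for k in range(len(edges) + 1):
            for subset in combinations(edges, k):
                matched = [v for e in subset for v in e]
                if len(set(matched)) == 2 * k and is_maximal_matching(edges, subset):
                    return k
        return 0

    # Path on 4 items; pricing the middle items at 0.6 removes the middle
    # edge, raising the adversarial objective from 1 to 2.
    edges = [(0, 1), (1, 2), (2, 3)]
    prices = {0: 0.4, 1: 0.6, 2: 0.6, 3: 0.4}
    print(min_maximal_matching_size(surviving_edges(edges, prices)))  # -> 2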
Truthful Matching with Online Items and Offline Agents (https://arxiv.org/abs/2211.02004)
We study truthful mechanisms for online maximum weight bipartite matching. In our setting, every buyer is associated with a (possibly private) desired set of items, and has a private value for being assigned an item in her desired set. Unlike most online matching settings, where agents arrive online, in our setting the items arrive online in an adversarial order while the buyers are present for the entire duration of the process. This poses a significant challenge to the design of truthful mechanisms, due to the ability of buyers to strategize over future rounds. We provide an almost full picture of the competitive ratios in different scenarios, including myopic vs. non-myopic agents, tardy vs. prompt payments, and private vs. public desired sets. Among other results, we identify the frontier for which the celebrated e/(e−1) competitive ratio for the vertex-weighted online matching of Karp, Vazirani and Vazirani extends to truthful agents and online items.
The Domain of Euclidean Preferences
This talk is meant to be a gentle introduction to the domain of Euclidean preferences. We are given a set of voters who express their preferences over a set of candidates. These preferences are called Euclidean if voters and candidates can be embedded in a decision space in such a way that each voter prefers the candidates that are closer to them. But what is that decision space? How do we measure proximity? Can we (easily) recognize a Euclidean profile? And why should every profile definitely want to be Euclidean? These and other points will be discussed. At the end of the talk, you will not have all the answers - better, you will have many questions to think about.
The Canadian Traveller Problem with Predictions
In this talk, we consider the k-Canadian Traveller Problem (k-CTP) under the learning-augmented framework. k-CTP is a generalization of the shortest path problem: a traveller knows the entire graph in advance and wishes to find the shortest route from a source vertex s to a destination vertex t, but discovers online that some edges (up to k) are blocked upon reaching them. In the learning-augmented framework, a potentially imperfect predictor gives us the number and the locations of the blocked edges. We aim to find (deterministic or randomized) online algorithms that achieve the best possible tradeoff between consistency (quality of the solution when the prediction is correct) and robustness (quality of the solution when there are errors in the prediction).
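For intuition, the prediction-free baseline against which such algorithms are compared can be sketched as follows (my illustration, not the talk's algorithm): follow a shortest path under current knowledge, and replan whenever a blocked edge is revealed at the current vertex.

    # Naive "replan" strategy for CTP (illustrative sketch).
    import heapq

    def dijkstra(adj, s, t):
        dist, prev, pq = {s: 0}, {}, [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                break
            if d > dist[u]:
                continue
            for v, w in adj[u].items():
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(pq, (d + w, v))
        if t not in dist:
            return None
        path = [t]
        while path[-1] != s:
            path.append(prev[path[-1]])
        return path[::-1]

    def replan_traveller(adj, blocked, s, t):
        # Blocked edges are revealed only when standing at an endpoint.
        pos, travelled = s, 0
        while pos != t:
            for v in list(adj[pos]):  # remove edges discovered blocked here
                if (pos, v) in blocked or (v, pos) in blocked:
                    del adj[pos][v]; del adj[v][pos]
            path = dijkstra(adj, pos, t)
            if path is None:
                return None
            nxt = path[1]
            travelled += adj[pos][nxt]
            pos = nxt
        return travelled

    # Toy graph as nested dicts {u: {v: weight}}; edge (0, 2) is blocked.
    adj = {0: {1: 1, 2: 1}, 1: {0: 1, 3: 1}, 2: {0: 1, 3: 1}, 3: {1: 1, 2: 1}}
    print(replan_traveller(adj, {(0, 2)}, 0, 3))  # -> 2 (via 0-1-3)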
A Constraint-Based Tool for Generating Benchmark Instances
Benchmarking is fundamental for assessing the relative performance of alternative solving approaches. For an informative benchmark we often need a sufficient quantity of instances with different levels of difficulty, as well as the ability to explore subsets of the instance space to detect performance discrimination among solvers. In this talk, I will present AutoIG, an automated tool for generating benchmark instances for constraint solvers. AutoIG supports generating two types of instances: graded instances (i.e., solvable at a certain difficulty level by a solver) and discriminating instances (i.e., favouring one solver over another). The usefulness of the tool in benchmarking is demonstrated via an application to five problems taken from the MiniZinc Challenges. Our experiments show that the large number of instances found by AutoIG can provide more detailed insights into the performance of the solvers than a mere ranking. Cases where a solver is weak or even faulty can be detected, providing valuable information to solver developers. Moreover, discriminating instances can reveal parts of the instance space where a generally weak solver actually performs well relative to others, and could therefore be useful as part of an algorithm portfolio.
Advanced Mine Optimisation under Uncertainty
In many real-world scenarios it is necessary to handle uncertainty, and potentially disruptive events that violate constraints in stochastic settings need to be avoided. Many evolutionary multi-objective algorithms have recently been analyzed and applied to submodular problems with different types of constraints. This talk will provide insights into the impact of uncertainty in advanced mine optimisation, based on a real-world stochastic optimisation problem, and present approaches that allow one to mitigate the uncertainty in the deposit while maintaining high profitability. We will also show the behavior of evolutionary multi-objective algorithms on different submodular chance-constrained network problems.
Evolutionary Diversity Optimization for Combinatorial Optimization
Diversity plays a crucial role in the area of evolutionary computation. Traditionally, diversity has been used to enable successful crossover operations and to prevent premature convergence. In recent years, computing sets of solutions that have structural or behavioural diversity has gained significant attention under the terms evolutionary diversity optimization and quality diversity. This talk will give an overview of this area of research and point out some recent results. We will cover results in the area of evolutionary diversity optimization for the classical traveling salesperson problem and show how quality diversity approaches can be used to achieve better solutions for the traveling thief problem.
Geometric graphs and density of sets avoiding distance 1
Algorithms for Bayesian Persuasion and Delegated Search
We consider a game of strategic communication with asymmetric information between two parties. The uninformed agent chooses an action whose type is a priori unknown. This action type determines the individual utility of both agents, who might have very different objectives. Hence, the informed agent tries to influence this decision by sending a signal. We study this model under the assumption of commitment power, which allows the agent holding it to credibly commit to an action scheme before the state of nature is realized. In Bayesian persuasion the informed party has commitment power; in delegated search it is the uninformed party. We try to find (near-)optimal action schemes for the respective party with commitment power in several different settings, some of which can be seen as extensions of the classic prophet inequality and secretary problems.
Online Algorithms: A Journey from Theory to Practice
This talk will present an overview of recent advances in studying online algorithms. We will use the classic bin packing problem to illustrate alternative models and analysis techniques that yield practical online algorithms. We review modern bin packing applications, such as fault-tolerant server consolidation, and explore the framework of online algorithms with predictions, which has become increasingly influential in the last few years. General themes in this talk include positive versus negative results, analysis techniques that go beyond worst-case, and bridging the gap between theory and practice.
Design of nonconventional composite shells using lamination parameters
The use of laminated composite materials has been continuously increasing over recent decades. This trend has also given rise to the development of different laminate modeling and optimization methods. One particular technique is based on lamination parameters, which represent the overall stiffness directionality within the structure. This approach eliminates the dependency of the solution on initial assumptions about the laminate configuration and yields convex responses in many cases. It also eases the optimization process by reducing the number of design variables, and provides graphical information via contour plots in the feasible domain. In this talk, I will highlight the advantages and limitations of using lamination parameters in the design of nonconventional composite structures. Then, I will summarize my doctoral and postdoctoral studies on this topic. I will wrap up by outlining the major remaining challenges and potential future directions.
Bayesian Optimization for High-Dimensional Black-Box Optimization
Ontologies for black-box optimization
Black-Box Optimization: From Theory to Practice
After introducing the key notions and performance measures used in black-box optimization, I will summarize a few of our activities around black-box optimization, carried out in collaboration with different members of the RO team. Our research ranges from the theoretical analysis of randomized search heuristics and complexity theory all the way to practical applications carried out in collaboration with academic and industrial institutions.
Collective schedules: How to find a consensus schedule?
Given a profile of v schedules representing the preferences of voters over n tasks, we want to find a consensus schedule that aggregates the preferences of the voters. The preference-aggregation problem has been widely studied when the candidates that the voters rank are all similar. In our context, however, tasks may have different lengths and could therefore be treated differently. We study two classic preference-aggregation rules and extend them to fit our context; we then study their axiomatic and computational properties.
Incorporating Decision-Maker’s Preferences into the Automatic Configuration of Bi-objective Optimisation Algorithms
Automatic configuration (AC) methods are increasingly used to tune and design optimisation algorithms for problems with multiple objectives. Most AC methods use unary quality metrics, such as the hypervolume, to compare the performance of different parameter configurations. These quality metrics, however, imply preferences beyond Pareto-optimality that may differ from those of the decision maker (DM). In this talk, we propose to elicit preferences from the DM by means of the empirical attainment function (EAF) in order to automatically guide the parameter configuration of bi-objective optimisers.
Reference: This talk is based on joint work by J.E. Diaz and M. López-Ibáñez published in the European Journal of Operational Research (https://doi.org/10.1016/j.ejor.2020.07.059).
Rolling-horizon approaches for a stochastic planning problem
We consider a discrete-time problem in which tasks arrive at random, each with a due date, a realization date, a size, and a gain obtained if the task is processed. Time slots have finite capacity, and we would like to maximize the gains brought by the tasks processed over a finite horizon. We propose to solve this problem with a rolling-horizon approach.
A survey of Pandora's boxes, the secretary problem, and prophet inequalities
Multistage optimization
Multistage optimization deals with problems whose data may evolve over time. In this setting, we consider a sequence of instances I_1,...,I_T of an optimization problem and must build a sequence S_1,...,S_T of solutions, taking into account the transition costs incurred when moving from a solution S_t to a solution S_{t+1}. In this talk I will present a few examples of complexity and approximation results obtained in this setting.
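A generic way to formalize this (a hedged formalization consistent with the abstract, not taken from the talk): given per-stage costs \(c_t\) and a transition cost \(d\), one seeks

\[
\min_{S_1,\dots,S_T} \; \sum_{t=1}^{T} c_t(S_t) \;+\; \sum_{t=1}^{T-1} d(S_t, S_{t+1}),
\]

where each \(S_t\) must be a feasible solution for instance \(I_t\).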
Spyros Angelopoulos: Online Algorithms with predictions
I will give a short overview of some recent work in the recently emerged area of enhancing online algorithms with predictions. In particular, I will discuss problems related to online search and online resource allocation.
Nguyen Kim Thang: Primal-Dual Methods in Online Algorithms with Predictions
In this talk, I will show a primal-dual method aiming to incorporate (machine-learning) predictions into online algorithms in order to go beyond the worst case. Applications will be shown. We will end with open questions and potential directions for collaboration.
Scheduling under preventive-maintenance constraints and sequence-dependent setup times to minimize rejection costs or the weighted sum of completion times
Scheduling is considered one of the most important tasks in industry, in particular in production workshops. Its main purpose is to allocate the available resources to tasks over a given period while optimizing one or more objectives, such as minimizing production delays and storage costs. In France, these industries contribute significantly to the regional and national economy, making Hauts-de-France the fourth-largest economic region in the country. To remain competitive, these companies must rely, on the one hand, on a production system that is reliable and available at all times and, on the other hand, on powerful decision-support tools that make it possible to react quickly to any unforeseen situation such as a breakdown, a delay in the delivery of raw materials, order cancellations, etc. Maintenance is another aspect closely linked to production scheduling. One of the most common assumptions in the literature is that machines or resources are always available; in practice, however, they may have to be stopped because of breakdowns or preventive maintenance. Given that machines are an essential part of the production process and that maintenance costs represent a large share of the total operations budget, it is desirable to coordinate maintenance planning and production scheduling well. My research addresses exactly this problem, while also considering other constraints such as sequence-dependent setup times. The main objective of this work was to design and develop optimization methods for decision support applied to scheduling problems with unavailability constraints due to preventive maintenance. The proposed models and tools are validated on simplified academic and industrial problems. This has led to the development of new algorithms and models based on integer linear programming, heuristics, and metaheuristics for solving production scheduling problems.
Two-Sided Matching Markets with Correlated Random Preferences
Stable matching in a community consisting of men and women is a classical combinatorial problem that has been the subject of intense theoretical and empirical study since its introduction in 1962 in a seminal paper by Gale and Shapley, who designed the celebrated ``deferred acceptance'' algorithm for the problem. In the input, each participant ranks participants of the opposite type, so the input consists of a collection of permutations, representing the preference lists. A bipartite matching is unstable if some man-woman pair is blocking: both strictly prefer each other to their partner in the matching. Stability is an important economics concept in matching markets from the viewpoint of manipulability. The unicity of a stable matching implies non-manipulability, and near-unicity implies limited manipulability; these are thus mathematical properties related to the quality of stable matching algorithms. We study the effect of correlations on the approximate manipulability of stable matching algorithms. Our approach is to go beyond the worst case, assuming that some of the input preference lists are drawn from a distribution. Our model encompasses a discrete probabilistic process inspired by a popularity model introduced by Immorlica and Mahdian, which provides a way to capture correlation between preference lists. Approximate manipulability is approached from several angles: when all stable partners of a person have approximately the same rank, or when most persons have a unique stable partner. Another quantity of interest is a person's number of stable partners. Our results aim to paint a picture of the manipulability of stable matchings in a ``beyond worst case'' setting.
Novel notions of fairness and resource allocation for congested networked systems
In networking and computing, fairness issues come up in resource allocation: the phase, in a network protocol or system-management stack, in which a group of individual users or clients must each receive a portion of a resource in order for a service to be provided. The legacy approach in these situations is to consider a single decision-maker problem using classical resource-allocation rules such as the proportional or the max-min fair one. With the evolution of telecommunication network technologies, and thanks to advances in computing power and software design, new paradigms are emerging and real-time auditability of the system is becoming possible. In this talk we provide a theoretical and formal analysis of fairness and resource allocation in these new technologies, able to capture the enhanced view users can have of the system when resources are limited and insufficient to fully satisfy users' demand. Modeling the resource-allocation problem as a bankruptcy game, we define a new measure of user satisfaction together with a new resource-allocation rule and a new measure of fairness. We then investigate how we should move from legacy single-resource approaches to novel multi-resource approaches in order to ensure fairness in 5G environments.
De la complexité du problème de Steiner dynamique / On the complexity of the dynamic Steiner tree problem
The well-known Steiner tree problem consists in finding, in a given graph, a minimum-weight tree connecting a set of vertices called terminals. This talk will discuss how to extend this problem to dynamic graphs. We consider dynamic graphs whose evolution is known in advance and in which both the terminal vertices and the intermediate vertices ensuring the connectivity of the terminals are fixed over time. The objective is to find the minimum-weight tree that evolves over time while keeping the terminals connected. We show that this problem is NP-hard even with only two terminals and unit edge weights, one of the few cases for which the static Steiner problem is polynomial.
Studies and Analyses of Combinatorial Optimization Problems using Evolutionary Algorithms
Analyzing a combinatorial optimization problem (COP) makes it possible to understand its structure and to predict the best strategies for reaching the best solutions as quickly as possible. A hash function has been proposed for the traveling salesman problem (TSP) that distinguishes the different solutions of an instance with a very low number of collisions compared to the fitness function. A reduction method for the generalized TSP (GTSP) shrinks the search space with quite significant reduction rates, making it possible to solve instances much faster than with the complete instances. A fitness-landscape analysis of the travelling thief problem (TTP) via local optima networks has shed light on the structure of the search space of different instance classes as a function of the local search heuristics employed.
An introduction to forbidden-transition graphs
Graphs have proved to be an extremely useful tool for modelling routing problems in a very wide range of applications. However, in some of them we need to express constraints on the permitted walks that are stronger than what the standard graph model allows. For example, in a road network there can be a crossroad where drivers are not allowed to turn right, in which case many walks in the underlying graph correspond to routes that a driver may not use. To overcome this limitation, Kotzig introduced the stronger model of forbidden-transition graphs. A transition is a pair of adjacent edges (or of consecutive arcs in the directed case), and a forbidden-transition graph is therefore a graph defined together with a set of pairs of adjacent edges that one may not use consecutively. Owing to their expressiveness and practical interest, forbidden-transition graphs are a fast-emerging field of study, but we are still very far from understanding them as well as regular graphs. Problems of routing, connectivity and robustness in these graphs have received growing attention in the last few decades, but unfortunately they generally turn out to be algorithmically very difficult, even on restricted subclasses of graphs. In this talk, I will give an introduction to forbidden-transition graphs, present some of the challenges that their study raises, and discuss some results I obtained with several co-authors.
Online Search With Maximum Clearance
https://hal.archives-ouvertes.fr/hal-03094824
A polyhedral approach to some max-min problems
We consider a max-min variation of the classical problem of maximizing a linear function over the base of a polymatroid. In our problem we assume that the vector of coefficients of the linear function is not a known parameter of the problem but is some vertex of a simplex, and we maximize the linear function in the worst case. Equivalently, we view the problem as a zero-sum game between a maximizing player whose mixed strategy set is the base of the polymatroid and a minimizing player whose mixed strategy set is a simplex. We show how to efficiently obtain optimal strategies for both players and an expression for the value of the game. Furthermore, we give a characterization of the set of optimal strategies for Player 2. We discuss the implications of our results for problems in search, sequential testing and queuing. All concepts will be explained in the talk. This is joint work with Lisa Hellerstein.
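A hedged formalization of the game (my reading of the abstract, assuming the standard simplex, whose vertices are the unit vectors \(e_1,\dots,e_n\)): with \(B(f)\) the base of the polymatroid, the value is

\[
\max_{x \in B(f)} \; \min_{c \in \{e_1,\dots,e_n\}} c^{\top} x \;=\; \max_{x \in B(f)} \; \min_{i} x_i .
\]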
A Study of the Past, Present, and Future Me
Martin Krejca recently joined the RO team as a Post-Doc. In this talk he will present himself and his main research topics.
The Monge property
In this talk I will give an informal introduction to the Monge property and show how it can be exploited in dynamic programming and for binary search.
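For reference, the standard definition (my addition; the abstract does not spell it out): an \(m \times n\) cost matrix \(C\) has the Monge property if

\[
C[i][j] + C[i'][j'] \;\le\; C[i][j'] + C[i'][j] \qquad \text{for all } i < i' \text{ and } j < j'.
\]

Under this quadrangle inequality the column index of each row's minimum is monotone across rows, which is exactly what dynamic-programming speedups such as SMAWK and divide-and-conquer row-minima searches exploit.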
An improved approximation algorithm for ATSP
In a recent breakthrough, Svensson, Tarnawski, and Végh gave the first constant-factor approximation algorithm for the asymmetric traveling salesman problem (ATSP). In this work we revisit their algorithm. While following their overall framework, we improve on each part of it. Svensson, Tarnawski, and Végh perform several steps of reducing ATSP to more and more structured instances. We avoid one of their reduction steps (to irreducible instances) and thus obtain a simpler and much better reduction to vertebrate pairs. Moreover, we show that a slight variant of their algorithm for vertebrate pairs has a much smaller approximation ratio. Overall we improve the approximation ratio from 506 to \(22 + \epsilon\) for any \(\epsilon > 0\). We also improve the upper bound on the integrality ratio of the standard LP relaxation from 319 to 22.
Designing coresets for the classic \(k\)-median and \(k\)-means problems with minimal dependency in the number of clusters \(k\), the number of input points \(n\), or the underlying dimension, has been an important research direction over the last 15 years. We present a new, simple, coreset construction that achieves the following bounds:
The iteratively reweighted least squares method (IRLS) is a popular technique used in practice for solving regression problems. Various versions of this method have been proposed, but theoretical analyses usually fail to capture their good practical performance. I will present a simple and natural version of IRLS for solving \(\ell_\infty\) and \(\ell_1\) regression, which provably converges to a \((1+\epsilon)\)-approximate solution in \(O(m^{1/3}\log(1/\epsilon)/\epsilon^{2/3} + \log(m/\epsilon)/\epsilon^2)\) iterations, where \(m\) is the number of rows of the input matrix. This running time is independent of the conditioning of the input, and up to \(\mathrm{poly}(1/\epsilon)\) beats the \(O(m^{1/2})\) iterations required by standard interior-point methods.
This improves upon the highly specialized algorithms of Chin et al. (ITCS '12) and Christiano et al. (STOC '11), and yields a truly efficient natural algorithm for the slime mold dynamics (Straszak-Vishnoi, SODA '16, ITCS '16, ITCS '17). I will also highlight a few connections to packing/covering LP solvers and higher-order optimization methods.
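To give a feel for the method, here is the textbook IRLS iteration for \(\ell_1\) regression (my illustration; the talk's provably convergent variant uses a different, more careful reweighting, and the damping constant below is an arbitrary choice):

    # Textbook IRLS sketch for l1 regression: min_x ||Ax - b||_1.
    import numpy as np

    def irls_l1(A, b, iters=50, delta=1e-8):
        x = np.linalg.lstsq(A, b, rcond=None)[0]  # start from the l2 solution
        for _ in range(iters):
            r = A @ x - b
            w = 1.0 / np.maximum(np.abs(r), delta)  # small residual => large weight
            # Weighted least squares: minimize sum_i w_i * (a_i^T x - b_i)^2
            sw = np.sqrt(w)
            x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 5))
    b = A @ rng.standard_normal(5) + (rng.random(100) < 0.1) * 5.0  # sparse outliers
    print(np.abs(A @ irls_l1(A, b) - b).sum())  # l1 error, robust to the outliers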
We prove that the problem is NP-hard and analyze a greedy algorithm, proving that it is a (1/2)-approximation. We then give a polynomial-time approximation scheme (PTAS) for this problem with resource augmentation, i.e., allowing an additional set of very few unit disks that are not drawn from the input. Additionally, for two special cases of the problem we design a PTAS without resource augmentation.
Submodular and diminishing-returns (DR) submodular optimization are important optimization problems with many real-world applications in machine learning, economics and communication systems. Moreover, DR-submodular optimization captures a subclass of non-convex optimization that provides both practical and theoretical guarantees. In this talk, we present algorithms with performance guarantees for the fundamental problems of maximizing non-monotone submodular and DR-submodular functions over convex sets in offline, online and bandit settings.
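For reference, the standard definition (my addition, not in the abstract): \(f : [0,1]^n \to \mathbb{R}\) is DR-submodular if marginal gains diminish coordinatewise, i.e.,

\[
f(x + a\,e_i) - f(x) \;\ge\; f(y + a\,e_i) - f(y) \qquad \text{for all } x \le y,\; a > 0 \text{ with } y + a\,e_i \in [0,1]^n,
\]

which for differentiable \(f\) amounts to \(\nabla f(x) \ge \nabla f(y)\) whenever \(x \le y\).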
Hardness amplification is the task of taking a problem that is hard to solve on some small fraction of inputs, and producing a (possibly different) problem that is hard to compute on a large fraction of inputs. Hardness amplification is an important step towards understanding average-case hardness, and is motivated by modern cryptography. The problems of focus in hardness amplification have typically been problems believed to be infeasible, such as problems in EXP and NP. In this talk, we will explore some new arenas in hardness amplification, mainly hardness amplification in P.
Based on joint work with Elazar Goldenberg.
We introduce the Itinerant List-Update Problem (ILU), a relaxation of the classic List-Update Problem (LU) in which the pointer no longer has to return to a home location after each request. The motivation for introducing ILU arises from the application of track management in Domain Wall Memory. We first show an Ω(log n) lower bound on the competitive ratio of any randomized online algorithm for ILU. This shows that online ILU is harder than online LU, for which O(1)-competitive algorithms, such as Move-To-Front, are known. We then show that ILU is essentially equivalent to a variation of the Minimum Linear Arrangement Problem (MLA), which we call the Dynamic Minimum Linear Arrangement (DMLA) problem. We then give an offline polynomial-time algorithm for DMLA and show that it has a polylogarithmic approximation guarantee.
This is joint work with Neil Olver, Kirk Pruhs, René Sitters, and Leen Stougie.
Differential Evolution (DE) emerged as a simple yet very competitive evolutionary algorithm for optimization over continuous parameter spaces. For more than two decades, DE and its variants (the "DE family" to be precise) have been exhibiting brilliant performance over numerical benchmarks as well as several real world optimization problems. Unlike the traditional evolution strategies or real-coded genetic algorithms, DE does not sample the perturbation step-size for its population members from a parameterized probability distribution. Instead, it uses a kind of self-referential mutation where the scaled vector difference(s) of the current population members are used to perturb others. This talk will discuss the basic working principle of DE and will highlight some of the recent DE variants that are extensively in use for single-objective, multi-modal and non-convex optimization problems. It will also highlight some future research issues including the theoretical studies that need to be undertaken to fully understand DE.
Multi-component problems play a crucial role in real-world applications, especially in the area of supply chain management. Recently, the traveling thief problem (TTP) has been introduced to study multi-component problems in a systematic way, and many heuristic search algorithms have been proposed for the TTP. Although a lot of algorithmic advances have been made on this problem, determining an optimal solution, even for small instances, is very challenging. In this talk, we will present exact and hybrid approaches for this problem. We start by investigating the already NP-hard Packing While Traveling (PWT) problem, which results from TTP when the TSP tour is fixed. We present an exact dynamic programming approach for PWT and give a fully polynomial time approximation scheme (FPTAS) for PWT over its baseline travel cost. Afterwards, we extend the approach to give a dynamic programming (DP) approach for TTP and report on some experimental results. Furthermore, we will show how the DP for PWT can be incorporated into an evolutionary multi-objective algorithm to tackle a multi-objective formulation of TTP. Joint work with Sergey Polyakovskiy, Martin Skutella, Leen Stougie, Junhua Wu, and Markus Wagner.
Evolutionary algorithms have recently been used to create a wide range of artistic work. In this paper, we propose a new approach for composing new images from existing ones, retaining some salient features of the original images. We introduce evolutionary algorithms that create new images based on a fitness function that incorporates feature covariance matrices associated with different parts of the images. This approach is very flexible in that it can work with a wide range of features and enables targeting specific regions in the images. For the creation of the new images, we propose a population-based evolutionary algorithm with mutation and crossover operators based on random walks. Our experimental results reveal a spectrum of aesthetically pleasing images that can be obtained with the aid of our evolutionary process.
We give a summary of the iterative method from the book "Iterative Methods in Combinatorial Optimization" by Lap Chi Lau, R. Ravi and Mohit Singh.
We introduce a family of polytopes, called primitive zonotopes, which can be seen as a generalization of the permutahedron of type \(B_d\). We discuss connections to the largest diameter of lattice polytopes and to the computational complexity of multicriteria matroid optimization. Complexity results and open questions are also presented. In particular, we answer a question raised in 1986 by Colbourn, Kocay, and Stinson by showing that deciding whether a given sequence is the degree sequence of a 3-hypergraph is computationally prohibitive. Based on joint works with Asaf Levin (Technion), George Manoussakis (Paris Sud), Syed Meesum (IMSc Chennai), and Shmuel Onn (Technion).
Hash functions have become ubiquitous tools in modern data analysis, e.g., the construction of small randomized sketches of large data streams. We like to think of abstract hash functions, assigning independent uniformly random hash values to keys, but in practice, we have to choose a hash function that only has an element of randomness, e.g., 2-independence. While this works for sufficiently random input, the real world has structured data where such simple hash functions fail, calling for the need of more powerful hash functions. In this talk, we focus on hashing for set similarity, which is an integral component in the analysis of Big Data. The basic idea is to use the same hash function to do coordinated sampling from different sets. Depending on the context, we want subsets sampled without replacement, or fixed-length vectors of samples that may be with replacement. The latter is used as input to support vector machines (SVMs) and locality sensitive hashing (LSH). The most efficient constructions require very powerful hash functions that are also needed for efficient size estimation. We discuss the interplay between the hash functions and the algorithms using them. Finally, we present experiments on both real and synthetic data where standard 2-independent hashing yields systematically poor similarity estimates, while the right theoretical choice is sharply concentrated, and faster than standard cryptographic hash functions with no proven guarantees.
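As a concrete illustration of coordinated sampling (my example, not from the talk): applying the same hash function to two sets, the probability that the hash-minimizing elements coincide equals the sets' Jaccard similarity. The sketch below uses Python's built-in hash, i.e., precisely the kind of hashing without proven guarantees whose failure modes the talk discusses.

    # Minimal MinHash-style sketch for set-similarity estimation.
    def minhash_signature(s, seeds):
        # One coordinated sample per seed: the key minimizing the seeded hash.
        return [min(s, key=lambda x: hash((seed, x))) for seed in seeds]

    def estimate_jaccard(a, b, k=256):
        seeds = range(k)
        sa, sb = minhash_signature(a, seeds), minhash_signature(b, seeds)
        return sum(x == y for x, y in zip(sa, sb)) / k

    A = set(range(0, 80))
    B = set(range(40, 120))
    print(estimate_jaccard(A, B))  # true Jaccard = 40/120 ~ 0.33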
We will present algorithms for the evacuation problem by a pair of distinct-speed robots on an infinite line. In this problem, two mobile robots with different maximal speeds are initially placed at the same point on an infinite line. The robots need to find a stationary target (i.e., the exit), which is placed at an unknown location on the line. The search is completed when both robots arrive at the exit and the goal is to conclude evacuation in as little time as possible. The robot that discovers the exit first may communicate it to the other robot. We consider two models of communication between the robots, namely wireless communication and face-to-face communication. We present an optimal algorithm for any combination of robot speeds in the case of face-to-face communication. In the case of wireless communication, our algorithm is optimal if the slow robot is at most 6 times slower than the fast robot.
I will present different projects I am currently working on and questions I am interested in. I presented simple stochastic games at seminar S a few months ago, so I will focus on periodic scheduling and enumeration algorithms.
In a so-called mixed-shop scheduling problem, the operations of some jobs have to be processed in a fixed order (as in the job-shop problem); the other ones can be processed in an arbitrary order (as in the open-shop problem). In this paper we present a new exact polynomial-time algorithm for the mixed-shop problems with preemptions and at most two unit operations per job.
Joint work with Aldar Dugarzhapov.
We consider a novel single-machine scheduling problem where the processing time of a job can potentially be reduced (by an a priori unknown amount) by testing the job. Testing a job \(j\) takes one unit of time and may reduce its processing time from the given upper limit \(\bar{p}_j\) (which is the time taken to execute the job if it is not tested) to any value between \(0\) and \(\bar{p}_j\). This setting is motivated, e.g., by applications where a code optimizer can be run on a job before executing it. We consider the objective of minimizing the sum of completion times. All jobs are available from the start, but the reduction in their processing times as a result of testing is unknown, making this an online problem that is amenable to competitive analysis. The need to balance the time spent on tests and the time spent on job executions adds a novel flavor to the problem. We give the first and nearly tight lower and upper bounds on the competitive ratio for deterministic and randomized algorithms. We also show that minimizing the makespan is a considerably easier problem, for which we give optimal deterministic and randomized online algorithms.
Joint work with Thomas Erlebach, Nicole Megow, and Julie Meißner.
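To illustrate the test-or-not tradeoff (my sketch under simplifying assumptions, not the paper's algorithm; the golden-ratio threshold is just one natural choice, and the code evaluates the easier makespan objective):

    # Threshold rule for scheduling with testing on one machine (sketch).
    # Testing a job costs 1 time unit and reveals its true time p_j <= ub_j.
    def schedule_with_testing(jobs, threshold=1.618):
        """jobs: list of (upper_limit, true_time); returns the makespan.

        Untested jobs must run for their full upper limit; tested jobs run
        for their (possibly much smaller) true time, plus 1 unit of testing.
        """
        total = 0.0
        for ub, true_time in jobs:
            if ub >= threshold:
                total += 1 + true_time  # test, then run the revealed time
            else:
                total += ub             # run untested at the upper limit
        return total

    # Testing pays off for the loose first job, is wasted on the tight third.
    print(schedule_with_testing([(5.0, 0.5), (1.2, 1.0), (3.0, 3.0)]))  # 6.7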
An algorithm is a set of instructions to process the input for a given problem. In the classical setting, algorithms have access to the entire input and the algorithm is a function applied to this input. The result of the function being the output. In contrast, in the online setting, the input is revealed sequentially, piece by piece; these pieces are called requests. Moreover, after receiving each request, the algorithm must take an action before the next request is revealed. That is, the algorithm must make irrevocable decisions based on the input revealed so far without any knowledge of the future input. Since the future is unknown, these decisions could prove very costly. Online problems have many real-world applications such as paging, routing and scheduling. In this talk, I'll review the topic of online computation, some classic online problems, and some techniques used to analyze online algorithms that have been developed over the last 30 years. Then, I'll show how our new techniques (the bijective ratio and approximate stochastic dominance) fit into this rich domain and apply them to classic problems with a particular focus on the greedy algorithm for the k-server problem, an algorithm that performs well in practice (on certain metric spaces) when the classic analysis tools claim it should not.
We give the first polynomial-time approximation schemes (PTASs) for the following problems: (1) uniform facility location in edge-weighted planar graphs; (2) k-median and k-means in edge-weighted planar graphs; (3) k-means in Euclidean space of bounded dimension. Our first and second results extend to minor-closed families of graphs. All our results extend to cost functions that are the p-th power of the shortest-path distance. The algorithm is local search where the local neighborhood of a solution S consists of all solutions obtained from S by removing and adding \(1/\epsilon^{O(1)}\) centers.
Joint work with Philip N. Klein, and Claire Mathieu.
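For intuition, here is the single-swap version of such a local search for k-median (my illustration; the PTAS above uses neighborhoods of \(1/\epsilon^{O(1)}\) swaps, and single swaps alone give only a constant-factor guarantee in general):

    # Single-swap local search for k-median (illustrative sketch).
    import itertools, random

    def kmedian_cost(points, centers, dist):
        return sum(min(dist(p, c) for c in centers) for p in points)

    def local_search_kmedian(points, k, dist, seed=0):
        rng = random.Random(seed)
        centers = set(rng.sample(points, k))
        improved = True
        while improved:
            improved = False
            base = kmedian_cost(points, centers, dist)
            # Try all single swaps: remove one center, add one non-center.
            for out, inn in itertools.product(list(centers), points):
                if inn in centers:
                    continue
                cand = (centers - {out}) | {inn}
                if kmedian_cost(points, cand, dist) < base * (1 - 1e-9):
                    centers, improved = cand, True
                    break
        return centers

    pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 5)]
    d = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
    print(local_search_kmedian(pts, 2, d))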
Dantzig–Wolfe decomposition (or reformulation) is well-known to provide strong dual bounds for specially structured mixed integer programs (MIPs). However, the method was not implemented in any state-of-the-art MIP solver as it is considered to require structural problem knowledge and tailoring to this structure. We provide a computational proof-of-concept that the reformulation can be automated. We demonstrate that for generic MIPs strong dual bounds can be obtained from the automatically reformulated model using column generation. In the second part of the talk we apply a recursive Automatic Dantzig–Wolfe reformulation to the Temporal Knapsack Problem (TKP) which is a generalization of the standard Knapsack Problem where a time horizon is considered, and each item consumes the knapsack capacity during a limited time interval only. We then show that this new method allows us to solve TKP instances to proven optimality through computation of extremely strong dual bounds.
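For reference, the textbook reformulation underlying this approach (standard material, not specific to the talk; bounded structured part assumed, extreme rays omitted): a MIP \(\min\{c^{\top}x : Ax \ge b,\ x \in X\}\) with structured subproblem \(X\) is rewritten over the extreme points \(x_p\), \(p \in P\), of \(\mathrm{conv}(X)\) as

\[
\min \sum_{p \in P} (c^{\top} x_p)\,\lambda_p \quad \text{s.t.} \quad \sum_{p \in P} (A x_p)\,\lambda_p \ge b, \qquad \sum_{p \in P} \lambda_p = 1, \qquad \lambda \ge 0,
\]

whose exponentially many columns are handled by column generation; the strong dual bound is the optimal LP value of this master problem.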
In this talk we will discuss the dynamic control of resource-sharing systems that arise in various domains, e.g. inventory management and communication networks. We aim at efficiently allocating the available resources among competing projects according to a certain performance criterion. In particular, we will focus on Restless Bandit (RB) allocation problems. These types of problems have a stochastic nature and may be very complex to solve. We will go through different possible ways of solving RB problems using scaling and relaxation techniques. The latter allow us to obtain simple and ready-to-implement suboptimal policies. We will discuss the asymptotic optimality of these policies in interesting regimes such as the heavy-traffic, light-traffic and many-users regimes. We will provide several application examples for which near-optimal heuristics have been obtained.
The optimal value computation for turn-based stochastic games with reachability objectives, also known as simple stochastic games, is one of the few problems in NP ∩ coNP which are not known to be in P. However, there are some cases where these games can be easily solved, for instance when the underlying graph is acyclic. I will present three classes of games that can be thought of as "almost" acyclic, obtained by restricting parameters such as the number of cycles or the size of the minimal feedback vertex set. For these classes, we provide several polynomial algorithms or fixed-parameter algorithms.
We address the problem of coordinating the planning decisions for a single product in a supply chain composed of one supplier and one retailer. We assume that the retailer has private information about his cost structure and that he has the market power: he can impose his optimal replenishment plan. When the actors of the supply chain act individually, the supplier's cost can be large, since he has to satisfy the retailer's optimal replenishment plan. In order to decrease the supplier's cost, however, side payments can be allowed between the actors. We propose to design contracts between the actors under the asymmetric-information assumption in order to decrease the supplier's cost.
We propose several special cases of the MUCP in order to discuss the complexity issues of the problem. We will present two open questions that still annoy us.
We will discuss the complexity of one of the most basic problems in pattern matching, that of approximating the Hamming distance. Given a pattern P of length n the task is to output an approximation of the Hamming distance (that is, the number of mismatches) between P and every n-length substring of a longer text. We provide the first efficient one-way randomised communication protocols as well as a new, fast and space efficient streaming algorithm for this problem.
In data centers, many tasks (services, virtual machines or computational jobs) share a single physical machine. Machines are used more efficiently, but tasks' performance deteriorates, as colocated tasks compete for shared resources. As tasks are heterogeneous (CPU-, memory-, network- or disk-intensive), the resulting performance dependencies are complex.

We explore a new resource management model for such colocation. Our model uses two parameters of a task - its size and its type - to characterize how a task influences the performance of the other tasks allocated on the same machine. The performance of a task is a function of the loads of all tasks assigned to the machine, where the load of each type is counted separately.

We consider minimization of the total cost (utilitarian fairness). We show that for a linear cost function the problem is strongly NP-hard, but polynomially solvable in some particular cases. We propose an algorithm polynomial in the number of tasks (but exponential in the number of types and machines), and another algorithm polynomial in the number of tasks and machines (but exponential in the number of types and admissible sizes of tasks). We also propose a polynomial approximation algorithm and, in the case of a single type, a polynomial exact algorithm. For convex costs, we prove that the problem becomes NP-hard even for a single type; we propose an approximation algorithm for this case.
Clique clustering is the problem of partitioning the vertices of a graph into disjoint clusters, where each cluster forms a clique in the graph, while optimizing some objective function. In online clustering, the input graph is given one vertex at a time, and any vertices that have previously been clustered together are not allowed to be separated. The goal is to maintain a clustering with an objective value close to the optimal solution. For the variant where we want to maximize the number of edges in the clusters, we propose an online strategy based on the doubling technique. It has an asymptotic competitive ratio at most 15.646 and an absolute competitive ratio at most 22.641. We also show that no deterministic strategy can have an asymptotic competitive ratio better than 6. For the variant where we want to minimize the number of edges between clusters, we show that the deterministic competitive ratio of the problem is \(n - \omega(1)\), where n is the number of vertices in the graph.