Seminars

  • Tuesday, November 25, 12.30 PM
    Quantitative Hospital Resilience Framework. A Case Study of Surgery Ward Resilience by Gabriela Ciolacu (PhD student at the Karlsruhe Institute of Technology, Germany)

    Mismanaging a hospital’s operations during adverse events can lead not only to financial losses but also to non-financial impacts, such as patient neglect, higher casualty rates, and personnel burnout.

    To minimize such losses, hospital decision-makers evaluate resilience. Resilience is the hospital’s capacity to prepare, resist, absorb, and quickly recover from adverse events. This study examines quantitative resilience indicators and their application to hospitals following adverse events.

    Despite the evident benefit of quantitative resilience frameworks, extant works that examine hospitals and hospital units fail to incorporate contextually relevant requirements. Incorporating hospital requirements in resilience frameworks ensures meaningful results for decision-makers, alignment of resilience goals and indicators, integrity, and fair comparison.

    Hence, we propose a novel hospital resilience framework and indicator to better assist decision-makers in accurately evaluating resilience, understanding possible bottlenecks during adverse events, and assessing whether a policy could enhance or hinder a hospital’s adverse event performance. To demonstrate the applicability of the proposed framework and indicator, we present a real-life case study focusing on a surgical ward and its adjacent units. The case study examines system performance under three adverse events: a demand surge, a supply shock, and a combination of both scenarios.

  • Tuesday, September 30, 12.30 PM
    “Online Optimization for the Robust Capacitated Team Orienteering Problem under Uncertainty” by Siamak Khayyati (HEC Liège)

  • Tuesday, October 28, 12.30 PM
    “Spatially dynamic microsimulations” by Jan Weymeirsch (Universität Trier)

    Spatially dynamic microsimulations have particular potential for simulating populations at a very detailed geographical level, such as neighbourhoods, blocks of houses or addresses. This typically requires a detailed building and housing data set in order to model migration flows, particularly in view of the highly dynamic housing market and local dwelling capacities. However, there is currently no comprehensive building register for Germany that meets the requirements for the planned use, in particular one that is openly accessible to the research community and distinguishes between buildings in terms of their use as residential space or as potential workplaces.

    In an initial pilot study, I have already evaluated possibilities for using publicly available data, in particular OpenStreetMap (OSM), together with locally available official data, as a basis for generating such a data set in a major German city. Building on the conclusions drawn from this, I now want to extend my approach to cover the whole of Germany. To do this, I am linking official data with public data sources such as OSM and classifying buildings according to their use and living space. The resulting dataset is intended for scientific use such as spatially dynamic microsimulations.

  • Wednesday, November 6, 12.30 PM
    Binary linear programming formulation for a two-stage dual bin packing problem for wood reuse by Pauline Bessemans (HEC Liège)

    The increasing demand for raw materials such as wood is undoubtedly contributing to the depletion of natural resources and global warming. To curb this phenomenon, a more sustainable and circular management of wood could be developed by intelligently handling wood waste. This wood waste can be in the form of beams or pallets and could be considered as wooden slats. They could be combined, assembled, and glued to build Cross-Laminated Timber (CLT) panels for the construction industry. We aim to develop optimization techniques to recycle raw wood waste by providing assembly schemes to create CLT panels. The goal is to minimize the waste, which is the wood that could not be reused in the CLT panels. We conducted a literature review to identify the closest problems in the field of operations research and to name our problem accordingly. The skiving stock problem and the dual bin packing problem, which is not a dual version of the cutting stock/bin packing problem, are the two closest problems to ours. The present work addresses for the very first time an exact case of the two-stage two-dimensional dual bin packing problem (E-2S-2D-DBPP) in the context of wood reuse. We propose a description of the problem and a mathematical formulation with cuts. We also present the results of several numerical experiments based on realistic instances from the wood industry and identify the size limit of the instances for which the problem can still be solved in a reasonable amount of time.
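    The skiving idea mentioned above can be illustrated with a toy greedy sketch (our own simplification, not the authors' exact formulation or their two-stage MILP): combine slat lengths into panels whose total length meets a minimum target, counting leftovers that cannot form another panel as waste. All names and values below are hypothetical.

```python
def greedy_skiving(slats, target):
    """Greedily pack slat lengths into panels of total length >= target.

    Returns (panels, waste): panels is a list of slat-length lists,
    waste is the total length of slats left over. This is only a toy
    illustration of the skiving / dual bin packing idea.
    """
    remaining = sorted(slats, reverse=True)  # longest slats first
    panels = []
    while remaining:
        panel, length = [], 0
        # keep adding the longest remaining slat until the target is met
        while remaining and length < target:
            slat = remaining.pop(0)
            panel.append(slat)
            length += slat
        if length >= target:
            panels.append(panel)
        else:
            # leftover slats cannot complete another panel: they are waste
            return panels, sum(panel)
    return panels, 0

panels, waste = greedy_skiving([120, 80, 60, 50, 40, 30], target=150)
# two panels are formed; the last 30-unit slat cannot reach 150 and is wasted
```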

  • Wednesday, April 2, 12.30 PM
    “Multi-product maritime inventory routing problem” by Homayoun Shaabani (HEC Liège)

    This presentation covers two logistical drivers of the supply chain, inventory and transportation, in the context of the inventory routing problem (IRP). The IRP is based on the connection between inventory management and transportation decisions. It involves determining the optimal routing of vehicles while maintaining required inventory levels and satisfying customer demand, thereby minimizing stockouts and reducing transportation costs. Efficient IRP solutions ensure the timely availability of products to meet customer demand and enable businesses to enhance their competitiveness.

    The presentation includes a summary of two papers. The first paper presents a matheuristic approach to optimize the multi-product maritime IRP (MIRP). The second paper introduces stability metrics for handling uncertainty in sailing times, enhancing resilience in maritime operations within the context of MIRP. 

  • Wednesday, April 16, 12.30 PM
    “Machine learning for the analysis of queueing systems” by Siamak Khayyati (HEC Liège)

  • Wednesday, May 7, 12.30 PM
    “What many operations researchers have done wrong and what is the remedy?” by Thomas Stützle (F.R.S.-FNRS and Université Libre de Bruxelles (ULB), IRIDIA)

    This talk is about a problem that arises in optimization algorithms: a mismatch between what operations researchers (and probably many others) have done wrong in their research and what the right thing to do would be. A question arises: how strong is the error’s role in optimization? Fortunately, the problem is not too big, but it nevertheless persists. In this talk we also detail some ways in which this problem can be resolved.

 

  • Tuesday, March 5, 11 AM
    “Crowd-shipping under Uncertainty: Models, Solution Approaches, and Compensation Issues” by Michel Gendreau (Polytechnique Montréal – Canada)

    E-commerce continues to grow all over the world. The recent pandemic caused by COVID-19 has increased this trend. Concurrently, crowd-shipping is emerging as a viable solution to fulfill last-mile deliveries, with AmazonFlex taking the lead in implementing such distribution models.
    In this talk, we consider the situation of a crowd-shipping platform that must fulfill delivery requests from a central depot with a fleet of professional vehicles and a pool of crowd drivers. The supply (number) of crowd drivers is uncertain. We thus propose stochastic programming models for two variants of the problem: one in which we consider a single pool of drivers and one in which the territory served by the platform is divided into geographical sectors and drivers are characterized by their destination sector. Exact solution approaches are provided in each case.
    We also briefly discuss drivers’ compensation issues. We assume that drivers can accept or reject routes and that the probability of route acceptance is dependent on the compensation offered. We determine the market equilibrium when the stochastic route acceptance of crowd-drivers is considered.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room TBD

  • Tuesday, February 6, 1 PM
    “Digital Twins and Scientific simulations with NVIDIA Omniverse” by Johan Barthélemy (NVIDIA)

    NVIDIA Omniverse is a software development platform based on the OpenUSD format that enables the creation of AI-enabled 3D pipelines, tools, applications, and services. It offers advanced simulation capabilities, including direct integration of PhysX, Blast, and Flow for realistic physics, destruction, fire, and smoke simulation.
    It also includes an advanced multi-GPU real-time hardware-accelerated RTX Renderer, which supports real-time ray tracing and an interactive Path Tracer, enabling the creation of highly realistic simulations.
    In the context of digital twins, Omniverse can be used to create virtual worlds that accurately reflect physical systems. These digital twins can run numerous simulations to study multiple processes, benefiting from real-time data and the added computing power of a virtual environment.
    This seminar will delve into the key features of Omniverse, such as its ability to orchestrate 3D data pipelines, bridge data silos, connect generative AI tools, create digital avatars, and simulate large-scale complex systems.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room N1b – 1711

  • Thursday, January 25, 10:30 AM
    “Optimization with Analog Quantum Machine” by Samuel Deleplanque (JUNIA-ISEN/IEMN Lille)

    This research employs analog quantum machines like those from Canada’s D-Wave, France’s Pasqal, and America’s QuEra. These machines handle quadratic unconstrained binary optimization (QUBO) programs, which has spurred research into this type of model. After some explanations of the machines’ operation (from a computer scientist’s point of view) and of universal gate-based quantum machines (e.g., IBM’s), we will review some optimization and operations research problems that are modeled and solved by analog quantum systems. We will see how to solve (among others) the TSP, CVRP, JSSP, RCPSP, Max Cut, and 3-SAT. For the latter, we will see that applying a polynomial-time reduction to the MIS can simplify the resolution when the new graph obtained is less dense, despite an increased number of variables. Indeed, the topology of the machines must be considered, such as the qubit graph, which is not complete.
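    As a toy illustration of a QUBO formulation (our own sketch, not the speaker's material): for Max Cut, each edge (i, j) contributes -1 to the diagonal entries Q[i][i] and Q[j][j] and +2 to the coupling Q[i][j], so that minimizing x^T Q x over binary x maximizes the number of cut edges. A brute-force solve on a tiny graph stands in for the annealer:

```python
import itertools

def maxcut_qubo(n, edges):
    """Build the (upper-triangular) QUBO matrix for Max Cut."""
    Q = [[0] * n for _ in range(n)]
    for i, j in edges:
        Q[i][i] -= 1                   # linear terms (x_i^2 == x_i for binary x)
        Q[j][j] -= 1
        Q[min(i, j)][max(i, j)] += 2   # quadratic coupling term
    return Q

def brute_force_qubo(Q):
    """Enumerate all binary vectors and return a minimizer of x^T Q x.
    An analog quantum annealer samples low-energy states of the same
    objective instead of enumerating."""
    n = len(Q)
    best = min(
        itertools.product((0, 1), repeat=n),
        key=lambda x: sum(Q[i][j] * x[i] * x[j]
                          for i in range(n) for j in range(n)),
    )
    return list(best)

# 4-cycle: the optimal bipartition separates opposite vertices, cutting all 4 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
x = brute_force_qubo(maxcut_qubo(4, edges))
cut = sum(1 for i, j in edges if x[i] != x[j])  # cut size of the solution
```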

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room N1a – 223

  • Thursday, January 25, 9:45 AM
    “Irrelevant Sentences Detection for Automated Business Process Modeling” by Julie Jamar (HEC Liège Management School of the University of Liege)

    Business process modeling is a crucial task requiring considerable time and knowledge. Several approaches have been developed for automating the transformation of textual process descriptions to consistent and complete process models. However, those state-of-the-art solutions require process description sentences to be sequential and to exclude irrelevant information, necessitating the support of process modelers to clean the texts and manage the resulting models. Thus, one challenging issue largely overlooked in the literature is the detection of irrelevant sentences, generally leading to erroneous model representations. In this paper, we alleviate this problem by presenting an approach based on machine learning and natural language processing techniques for automatically detecting such sentences in business process textual descriptions.
    Another key contribution of our work is the creation of a novel dataset consisting of thousands of manually annotated sentences from 1020 different processes to train our algorithm effectively. Through a quantitative evaluation on real-world data, we demonstrate that our approach efficiently detects irrelevant sentences in business process textual descriptions.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room N1a – 223

  • Wednesday, November 15, 12:45 PM
    “Open Source licenses: the Good, the Bad, and the Ugly” by Jérémie Fays (University of Liège)


    Open source licenses: from Zero to Hero
    PhD students: https://www.recherche.uliege.be/books/formations-transversales/110/
    Others: https://www.recherche.uliege.be/cms/c_18864701/en/open-sources-licenses-from-zero-to-hero

    The other trainings linked to software development are listed just next to my training on open source: https://www.recherche.uliege.be/books/formations-transversales/110/

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room N1a 220

  • Thursday, October 26, 11:15 AM
    “Combinatorial Robust Optimization with Decision-Dependent Information Discovery and Polyhedral Uncertainty.” by Michaël Poss (Researcher at LIRMM, Montpellier, France)

    Given a nominal combinatorial optimization problem, we consider a robust two-stage variant with polyhedral cost uncertainty, called DDID. In the first stage, DDID selects a subset of uncertain cost coefficients to be observed, and in the second stage, DDID selects a solution to the nominal problem, where the remaining cost coefficients are still uncertain. Given a compact linear programming formulation for the nominal problem, we provide a mixed-integer linear programming (MILP) formulation for DDID. The MILP is compact if the number of constraints describing the uncertainty polytope other than lower and upper bounds is constant. In that case, the formulation leads to polynomial-time algorithms for DDID when the number of possible observations is polynomially bounded. We extend this formulation to more general nominal problems through column generation and constraint generation algorithms. We illustrate our reformulations and algorithms numerically on the selection problem, the orienteering problem, and the spanning tree problem. Joint work with Jérémy Omer.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room TBD

 

  • Tuesday, February 14, 11:00 AM
    “About the gaps between applied mathematics and applicability, the case of train crew scheduling.” by Jérôme De Boeck (University of Fribourg and ULB)

    Railway crew scheduling problems have been studied for some decades, but there still remains a gap between the mathematical methods developed and real-life applicability, and not only because of mathematical challenges. The annual train and crew schedule is decomposed into several steps: the timetable planning for customers, the assignment of trains to the trips of the timetable, and finally the crew scheduling. This talk focuses on the crew scheduling problem for SBB, the Swiss national train company. We will discuss set covering approaches, column generation methods, and resource-constrained shortest path problems, which are generally used for such large-scale optimization problems. The size of real-life instances is an important issue for crew scheduling algorithms, as is the combinatorial aspect of the problem. Column generation for integer problems can lead to poor solution quality because the number of columns generated is too large for a MILP solver to find quality integer solutions. We will discuss an intuitive heuristic we developed for column generation based integer formulations that deals with this large number of integer variables. We will also discuss technical difficulties that can be encountered when working on a “real” applied problem by giving an overview of the whole project, from the initial demand of SBB to what could be delivered in terms of solutions. Several technical and legal elements make the gathering of the data, the modeling of the problem, and the evaluation of the solution quality very difficult. We will try to understand why, and we will see that some philosophical questions arise. Scientists working in optimization know very well the notion of gap to optimality, but there is also a notion of “gap to reality” to take into consideration in applied problems.
    This gap to reality is hard, if not impossible, to quantify, but it is crucial to keep in mind in order to deliver solutions to real-life applications. Some examples of this gap and their implications for the evolution of the research project with SBB will be discussed.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room TBD

  • Wednesday, December 14, 11:00 AM
    “From shortest path routing to segment routing: challenges in the optimization of Internet networks” by Bernard Fortz (ULB)

    The Internet as we know it today has its origin in the late 1960s, when the first packet switching networks were developed using a variety of communications protocols. Packet switching is a networking design that divides messages up into arbitrary packets, with routing decisions made per-packet. It provides better bandwidth utilization and response times than the traditional circuit-switching technology used for legacy telephony, particularly on resource-limited interconnection links.

    IP routing is the field of routing methodologies of Internet Protocol (IP) packets within and across IP networks. In routers, packets arriving at any interface are examined for source and destination addressing and queued to the appropriate outgoing interface according to their destination address and a set of rules and performance metrics. Routing tables are maintained either manually by a network administrator, or updated dynamically with a routing protocol.

    In this talk, I will briefly review the evolution of routing protocols over the last 25 years and the optimization challenges that they raise. In particular, I will focus on current research avenues opened by the new segment routing protocol that emerged recently, and present recent results obtained with pre-processing techniques developed to solve a challenging MIP model for optimizing the choice of segments in order to minimize the congestion in the network.

    This talk is based on joint work with Hugo Callebaut, Jérôme De Boeck and Stefan Schmid.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room -186(N1d)

  • Tuesday, December 6, 3:45 PM
    “Many-to-one assignment markets: extreme core allocations” by Ata Atay (University of Barcelona)

    This paper studies many-to-one assignment markets, or matching markets with wages. Although it is well-known that the core of this model is non-empty, the structure of the core has not been fully investigated. To the known dissimilarities with the one-to-one assignment game, we add that the bargaining set does not coincide with the core, the kernel may not be included in the core, and the tau-value may also lie outside the core. Besides, not all extreme core allocations can be obtained by a procedure of lexicographic maximization, as is the case in the one-to-one assignment game. Our main results are on the extreme core allocations. First, we characterize the set of extreme core allocations in terms of a directed graph defined on the set of workers and also provide a necessary condition for each side-optimal allocation. Finally, we prove that each extreme core allocation is the result of sequentially maximizing or minimizing the core payoffs according to a given order on the set of workers.

    (joint with Marina Núñez and Tamás Solymosi)

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room -186(N1d)

  • Monday, November 21, 12:30 PM
    “Distributionally Robust Optimal Allocation with Costly Verification” by Mustafa C Pinar (Bilkent University)

    We consider the mechanism design problem of a principal allocating a single good to one of several agents without monetary transfers. Each agent derives positive utility from owning the good and uses it to create value for the principal. We designate this value as the agent’s private type.
    Even though the principal does not know the agents’ types, she can verify them at a cost. The allocation of the good thus depends on the agents’ self-declared types and the results of any verification performed, and the principal’s payoff matches her value of the allocation minus the costs of verification. It is known that if the agents’ types are independent, then a favored-agent mechanism maximizes her expected payoff. Such a mechanism assigns the good to a favored agent without verification whenever the reported types of all other agents—adjusted for the costs of verification—fall below a given threshold. Otherwise, it allocates the good to any agent for which the reported type minus the cost of verification is maximal and verifies his report.

    However, this result relies on the unrealistic assumptions that the agents’ types follow known independent probability distributions. In contrast, we assume here that the agents’ types are governed by an ambiguous joint probability distribution belonging to a commonly known ambiguity set and that the principal maximizes her worst-case expected payoff.
    We study support-only ambiguity sets, which contain all distributions supported on a rectangle, Markov ambiguity sets, which are characterized through first-order moment bounds, and Markov ambiguity sets with independent types, which additionally require agents’ types to be independent. In all cases we construct explicit favored-agent mechanisms that are not only optimal but also Pareto-robustly optimal.

    Joint work with H.I. Bayrak (Bilkent Univ.), C. Kocyigit (Univ of Luxembourg) and D. Kuhn (EPFL).

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room -189 (N1d)

  • Friday, November 18, 12:45 PM
    “The use of synthetic populations for modelling purposes”
    by Morgane Dumont (HEC Liège)

    In many different fields, modelling is useful to adapt a strategy or a policy, to forecast different scenarios, and so on. However, the models’ implementation often requires input data that are not always easily available. The creation of a synthetic population consists in proposing a population statistically similar to the real one in terms of the aggregated data available. The talk will present the advantages and challenges of this methodology, as well as two different applications: a synthetic Belgian population and a synthetic population of space debris.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room 088(N1d)

  • Monday, November 7, 1:15 PM
    “Cutting Plane Algorithms for the Robust Kidney Exchange Problem”
    by Danny Blom (University of Eindhoven)

    The goal of kidney exchange programs is to match recipients having a willing but incompatible donor with another compatible donor, so as to maximize total (weighted) transplants. Nevertheless, in practice, a significant proportion of identified exchanges does not proceed to transplant, due to a variety of reasons. Planning exchanges while considering such failures, and options for recourse, is therefore crucial. In this talk, we reconsider a robust optimization model with recourse first proposed by Carvalho et al. (2020), taking into account the event that a number of pairs/donors withdraw from the KEP. After these donors have withdrawn from the program, a new set of exchanges is identified (the recourse decision), with the aim of realizing as many of the originally planned transplants as possible. Current algorithmic approaches do not allow finding optimal solutions to the robust optimization model for realistically sized kidney exchanges within a reasonable time frame. We propose a new variable and constraint generation method, for two recourse policies, based on a cutting plane algorithm. A characterization of this method is given based on two widely used integer programming models for kidney exchange programs. Furthermore, a lifting technique is proposed to obtain stronger cuts and speed up computation. Computational results show that our algorithm is very competitive, improving on the running time of the state-of-the-art method by one order of magnitude. Furthermore, our methods are able to solve a large number of previously unsolved benchmark instances within the same time limit.
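    The integer programming models for kidney exchange mentioned above are typically built over short exchange cycles of the compatibility digraph, where each node is a donor-recipient pair and an arc means the donor of one pair is compatible with the recipient of another. A minimal sketch of that cycle-enumeration step (our own illustration, not the authors' code):

```python
def find_cycles(arcs, max_len=3):
    """Enumerate simple directed cycles of length <= max_len in a
    compatibility digraph; each cycle is a feasible set of crossed
    transplants. Each cycle is reported once, anchored at its
    smallest node."""
    adj = {}
    for u, v in arcs:
        adj.setdefault(u, []).append(v)
    cycles = []

    def extend(path):
        for nxt in adj.get(path[-1], []):
            if nxt == path[0] and len(path) >= 2:
                cycles.append(tuple(path))          # closed a cycle
            elif nxt > path[0] and nxt not in path and len(path) < max_len:
                extend(path + [nxt])                # grow the path

    for start in sorted(adj):
        extend([start])
    return cycles

# pairs 0 and 1 can exchange directly (a 2-cycle); 0 -> 1 -> 2 -> 0 is a 3-cycle
arcs = [(0, 1), (1, 0), (1, 2), (2, 0)]
cycles = find_cycles(arcs)
```

    In the cycle formulation, one binary variable per enumerated cycle then selects a vertex-disjoint set of cycles maximizing (weighted) transplants.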

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room 1715(N1a)

  • Friday, October 21, 3:00 PM
    “New Methods for Anomaly Detection: Run Rules Multivariate Coefficient of Variation Control Charts”
    by Phuong Hien Tran (Dong A University, Vietnam)

    Among anomaly detection methods, control charts are considered important techniques. In practice, however, even under normal behaviour of the data, the standard deviation of a sequence is not stable. In such cases, the coefficient of variation (CV) is a more appropriate measure for assessing system stability. In this paper, we consider the statistical design of Run Rules-based control charts for monitoring the CV of multivariate data. A Markov chain approach is used to evaluate the statistical performance of the proposed charts. The computational results show that the Run Rules-based charts significantly outperform the standard Shewhart control chart. Moreover, by choosing an appropriate scheme, the Run Rules-based charts perform better than the Run Sum control chart for monitoring the multivariate CV.
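    For intuition, here is a toy univariate sketch of the monitored statistic (our own illustration with made-up data and limits; the paper's charts monitor the multivariate CV with run rules, which is considerably more involved): compute the sample CV per subgroup and flag subgroups falling outside the control limits.

```python
import statistics

def sample_cv(subgroup):
    """Coefficient of variation of one subgroup: sample std / mean."""
    return statistics.stdev(subgroup) / statistics.fmean(subgroup)

def out_of_control(subgroups, lcl, ucl):
    """Shewhart-style check: return indices of subgroups whose CV lies
    outside [lcl, ucl]. Run rules would instead signal on patterns of
    consecutive points, improving detection of small shifts."""
    return [k for k, g in enumerate(subgroups)
            if not lcl <= sample_cv(g) <= ucl]

# hypothetical subgroups: the third has the same mean but far more spread
data = [[10.0, 10.2, 9.8], [10.1, 9.9, 10.0], [10.0, 13.0, 7.0]]
alarms = out_of_control(data, lcl=0.001, ucl=0.1)
```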

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room -1/86(N1d)

  • Tuesday, October 18, 3:45 PM
    “Improving Kidney Exchange Programs”
    by Joao Pedro Pedroso (University of Porto)

    Renal diseases affect thousands of patients who, to survive, must undergo dialysis — a costly treatment with many negative implications for their quality of life. As an alternative, patients may enter a waiting list to receive a kidney from a deceased donor; however, waiting times are typically very long. To reduce the waiting time, another alternative in some countries is to find a healthy living donor — usually a relative or an emotionally connected person — who volunteers to donate one of their kidneys. However, in some situations transplantation is not possible due to blood- or tissue-level incompatibility. In these cases, a donor-patient pair may enter a pool of pairs in the same situation, in the hope of finding compatibility through crossed transplants.

    The problem has been studied under different perspectives, but the most commonly used objective is maximizing the number of patients in the pool for which a crossed transplant is possible.

    We propose to replace this objective with that of maximizing the cumulative patient survival time. This model departs from the previous deterministic setting by putting into play a method for predicting survival time based on historical data.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room -186(N1d)

  • Tuesday, October 4, 12:45 PM
    “Automatic algorithm configuration: the irace package”
    by Véronique François and Maren Ulm (HEC Liège)

    The seminar will introduce participants to the irace package, an automated algorithm configuration tool. Modern optimization and machine learning algorithms often require a large number of parameters to be set in order to maximize their performance. Determining the parameter values through automated methods helps increase the transparency of the configuration procedure and enlarges the scope of testing. The talk presents irace’s capabilities and the underlying iterated racing approach. Key statistical elements of the configuration procedure are discussed.
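    The racing idea underlying irace can be sketched conceptually (a toy in Python for illustration only; irace itself is an R package that uses statistical tests such as the Friedman test, and iteratively samples new candidates around survivors, rather than the fixed elimination gap used here): evaluate candidate configurations instance by instance and drop those that are clearly worse.

```python
import statistics

def race(candidates, evaluate, instances, min_surviving=1, gap=1.0):
    """Toy racing loop: run every surviving configuration on each
    instance in turn; drop a configuration once its mean cost trails
    the best mean by more than `gap`. Survivors are returned sorted
    by mean cost (best first)."""
    scores = {c: [] for c in candidates}
    for inst in instances:
        for c in list(scores):
            scores[c].append(evaluate(c, inst))
        means = {c: statistics.fmean(v) for c, v in scores.items()}
        best_mean = min(means.values())
        for c in list(scores):
            if len(scores) > min_surviving and means[c] > best_mean + gap:
                del scores[c]  # eliminated: spend no more runs on it
    return sorted(scores, key=lambda c: statistics.fmean(scores[c]))

# hypothetical tuning task: parameter c with cost (c - 3)^2 plus an
# instance-dependent offset standing in for stochastic run results
best = race(candidates=[0, 1, 3, 5],
            evaluate=lambda c, inst: (c - 3) ** 2 + inst,
            instances=[0.0, 0.5, 1.0])
```

    The point of racing is the early elimination: poor configurations consume evaluations on only a few instances, so the budget concentrates on promising ones.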

    Presentation link
    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room 088(N1d)

  • May 2022, Wednesday 25 (11 am):
    The cumulative vehicle routing problem with time windows: models and algorithm
    by Alejandro Fernandez Gil (Universidad Técnica Federico Santa María)

    The cumulative vehicle routing problem with time windows (CumVRP-TW) is a new vehicle routing variant that aims at minimizing a cumulative cost function while respecting customers’ time window constraints. Mathematical formulations are proposed for soft and hard time window constraints, where for the soft case, violations are permitted subject to penalization. By means of the cumulative objective and the time window consideration, routing decisions incorporate the environmental impact related to CO2 emissions and permit obtaining a trade-off between emissions and time window fulfillment. To solve this new problem variant, we propose a matheuristic approach that combines the features of the Greedy Randomized Adaptive Search Procedure (GRASP) with the exact solution of the optimization model. The solution approaches are tested on instances proposed in the literature as well as on a new benchmark suite proposed for assessing the soft time windows variant. The computational results show that the mathematical formulations provide optimal solutions for scenarios of size 10 and 20 within short computational times. That performance is not observed for medium and large scenarios. In those cases, the proposed matheuristic algorithm is able to report feasible and improved routes within seconds for those instances where CPLEX does not report good results. Finally, we verify that fuel consumption and carbon emissions are reduced when the violation of the time windows is allowed in the case of soft time windows.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room 126

  • May 2022, Wednesday 4 (10 am):
    Unlock Me
    by Christof Defryn (Maastricht University)

    The time needed to traverse a set of river segments on the inland waterways depends not solely on the speed of the vessel under consideration, but is also heavily influenced by the interaction with other vessels near river obstacles (such as locks). During the seminar we will consider the perspective of the lock operators as well as the individual skippers and discuss the impact of collaboration on the efficiency of inland waterway operations. Moreover, we will set the stage for ongoing/future research on the idea of intertemporal collaboration.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room

  • November 2021, Tuesday 23 (12 pm):
    Everything you always wanted to know about the editorial process but were afraid to ask.
    by Yves Crama (HEC Liège)

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room

  • October 2021, Tuesday 14 (1 pm):
    Combining Machine Learning with Decision Optimisation for Adaptive Airline Operations
    by Bruno F. Santos (TU Delft)

    Data availability (and accessibility) and fast reaction to new information are becoming paramount in the airline industry. Airlines and passengers demand data intelligence solutions to update diagnostics and prognostics dynamically, rapidly adapting operations plans in reaction to new information. On the other hand, airline operations are becoming more integrated and complex, and optimal solutions are increasingly hard to compute. By the time traditional optimisation models compute and communicate the ‘optimal solution’, the world has again changed, and new disruptive factors have been added to the table, jeopardising the value of the solution computed. This seminar will discuss some of the challenges currently faced by airlines (and not only), including the need for an adaptive decision process. The discussion will be complemented with the presentation of some of the work being developed at the Air Transport & Operations group at the Delft University of Technology, combining machine learning techniques with optimisation methods. Topics will eventually include the development of mathematical tools for disruption management at the time of operations, managing the acceptance of cargo bookings under items’ size uncertainty, and the definition of aircraft maintenance schedules for a fleet of aircraft following probabilistic aircraft health prognostics.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room 1701

  • October 2021, Tuesday 5 (2 pm):
    Local Search with OscaR.cbls explained to my neighbour
    by Renaud De Landtsheer (CETIC)

    This talk presents the OscaR.cbls engine, an open-source, declarative local search engine for combinatorial optimization. It offers a library of invariants for modelling optimization problems, as well as a library of local search procedures and metaheuristics. OscaR.cbls is developed primarily at CETIC (www.cetic.be).

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room 138

Online seminars

  • February 2021: by Oscar Tellez Sanchez (HEC Liège)

  • March 2021: by Daniel Santos (TU Lisbon)

  • April 2021: by Christine Tawfik (Zuse Institute Berlin)

  • April 2021: by Thomas Hacardiaux (UC Louvain)

Online lunch talks

  • October 2020, Thursday 15:
    SpeakInVR: validation of a virtual audience
    by Elodie Etienne (HEC Liège)

  • November 2020, Thursday 5:
    A polyhedral study of a selection problem
    by Marie Baratto (HEC Liège)

  • November 2020, Thursday 5:
    Using machine learning techniques to automatically model business processes from textual descriptions
    by Julie Jamar (HEC Liège)

  • November 2020, Thursday 26:
    Word embeddings and the topology of language
    by Judicaël Poumay (HEC Liège)

  • November 2020, Thursday 26:
    Container loading with reinforcement learning
    by Florian Peters (HEC Liège)

  • December 2020, Thursday 10:
    Connection corridors to alleviate biodiversity loss: design through mathematical optimisation
    by Elodie Bebronne (HEC Liège)

  • January 2021, Thursday 14:
    A capacitated Vehicle Routing Problem with pickups, time windows and packing constraints
    by Emeline Leloup (HEC Liège)

  • December 2019, Friday 13 (2 pm): by Virginie Lurkin (Eindhoven University of Technology)

    Rail Transfer Hubs Selection in a Metropolitan Area Using Integrated Multimodal Transits

    The world’s population is increasingly city-based, and urban mobility is one of the toughest challenges that cities face today. Yet passengers expect a seamless, multimodal journey experience. As a result, existing mobility systems need to be reshaped to integrate multimodal transit (such as metro with railway). In this work, we propose a new Mixed-Integer Linear Program aimed at designing an integrated multimodal transit system in a metropolitan area. We consider not only the fixed cost of constructing the suburban railway facilities, but also the variable cost of passengers’ travel time. A two-level heuristic based on the Variable Neighborhood Search framework is developed for solving large instances of this problem.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room 220 – Etilux
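
    The trade-off in this model (fixed construction cost versus passengers’ travel time) can be illustrated on a toy instance. All data, names, and the brute-force enumeration below are hypothetical stand-ins for the paper’s MILP and VNS heuristic:

```python
from itertools import combinations

# Hypothetical toy instance: 3 candidate rail hubs, 4 passenger zones.
# All numbers are made up for illustration.
FIXED_COST = [50.0, 40.0, 60.0]   # construction cost per candidate hub
TRAVEL = [                        # travel time from each zone to each hub
    [2.0, 9.0, 7.0],
    [8.0, 3.0, 6.0],
    [7.0, 4.0, 2.0],
    [9.0, 8.0, 3.0],
]

def total_cost(open_hubs):
    """Fixed cost of the open hubs plus each zone's time to its nearest open hub."""
    fixed = sum(FIXED_COST[h] for h in open_hubs)
    travel = sum(min(row[h] for h in open_hubs) for row in TRAVEL)
    return fixed + travel

def best_design():
    """Enumerate every nonempty hub subset (an exact stand-in for the MILP)."""
    hubs = range(len(FIXED_COST))
    subsets = (set(s) for r in hubs for s in combinations(hubs, r + 1))
    return min(subsets, key=total_cost)

print(best_design(), total_cost(best_design()))
```

    On this tiny instance exhaustive enumeration is exact; the Variable Neighborhood Search heuristic in the talk is needed precisely because real metropolitan instances are far too large for such enumeration.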

  • November 2019, Friday 29 (2 pm): by Oscar Tellez (INSA Lyon)

    Optimizing the transport for people with disabilities

    In the context of door-to-door transportation of people with disabilities, service quality considerations, such as maximum ride time and service time consistency, are critical requirements. Together with traditional route planning, these requirements define a new variant of the multi-period dial-a-ride problem called the time-consistent DARP. A perfectly consistent plan assigns each passenger the same service time throughout the planning horizon. Such a plan can be so expensive for medico-social institutions that a compromise between cost and time-consistency objectives must be found. The time-consistent DARP is solved using an epsilon-constraint approach to illustrate the trade-off between these two objectives, where time consistency is measured by the number of different timetables for each user. Each solution of the Pareto front is computed using a matheuristic framework based on a master set partitioning problem and a large neighborhood search procedure.
    This research is part of the NOMAd project.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room 220 – Etilux
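
    The epsilon-constraint idea from this abstract can be sketched on a toy instance: minimise routing cost subject to a cap on the number of distinct timetables, then sweep the cap. Everything below (the data, the two candidate pickup hours, the exhaustive enumeration) is a hypothetical illustration, not the authors’ matheuristic:

```python
from itertools import product

# Hypothetical instance: one user served on 3 days; each day offers two
# candidate pickup hours with day-specific routing costs (hour -> cost).
DAY_OPTIONS = [
    {8: 5.0, 9: 9.0},
    {8: 8.0, 9: 4.0},
    {8: 6.0, 9: 7.0},
]

def evaluate(plan):
    """plan = one pickup hour per day; return (total cost, distinct hours used)."""
    cost = sum(DAY_OPTIONS[day][hour] for day, hour in enumerate(plan))
    return cost, len(set(plan))

def epsilon_constraint_front():
    """For each epsilon, minimise cost subject to 'distinct timetables <= epsilon'."""
    hours_per_day = [sorted(options) for options in DAY_OPTIONS]
    front = {}
    for eps in range(1, len(DAY_OPTIONS) + 1):
        feasible = (evaluate(p) for p in product(*hours_per_day)
                    if len(set(p)) <= eps)
        front[eps] = min(feasible, key=lambda point: point[0])
    return front

print(epsilon_constraint_front())
```

    Here a fully consistent plan (one timetable) costs more than the cost-optimal plan (two timetables), which is exactly the cost-versus-consistency trade-off the Pareto front exposes.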

  • October 2019, Tuesday 21 (11 am): by Giovanni Felici (Istituto di Analisi dei Sistemi ed Informatica, Consiglio Nazionale delle Ricerche – Roma)

    Regularization methods in regression: from Ridge Regression to Mixed Integer Programming

    Feature selection is receiving increasing attention in Machine Learning and Statistics. In the context of linear regression, feature selection is often formulated as a regularization problem, where the regressors are selected with the help of a penalty term on the size of the regression coefficients. This approach has led to the well-established Ridge and Lasso methods. More recently, approaches based on Mixed Integer Programming (MIP) have been introduced to control the size of the active set directly. Although computationally demanding, such approaches exhibit interesting properties and are gaining popularity due to the increasing power of solvers. In this talk I will introduce the basic concepts of regularization in regression and a recent MIP-based method with reduced computational burden and improved performance in the presence of feature collinearity and signals that vary in nature and strength.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège N3- 033
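
    As background for the talk, the closed-form ridge estimate w = (XᵀX + λI)⁻¹Xᵀy can be written in a few lines. The synthetic data below is a hypothetical sketch of how the penalty shrinks coefficients under near-collinearity; it is not the speaker’s MIP method:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate: w = (X^T X + lam * I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Synthetic data with two near-collinear regressors; only the first matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)   # near-duplicate column
y = 3.0 * X[:, 0] + 0.05 * rng.normal(size=200)

w_ols = ridge_fit(X, y, lam=0.0)     # lam = 0 recovers ordinary least squares
w_ridge = ridge_fit(X, y, lam=10.0)  # the penalty shrinks the coefficient norm
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))
```

    Ridge shrinks all coefficients continuously; the MIP approaches discussed in the talk instead constrain how many coefficients may be nonzero.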

  • September 2019, Monday 23 (11:15 am): by Alper Sen (Bilkent University)

    Delegation of Stocking Decisions Under Asymmetric Demand Information

    Shortages are highly costly in retail, but are less of a concern for store managers, as their exact amounts are usually not recorded. In order to align incentives and attain desired service levels, retailers need to design mechanisms in the absence of information on shortage quantities. We consider the incentive design problem of a retailer that delegates stocking decisions to its store managers who are privately informed about local demand. The headquarters knows that the underlying demand process at a store is one of J possible Wiener processes, whereas the store manager knows the specific process. The store manager creates a single order before each period. The headquarters uses an incentive scheme that is based on the end-of-period leftover inventory and on a stock-out occasion at a prespecified inspection time before the end of a period. The problem for the headquarters is to determine the inspection time and the significance of a stock-out relative to leftover inventory in evaluating the performance of the store manager. We formulate the problem as a constrained nonlinear optimization problem in the single period setting and a dynamic program in the multiperiod setting. We show that the proposed “early inspection” scheme leads to perfect alignment when J=2 under mild conditions. In more general cases, we show that the scheme performs strictly better than inspecting stock-outs at the end and achieves near-perfect alignment. Our numerical experiments, using both synthetic and real data, reveal that this scheme clearly outperforms centralized ordering systems that are common practice and can lead to considerable cost reductions.

    Where: HEC-University of Liege – 14, rue Louvrex (N1)- 4000 Liège room 1715