Multi-Objective Iterated Local Search Based on Decomposition for Job Scheduling Problems with Machine Deterioration Effect

This website provides the data used and results obtained in the work "Multi-Objective Iterated Local Search Based on Decomposition for Job Scheduling Problems with Machine Deterioration Effect".

Authors: not disclosed yet.

Article summary: This work addresses an unrelated parallel machine scheduling problem in which the jobs cause deterioration of the machines. This deterioration degrades machine performance, increasing the processing times of the jobs over time. We propose a mixed-integer nonlinear programming model for the problem with two objectives: to minimize the maximum completion time of the jobs (makespan) and to minimize the total delay time of the jobs. We also develop a new approach for extending the Iterated Local Search (ILS) meta-heuristic to multi-objective problems. The Iterated Local Search Based on Decomposition (ILS/D) employs a decomposition strategy similar to that of the Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D), using ILS as the search engine to improve the search process within the structure of the MOEA/D. We compared the ILS/D, MOEA/D, and Non-dominated Sorting Genetic Algorithm II (NSGA-II) algorithms. The results show that ILS/D outperforms MOEA/D and NSGA-II by a significant margin. These findings show that the decomposition strategy is beneficial not only for evolutionary algorithms, but is also an efficient way to extend ILS to multi-objective problems.

Instances

The instances used in this work were proposed by Ruiz-Torres et al. (2015). They are available at https://ruiz-torres.uprrp.edu/nt/.

To support dissemination and reproducibility of the work, we provide below the instances for download, together with their descriptions as originally made available by their authors:

Unrelated Machines Problems

Each text file in the zip archive represents one instance and contains the problem data. The filename has the following format: Ni_n_m-dvar_1_evar_instance.txt

where m = number of machines (5, 10, or 20),
n = number of jobs (100 or 200),
dvar = 1 or 2 (1 = U(1%, 5%) and 2 = U(5%, 10%)),
evar = 1, 2, or 3 (1 indicates CR = 2, 2 indicates CR = 3.5, and 3 indicates CR = 5),
instance = instance number, from 1 to 25.

The text file contains the following items, each separated by a line break:
number of jobs
number of machines
the due dates for jobs 1..n
the processing times for jobs 1..n on machine 1
the processing times for jobs 1..n on machine 2
(and so on, through machine m)
the deterioration for jobs 1..n on machine 1
the deterioration for jobs 1..n on machine 2
(and so on, through machine m)
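As an illustration of how these fields can be read, the sketch below parses one instance file. It assumes all values are whitespace- or newline-separated and appear in the order listed above; the function name and the choice of return types are ours, not part of the original distribution:

```python
def parse_instance(text):
    """Parse one instance file into (due_dates, proc_times, deterioration).

    Assumes the token order described above: n, m, then n due dates,
    then n processing times per machine, then n deterioration values
    per machine. Reading whitespace-separated tokens makes the exact
    line layout irrelevant.
    """
    tokens = text.split()
    pos = 0

    def take(count):
        nonlocal pos
        chunk = tokens[pos:pos + count]
        pos += count
        return chunk

    n = int(take(1)[0])
    m = int(take(1)[0])
    due_dates = [float(v) for v in take(n)]
    proc_times = [[float(v) for v in take(n)] for _ in range(m)]
    deterioration = [[float(v) for v in take(n)] for _ in range(m)]
    return due_dates, proc_times, deterioration
```

For a file on disk, `parse_instance(open(path).read())` returns the three structures, with `proc_times[i][j]` being the processing time of job j on machine i.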

Click here to download "Problems_U.zip".
Click here to download "Small_Instances.zip".

Large Instances Results

Results - Raw Data

We share the problem definition and data, as well as all solutions and results obtained.

The data are organized in a CSV file in which each line represents one execution of an algorithm, with the following columns:

- Instance name
- Algorithm name
- Algorithm repetition number
- Total delay time value
- Makespan value
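A minimal sketch for loading these columns, assuming the column order above and no header row (the function name and dictionary keys are ours):

```python
import csv

def load_runs(rows):
    """Load the raw-results CSV into a list of dicts.

    `rows` is an open file object or any iterable of CSV lines.
    Column order follows the description above: instance name,
    algorithm name, repetition number, total delay time, makespan.
    """
    runs = []
    for row in csv.reader(rows):
        instance, algorithm, rep, delay, makespan = row
        runs.append({
            "instance": instance,
            "algorithm": algorithm,
            "repetition": int(rep),
            "total_delay": float(delay),
            "makespan": float(makespan),
        })
    return runs
```

With a file on disk, `load_runs(open(path, newline=""))` yields one dict per algorithm execution, ready for grouping by instance and algorithm.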

Click here to download "Raw Data - ILSD, NSGAII and MOEAD.zip".



Results - Calculated Hyper-volume

The following table presents the refined data, in which the hyper-volume was calculated from the raw data. For each instance, the reference point was taken as the greatest total delay time and the greatest makespan over the 5 repetitions of that instance. The hyper-volumes were calculated using the R "emoa" library.
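For a bi-objective minimization problem such as this one, the hyper-volume with respect to a reference point can also be computed directly. The sketch below is not the "emoa" code used in the work, just an illustration of the indicator, assuming both objectives (total delay time and makespan) are minimized and the reference point is weakly worse than every point:

```python
def hypervolume_2d(points, ref):
    """Hyper-volume (area) dominated by `points` up to `ref`,
    for two minimized objectives. `points` is a list of
    (total_delay, makespan) pairs."""
    # Keep only non-dominated points.
    front = [p for p in points
             if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)]
    # Sort by the first objective; the second then decreases,
    # so the dominated area decomposes into disjoint rectangles.
    front.sort()
    hv, prev_y = 0.0, ref[1]
    for x, y in front:
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv
```

For example, `hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4))` sums three rectangles of areas 3, 2, and 1, giving 6.0. Larger values indicate a better approximation front.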

Click here to download "Calculated Hyper-volume.zip".