Bin packing problem
In the bin packing problem, items of different volumes must be packed into a finite number of bins or containers each of a fixed given volume in a way that minimizes the number of bins used. In computational complexity theory, it is a combinatorial NP-hard problem. The decision problem is NP-complete.
There are many variations of this problem, such as 2D packing, linear packing, packing by weight, packing by cost, and so on. They have many applications, such as filling up containers, loading trucks with weight capacity constraints, creating file backups in media and technology mapping in field-programmable gate array semiconductor chip design.
The bin packing problem can also be seen as a special case of the cutting stock problem. When the number of bins is restricted to 1 and each item is characterised by both a volume and a value, the problem of maximising the value of items that can fit in the bin is known as the knapsack problem.
Despite the fact that the bin packing problem has an NP-hard computational complexity, optimal solutions to very large instances of the problem can be produced with sophisticated algorithms. In addition, many heuristics have been developed: for example, the first fit algorithm provides a fast but often non-optimal solution, involving placing each item into the first bin in which it will fit. It can be implemented to require Θ(n log n) time, where n is the number of items to be packed. The algorithm can be made much more effective by first sorting the list of items into decreasing order, although this still does not guarantee an optimal solution, and for longer lists it may increase the running time of the algorithm. It is known, however, that there always exists at least one ordering of items that allows first-fit to produce an optimal solution.
A variant of bin packing that occurs in practice is when items can share space when packed into a bin. Specifically, a set of items could occupy less space when packed together than the sum of their individual sizes. This variant is known as VM packing since when virtual machines are packed in a server, their total memory requirement could decrease due to pages shared by the VMs that need only be stored once. If items can share space in arbitrary ways, the bin packing problem is hard to even approximate. However, if the space sharing fits into a hierarchy, as is the case with memory sharing in virtual machines, the bin packing problem can be efficiently approximated.
Another variant of bin packing of interest in practice is the so-called online bin packing. Here the items of different volume are supposed to arrive sequentially and the decision maker has to decide whether to select and pack the currently observed item, or else to let it pass. Each decision is without recall.
Formal statement
In Computers and Intractability, Garey and Johnson list the bin packing problem and define its decision variant as follows.
Instance: A finite set I of items, a size s(i) ∈ Z+ for each i ∈ I, a positive integer bin capacity B, and a positive integer K.
Question: Is there a partition of I into disjoint sets I_1, ..., I_K such that the sum of the sizes of the items in each I_j is B or less?
Note that in the literature an equivalent notation is often used, in which the bin capacity is 1 and each item size is a rational number in (0, 1]. Furthermore, research is mostly interested in the optimization variant, which asks for the smallest possible value of K. A solution is optimal if it has minimal K. The K-value of an optimal solution for a set of items I is denoted by OPT(I), or just OPT if the set of items is clear from the context.
A possible integer linear programming formulation of the problem uses a binary variable y_i that equals 1 if bin i is used, and a binary variable x_ij that equals 1 if item j is put into bin i. The objective is to minimize the sum of the y_i, subject to the capacity constraint of each bin and the requirement that every item is placed in exactly one bin.
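Written out in the common notation (n items of size s(j), bin capacity B; at most n bins can ever be needed, so the bin index i also runs from 1 to n), this formulation reads:

```latex
\begin{aligned}
\text{minimize}   \quad & K = \sum_{i=1}^{n} y_i \\
\text{subject to} \quad & \sum_{j=1}^{n} s(j)\, x_{ij} \le B\, y_i, && i = 1,\dots,n \\
                        & \sum_{i=1}^{n} x_{ij} = 1,               && j = 1,\dots,n \\
                        & y_i \in \{0,1\},\ x_{ij} \in \{0,1\},    && i,j = 1,\dots,n
\end{aligned}
```

The first constraint enforces the bin capacity (and forces y_i = 1 whenever bin i receives any item), and the second assigns each item to exactly one bin.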
Hardness of bin packing
The bin packing problem is strongly NP-complete. This can be proven by reducing the strongly NP-complete 3-partition problem to bin packing. On the other hand, it is solvable in pseudo-polynomial time for each fixed number of bins K, and solvable in polynomial time for each fixed bin capacity B.
Furthermore, a reduction from the partition problem shows that there can be no approximation algorithm with an absolute approximation ratio smaller than 3/2 unless P = NP.
In the online version of the bin packing problem, the items arrive one after another and the decision where to place an item has to be made before knowing the next item or even if there will be another one.
Yao proved in 1980 that there can be no online algorithm with an asymptotic competitive ratio smaller than 3/2. Brown and Liang improved this bound to 1.53635. Afterward, this bound was improved to 1.54014 by van Vliet.
In 2012, this lower bound was again improved by Békési and Galambos to 248/161 ≈ 1.5403.
Approximation algorithms for bin packing
To measure the performance of an approximation algorithm, two approximation ratios are considered in the literature. For a given list of items L, the number A(L) denotes the number of bins used when algorithm A is applied to list L, while OPT(L) denotes the optimum number of bins for this list. The absolute worst-case performance ratio R_A for an algorithm A is the smallest r ≥ 1 such that A(L) ≤ r · OPT(L) for every list L.
On the other hand, the asymptotic worst-case ratio R∞_A only requires this inequality to hold for all lists whose optimum value is sufficiently large.
Additionally, one can restrict the lists to those for which all items have a size of at most α.
For such lists, the bounded-size performance ratios are denoted by R_A(size ≤ α) and R∞_A(size ≤ α).
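In symbols, these are the standard definitions from the worst-case-analysis literature (restated here for reference):

```latex
R_A := \inf\bigl\{\, r \ge 1 \;:\; A(L) \le r \cdot \mathrm{OPT}(L) \ \text{for all lists } L \,\bigr\}
```

```latex
R_A^{\infty} := \inf\bigl\{\, r \ge 1 \;:\; \exists N > 0 \ \text{such that} \
A(L) \le r \cdot \mathrm{OPT}(L) \ \text{for all lists } L \ \text{with } \mathrm{OPT}(L) \ge N \,\bigr\}
```

The asymptotic ratio ignores additive constants, which is why it is the quantity usually reported for bin packing heuristics.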
Approximation algorithms for bin packing can be classified into two categories.
The first class consists of heuristics that consider the items in a given order and place them one by one into bins.
These heuristics are also applicable to the online version of this problem.
The other class contains the offline algorithms.
It contains heuristics as well, but they modify the given list of items e.g. by sorting the items by size.
These algorithms are no longer applicable to the online variant of this problem.
However, they have an improved approximation guarantee compared to their online counterparts while maintaining the advantage of a small time complexity.
This class contains asymptotic approximation schemes as well.
These algorithms have an approximation guarantee of the form (1 + ε)OPT(L) + C for some constant C that may depend on 1/ε.
For arbitrarily small ε these algorithms get arbitrarily close to OPT(L).
However, this comes at the cost of an increased time complexity compared to the heuristical approaches.
Online heuristics
A diverse set of offline and online heuristics for bin packing has been studied by Johnson. He introduced the following two characterizations for online heuristics.
An algorithm is an any-fit algorithm if it fulfills the following property:
A new bin is opened for a considered item, only if it does not fit into an already open bin.
On the other hand, an algorithm is an almost-any-fit algorithm if it has the additional property:
If a bin is the unique bin with the lowest non-zero level, it cannot be chosen unless the item will not fit in any other bin with a non-zero level. He proved that each AAF-algorithm has an approximation guarantee of 17/10, meaning that it has an asymptotic approximation ratio of at most 17/10 and that there are lists for which it has an asymptotic approximation ratio of at least 17/10.
An online algorithm uses k-bounded space if, for each new item, the number of bins in which it may be packed is at most k.
Examples for these algorithms are Next-k-Fit and Harmonic-k.
| Algorithm | Asymptotic approximation guarantee | Time complexity |
|---|---|---|
| Next-fit | 2 | O(n) |
| First-fit | ⌊1.7 · OPT⌋ | O(n log n) |
| Best-fit | ⌊1.7 · OPT⌋ | O(n log n) |
| Worst-Fit | 2 | O(n log n) |
| Almost-Worst-Fit | 17/10 | O(n log n) |
| Refined-First-Fit | 5/3 | O(n) |
| Harmonic-k | ≈ 1.6910 for k → ∞ | O(n log k) |
| Refined Harmonic | 373/228 ≈ 1.636 | O(n) |
| Modified Harmonic | ≈ 1.615 | O(n) |
| Modified Harmonic 2 | ≈ 1.612 | O(n) |
| Harmonic + 1 | ≈ 1.592 | O(n) |
| Harmonic ++ | ≈ 1.589 | O(n) |
Next-Fit (NF)
Next-Fit is a bounded-space AF-algorithm with only one partially filled bin that is open at any time. The algorithm works as follows: it considers the items in the order given by a list. If an item fits inside the currently considered bin, the item is placed inside it. Otherwise, the current bin is closed, a new bin is opened, and the current item is placed inside this new bin.
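The procedure can be sketched in a few lines of Python (an illustration, not from the source; unit bin capacity is assumed and item sizes are given as fractions of a bin):

```python
def next_fit(items, capacity=1.0):
    """Next-Fit: keep a single open bin and close it as soon as an
    item does not fit; the closed bin is never reopened."""
    bins = []                  # finished bins, each a list of item sizes
    current, level = [], 0.0   # the single open bin and its load
    for size in items:
        if level + size > capacity:   # item does not fit: close the bin
            bins.append(current)
            current, level = [], 0.0
        current.append(size)
        level += size
    if current:                # the last open bin also counts
        bins.append(current)
    return bins
```

For example, `next_fit([0.5, 0.1, 0.5, 0.1, 0.5, 0.1])` closes a bin after every pair and returns 3 bins, illustrating how NF wastes the space left in closed bins.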
This algorithm was studied by Johnson in his doctoral thesis in 1973. It has the following properties:
- The running time can be bounded by O(n), where n is the number of items.
- For each list L it holds that NF(L) ≤ 2 · OPT(L), and hence R∞_NF ≤ 2.
- For each N there exists a list L such that OPT(L) = N and NF(L) = 2 · OPT(L) − 2.
- R∞_NF(size ≤ α) = 2 for all α ≥ 1/2.
- R∞_NF(size ≤ α) = 1/(1 − α) for all α ≤ 1/2.
- For each algorithm A that is an AF-algorithm it holds that R∞_A(size ≤ α) ≤ R∞_NF(size ≤ α).
The number of bins used by this algorithm is no more than twice the optimal number of bins. In other words, it is impossible for two bins to be at most half full, because such a possibility implies that at some point exactly one bin was at most half full and a new one was opened to accommodate an item of size at most 1/2. But since the first one has free space of at least 1/2, the algorithm will not open a new bin for any item whose size is at most 1/2. Only after the bin fills to more than 1/2 of its capacity, or if an item with a size larger than 1/2 arrives, may the algorithm open a new bin.
Thus if we have K bins, at least K − 1 bins are more than half full. Therefore, the total size of the items is larger than (K − 1)/2. Because the total size of the items is a lower bound on the optimum value, we get that OPT(L) > (K − 1)/2 and therefore NF(L) = K < 2 · OPT(L) + 1, i.e., NF(L) ≤ 2 · OPT(L).
The family of lists for which R∞_NF = 2 holds is given by L_N := (1/2, 1/(2N), 1/2, 1/(2N), ..., 1/2, 1/(2N)) with 2N items of size 1/2 and 2N items of size 1/(2N). The optimal solution for this list has N bins containing two items of size 1/2 each and one bin with the 2N items of size 1/(2N), i.e., N + 1 bins in total, while the solution generated by NF has 2N bins, each with one item of size 1/2 and one item of size 1/(2N).
Next-''k''-Fit (NkF)
NkF works like NF, but instead of keeping only one bin open, the algorithm keeps the last k bins open and chooses the first of them in which the item fits. For k = 2, NkF delivers results that are improved compared to the results of NF; however, increasing k to constant values larger than 2 improves the algorithm no further in its worst-case behavior.
Moreover, any AAF-algorithm achieves the asymptotic worst-case ratio bound of 17/10 discussed above.
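A Python sketch of the bounded-space variant (illustrative only; unit capacity assumed, and the oldest open bin is evicted first-in, first-out when a (k+1)-th bin would be needed):

```python
def next_k_fit(items, k, capacity=1.0):
    """Next-k-Fit: keep at most k recently opened bins available and
    place each item into the first of them that fits; if none fits,
    close the oldest open bin and open a new one."""
    closed, open_bins = [], []        # open_bins: list of (level, contents)
    for size in items:
        for i, (level, contents) in enumerate(open_bins):
            if level + size <= capacity:          # first open bin that fits
                open_bins[i] = (level + size, contents + [size])
                break
        else:
            if len(open_bins) == k:               # evict the oldest open bin
                closed.append(open_bins.pop(0))
            open_bins.append((size, [size]))
    return [contents for _, contents in closed + open_bins]
```

With k = 1 this degenerates to Next-Fit, which matches the description of NkF as a generalization of NF.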
First-Fit (FF)
First-Fit is an AF-algorithm that processes the items in a given arbitrary order. For each item in the list, it attempts to place the item in the first bin that can accommodate it. If no such bin is found, it opens a new bin and puts the item into this new bin. The first upper bound of FF(L) ≤ 1.7 · OPT(L) + 3 for FF was proven by Ullman in 1971.
In 1972, this upper bound was improved to FF(L) ≤ 1.7 · OPT(L) + 2 by Garey et al.
In 1976, it was improved by Garey et al. to FF(L) ≤ ⌈1.7 · OPT(L)⌉, which is equivalent to FF(L) < 1.7 · OPT(L) + 1 due to the integrality of FF(L) and OPT(L).
The next improvement, by Xia and Tan in 2010, lowered the bound to FF(L) ≤ 1.7 · OPT(L) + 0.7.
Finally, in 2013, this bound was improved to FF(L) ≤ ⌊1.7 · OPT(L)⌋ by Dósa and Sgall.
They also presented an example input list L for which FF(L) matches this bound.
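A direct Python sketch of First-Fit (illustrative; unit capacity assumed). The inner scan makes this O(n²); the O(n log n) bound mentioned earlier requires a balanced tree over the residual capacities:

```python
def first_fit(items, capacity=1.0):
    """First-Fit: place each item into the lowest-indexed bin with
    enough residual space; open a new bin only if no bin fits."""
    bins, levels = [], []      # contents and current load of each bin
    for size in items:
        for i, level in enumerate(levels):
            if level + size <= capacity:   # first bin that accommodates it
                bins[i].append(size)
                levels[i] += size
                break
        else:                              # no existing bin fits
            bins.append([size])
            levels.append(size)
    return bins
```

On the list (1/2, 1/10, 1/2, 1/10, 1/2, 1/10), where Next-Fit needs 3 bins, First-Fit needs only 2, since it can still reach earlier bins.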
Best-Fit (BF)
Best-Fit is an AAF-algorithm similar to First-Fit. Instead of placing the next item in the first bin where it fits, it is placed inside the bin with the maximum load in which it fits. The first upper bound of BF(L) ≤ 1.7 · OPT(L) + 3 for BF was proven by Ullman in 1971.
This upper bound was improved to BF(L) ≤ 1.7 · OPT(L) + 2 by Garey et al.
Afterward, it was improved by Garey et al. to BF(L) ≤ ⌈1.7 · OPT(L)⌉.
Finally, this bound was improved to BF(L) ≤ ⌊1.7 · OPT(L)⌋ by Dósa and Sgall.
They also presented an example input list L for which BF(L) matches this bound.
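The change from First-Fit is a one-line policy swap, sketched below (illustrative; unit capacity assumed):

```python
def best_fit(items, capacity=1.0):
    """Best-Fit: among all bins the item fits in, choose the one with
    the highest current load; open a new bin if none fits."""
    bins, levels = [], []
    for size in items:
        best = -1
        for i, level in enumerate(levels):
            # candidate bin must fit the item; prefer the fullest one
            if level + size <= capacity and (best == -1 or level > levels[best]):
                best = i
        if best == -1:
            bins.append([size])
            levels.append(size)
        else:
            bins[best].append(size)
            levels[best] += size
    return bins
```

Picking the fullest feasible bin keeps the residual space of the other bins large, which is the intuition behind Best-Fit's good practical behavior.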
Worst-Fit (WF)
This algorithm is similar to Best-Fit. Instead of placing the item inside the bin with the maximum load, the item is placed inside the bin with the minimum load. This algorithm can behave as badly as Next-Fit, and will do so on the worst-case list for NF given above. Furthermore, it holds that R∞_WF(size ≤ α) = R∞_NF(size ≤ α).
Since WF is an AF-algorithm, this shows that there exists an AF-algorithm with an asymptotic worst-case ratio of 2.
Almost Worst-Fit (AWF)
AWF is an AAF-algorithm that considers the items in the order of a given list. It tries to fit the next item inside the second most empty open bin.
If it does not fit, it tries the most empty one, and if it does not fit there either, the algorithm opens a new bin.
As AWF is an AAF-algorithm, it has an asymptotic worst-case ratio of 17/10.
Refined-First-Fit (RFF)
The items are categorized into four classes. An item is called an A-piece, B_1-piece, B_2-piece, or X-piece if its size is in the interval (1/2, 1], (2/5, 1/2], (1/3, 2/5], or (0, 1/3], respectively.
Similarly, the bins are categorized into four classes.
Let m be a fixed integer.
The next item i is assigned according to the following rules:
It is placed using First-Fit into a bin in
- Class 1, if i is an A-piece,
- Class 2, if i is a B_1-piece,
- Class 3, if i is a B_2-piece, but not the (mk)th B_2-piece seen so far, for any integer k,
- Class 1, if i is the (mk)th B_2-piece seen so far,
- Class 4, if i is an X-piece.
This algorithm was first presented by Andrew Chi-Chih Yao, who proved that it has an asymptotic approximation guarantee of 5/3 and presented a family of lists for which this ratio is attained.
Harmonic-''k''
The Harmonic-k algorithm partitions the interval (0, 1] of sizes harmonically into k pieces I_j := (1/(j + 1), 1/j] for 1 ≤ j < k and I_k := (0, 1/k], such that their union is (0, 1]. An item i is called an I_j-item if its size lies in I_j.
The algorithm divides the set of empty bins into k infinite classes, one bin type for each item type. A bin of type I_j is only used to pack items of type I_j.
Each bin of type I_j for 1 ≤ j < k can contain exactly j I_j-items. The algorithm now acts as follows:
If the next item i is an I_j-item for 1 ≤ j < k, the item is placed in the open bin of type I_j that contains fewer than j pieces, or a new one is opened if no such bin exists.
If the next item i is an I_k-item, the algorithm places it into the bins of type I_k using Next-Fit.
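These rules can be sketched as follows (an illustration under the assumptions of unit capacity and the interval classes above; one open bin per type is maintained):

```python
def harmonic_k(items, k, capacity=1.0):
    """Sketch of Harmonic-k: an item of size in (1/(j+1), 1/j] is an
    I_j-item (j < k); sizes in (0, 1/k] are I_k-items.  A type-j bin
    (j < k) holds exactly j items; I_k-items are packed by Next-Fit."""
    closed = []        # bins that will receive no further items
    open_bin = {}      # class j -> contents of the open bin of type j
    level = {}         # class j -> item count (j < k) or load (j == k)
    for size in items:
        j = min(k, int(capacity / size))   # size in (1/(j+1), 1/j]  =>  class j
        if j not in open_bin:
            open_bin[j], level[j] = [], 0
        # close the open type-j bin when it is full
        full = level[j] >= j if j < k else level[j] + size > capacity
        if full:
            closed.append(open_bin[j])
            open_bin[j], level[j] = [], 0
        open_bin[j].append(size)
        level[j] += 1 if j < k else size
    return closed + [b for b in open_bin.values() if b]
```

At any moment only the k open bins (one per type) can receive items, which is exactly the k-bounded-space property discussed below.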
This algorithm was first described by Lee and Lee.
It has a time complexity of O(n log k), and at each step there are at most k open bins that can potentially be used to place items, i.e., it is a k-bounded-space algorithm.
Furthermore, they studied its asymptotic approximation ratio.
They defined the sequence σ_1 := 2, σ_{i+1} := σ_i(σ_i − 1) + 1 and the constant h∞ := Σ_{i ≥ 1} 1/(σ_i − 1) ≈ 1.6910, and proved that the asymptotic worst-case ratio of Harmonic-k decreases toward h∞ as k grows.
For k → ∞ it holds that R∞_{Hk} → h∞ ≈ 1.6910.
Additionally, they presented a family of worst-case examples for which this bound is asymptotically attained.
Refined-Harmonic (RH)
Refined-Harmonic combines ideas from the Harmonic-k algorithm with ideas from Refined-First-Fit. It places the larger items similarly to Refined-First-Fit, while the smaller items are placed using Harmonic-k.
The intuition behind this strategy is to reduce the huge waste in bins containing pieces that are just larger than 1/2.
The algorithm classifies the items into the Harmonic-style intervals I_j and, in addition, splits the intervals adjacent to 1/2 into a class I_a of items with size just above 1/2 and a class I_b of items with size just below 1/2; the exact interval endpoints are those chosen by Lee and Lee.
The algorithm places the I_j-items as in Harmonic-k, while it follows a different strategy for the items in I_a and I_b.
There are four possibilities to pack I_a-items and I_b-items into bins.
- An I_a-bin contains only one I_a-item.
- An I_b-bin contains only one I_b-item.
- An I_ab-bin contains one I_a-item and one I_b-item.
- An I_bb-bin contains two I_b-items.
Algorithm Refined-Harmonic-k for a list L = (i_1, ..., i_n):
1. N_a = N_b = N_ab = N_bb = N_b' = N_c = 0
2. If i_j is an I_k-piece
       then use algorithm Harmonic-k to pack it
3. else if i_j is an I_a-item
       then if N_b != 0,
           then pack i_j into any I_b-bin; N_b--; N_ab++;
           else place i_j in a new bin; N_a++;
4. else if i_j is an I_b-item
       then if N_b' = 1
           then place i_j into the I_b'-bin; N_b' = 0; N_bb++;
5.     else if N_bb <= 3N_c
           then place i_j in a new bin and designate it as an I_b'-bin; N_b' = 1
           else if N_a != 0
               then place i_j into any I_a-bin; N_a--; N_ab++; N_c++
               else place i_j in a new bin; N_b++; N_c++
This algorithm was first described by Lee and Lee. They proved that its asymptotic worst-case ratio is at most 373/228 ≈ 1.636.
Offline algorithms
| Algorithm | Approximation guarantee |
|---|---|
| First-fit-decreasing | (11/9) · OPT + 6/9 |
| Modified-first-fit-decreasing | (71/60) · OPT + 1 |
First Fit Decreasing (FFD)
This algorithm works analogously to First-Fit. However, before starting to place the items, they are sorted in non-increasing order of their sizes. This algorithm can be implemented to have a running time of at most O(n log n).
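The sort-then-pack procedure is a one-line extension of First-Fit, sketched here in Python (illustrative; unit capacity assumed, simple O(n²) inner scan):

```python
def first_fit_decreasing(items, capacity=1.0):
    """FFD: sort the items in non-increasing order, then run First-Fit.
    Placing the large items first avoids many of First-Fit's mistakes."""
    bins, levels = [], []
    for size in sorted(items, reverse=True):   # non-increasing order
        for i, level in enumerate(levels):
            if level + size <= capacity:       # first bin that fits
                bins[i].append(size)
                levels[i] += size
                break
        else:
            bins.append([size])
            levels.append(size)
    return bins
```

For example, on the list (1/4, 3/4, 1/2, 1/2) the sorted pass packs {3/4, 1/4} and {1/2, 1/2}, which is optimal with 2 bins.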
In 1973, D.S. Johnson proved in his doctoral thesis that FFD(L) ≤ (11/9) · OPT(L) + 4.
In 1985, B.S. Baker gave a slightly simpler proof and showed that the additive constant is not more than 3. Yue Minyi proved that FFD(L) ≤ (11/9) · OPT(L) + 1 in 1991 and, in 1997, improved this analysis to FFD(L) ≤ (11/9) · OPT(L) + 7/9 together with Li Rongheng. In 2007, György Dósa proved the tight bound FFD(L) ≤ (11/9) · OPT(L) + 6/9 and presented an example for which FFD(L) = (11/9) · OPT(L) + 6/9.
The lower-bound example given by Dósa uses two bin configurations B_1 and B_2 (the exact item sizes are specified in his paper).
If there are 4 copies of B_1 and 2 copies of B_2 in the optimal solution, FFD computes 8 bins in total, while the optimum has only 6 bins.
Therefore, the upper bound is tight, because (11/9) · 6 + 6/9 = 8. This example can be extended to larger values of OPT(L).
Modified First Fit Decreasing (MFFD)
Modified first fit decreasing improves on FFD for items larger than half a bin by classifying items into four size classes: large, medium, small, and tiny, corresponding to items with size > 1/2 bin, > 1/3 bin, > 1/6 bin, and smaller items, respectively. Then it proceeds through five phases:
- Allot a bin for each large item, ordered largest to smallest.
- Proceed forward through the bins. On each: If the smallest remaining medium item does not fit, skip this bin. Otherwise, place the largest remaining medium item that fits.
- Proceed backward through those bins that do not contain a medium item. On each: If the two smallest remaining small items do not fit, skip this bin. Otherwise, place the smallest remaining small item and the largest remaining small item that fits.
- Proceed forward through all bins. If the smallest remaining item of any size class does not fit, skip this bin. Otherwise, place the largest item that fits and stay on this bin.
- Use FFD to pack the remaining items into new bins.
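The five phases above can be sketched directly in Python (an illustration under the assumption of unit capacity; the class thresholds 1/2, 1/3, 1/6 are taken from the text):

```python
def mffd(items, capacity=1.0):
    """Sketch of Modified-First-Fit-Decreasing, following the five phases."""
    large  = sorted((x for x in items if x > capacity / 2), reverse=True)
    medium = sorted(x for x in items if capacity / 3 < x <= capacity / 2)
    small  = sorted(x for x in items if capacity / 6 < x <= capacity / 3)
    tiny   = sorted(x for x in items if x <= capacity / 6)
    # Phase 1: one bin per large item, largest first.
    bins = [[x] for x in large]
    has_medium = [False] * len(bins)
    # Phase 2: forward pass; per bin, add the largest medium item that fits,
    # skipping the bin if even the smallest medium item does not fit.
    for i, b in enumerate(bins):
        if medium and sum(b) + medium[0] <= capacity:
            j = max(t for t in range(len(medium)) if sum(b) + medium[t] <= capacity)
            b.append(medium.pop(j))
            has_medium[i] = True
    # Phase 3: backward pass over bins without a medium item; add the
    # smallest small item plus the largest small item that then fits.
    for i in range(len(bins) - 1, -1, -1):
        if has_medium[i] or len(small) < 2:
            continue
        b = bins[i]
        if sum(b) + small[0] + small[1] > capacity:
            continue
        b.append(small.pop(0))
        j = max(t for t in range(len(small)) if sum(b) + small[t] <= capacity)
        b.append(small.pop(j))
    # Phase 4: forward pass; greedily top up each bin with the largest
    # remaining item that fits, while the smallest remaining item fits.
    rest = sorted(medium + small + tiny, reverse=True)
    for b in bins:
        while rest and sum(b) + rest[-1] <= capacity:
            j = next(t for t in range(len(rest)) if sum(b) + rest[t] <= capacity)
            b.append(rest.pop(j))
    # Phase 5: FFD on whatever remains, into new bins (rest is sorted).
    extra, levels = [], []
    for size in rest:
        for i, level in enumerate(levels):
            if level + size <= capacity:
                extra[i].append(size)
                levels[i] += size
                break
        else:
            extra.append([size])
            levels.append(size)
    return bins + extra
```

The sketch returns a feasible packing; the 71/60 guarantee from the table depends on the precise tie-breaking of the original algorithm, which this simplified version only approximates.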
Asymptotic approximation schemes
The first asymptotic approximation scheme was described by Fernandez de la Vega and Lueker. Its running time is polynomial in n, with a multiplicative constant C(ε) depending only on ε, and it generates a solution with at most (1 + ε)OPT(L) + 1 bins. The time complexity of this algorithm was later improved by Karmarkar and Karp to be polynomial in n and 1/ε.
Exact algorithm
Martello and Toth developed an exact algorithm for the 1-D bin-packing problem, called MTP. A faster alternative is the Bin Completion algorithm proposed by Korf in 2002 and later improved. A further improvement was presented by Schreiber and Korf in 2013. The new Improved Bin Completion algorithm is shown to be up to five orders of magnitude faster than Bin Completion on non-trivial problems with 100 items, and outperforms the BCP algorithm by Belov and Scheithauer on problems that have fewer than 20 bins as the optimal solution. Which algorithm performs best depends on problem properties like the number of items, the optimal number of bins, the unused space in the optimal solution, and value precision.