The goal of this work is to propose a new hybrid algorithm for solving integer programming problems by combining the bat algorithm with direct search methods. The proposed algorithm is named the hybrid bat direct search algorithm (HBDS). In HBDS, global diversification and local intensification are balanced: the bat algorithm already has a good capability for both intensification and diversification, and the intensification ability of the suggested algorithm is further increased by employing the pattern search method as a local search method instead of the random walk used in the classic bat algorithm. In the final stage of the algorithm, the Nelder-Mead method is used to improve the best solution found by the bat and pattern search phases, instead of running the algorithm for more iterations without any improvement in the fitness value. The performance of the HBDS algorithm is examined on 7 integer programming problems and compared with 10 benchmark algorithms for solving integer programming problems. The computational results show that HBDS is a promising algorithm and outperforms the other algorithms in most cases.

Bat algorithm, Direct search methods, Pattern search, Nelder-Mead method, Integer programming problems

Integer programming (IP) problems arise in nearly every area of applied operations research and mathematical programming. Many real-life applications, such as scheduling problems, VLSI (very large scale integration) circuit design, engineering design, warehouse location, and robot path planning [1-3], can be formulated as IP problems.

On one hand, traditional integer programming methods such as dynamic programming or branch and bound have a high computational cost, because they examine a search tree with hundreds or more nodes when large-scale real-life problems are considered. On the other hand, heuristic and metaheuristic methods can be applied to solve integer programming problems. Swarm intelligence (SI) algorithms are recent meta-heuristic algorithms inspired by the behavior of groups of social organisms. These algorithms are used to solve global optimization problems and their applications; examples include ant colony optimization (ACO) [4,5], artificial bee colony [6], particle swarm optimization (PSO) [7-9], bacterial foraging [10], the bat algorithm [11,12], bee colony optimization (BCO) [13], wolf search [14], cat swarm [15], cuckoo search [16], the firefly algorithm [17], fish swarm/school [18], etc.

These algorithms have been widely adopted to solve unconstrained and constrained optimization problems and their applications; nevertheless, few SI algorithms have been used to solve integer programming problems [19-22]. There have been some efforts to apply meta-heuristic algorithms to integer programming problems, such as social spider optimization [23], the gravitational search algorithm [20], particle swarm optimization [7,24], the firefly algorithm [25,26], the cuckoo search algorithm [27-29], the artificial bee colony algorithm [30-32], the ant colony algorithm [33], and simulated annealing [19].

The bat algorithm (BA) is a recent population-based algorithm inspired by the echolocation behavior of microbats [12]. BA is capable of balancing global diversification and local intensification during the search process. Because of its powerful performance, BA has been used by many researchers for diverse applications. For example, Lin et al. [34] used a chaotic bat algorithm integrating Lévy flights and chaotic maps for parameter estimation in dynamic biological systems. Zhang and Wang [35] improved the diversity of solutions by using mutation with the bat algorithm for image matching. Yang [36] applied BA to multi-objective optimization and benchmark engineering problems. Komarasamy and Wahi [37] integrated K-means and the bat algorithm (KMBA) for efficient clustering. Nakamura et al. [38] developed a discrete version of the bat algorithm for classification and feature selection problems. Xie et al. [39] presented a variant of the bat algorithm integrating Lévy flights and a differential operator to solve function optimization problems. In addition, Wang and Guo [40] combined harmony search with the bat algorithm, producing a hybrid bat algorithm for the numerical optimization of benchmark functions.

The purpose of this paper is to overcome the slow convergence of the BA and to avoid trapping in local minima. To address these two issues, we propose a new hybrid of the bat algorithm with direct search methods to solve integer programming problems [41]. The proposed algorithm is named the hybrid bat direct search algorithm (HBDS). In HBDS, pattern search is employed as a local search method to exploit the region around the best solution found at each iteration, and in the final stage of the algorithm the Nelder-Mead method is invoked to enhance the best solution obtained from the bat and pattern search phases. Using the Nelder-Mead method can accelerate the search and avoid running the algorithm for more iterations without any improvement in the results.

The rest of this paper is structured as follows. In Section 4, the integer programming problem, the applied direct search methods, and the classic BA are described. In Section 5, the main concepts of the proposed HBDS algorithm are given. The numerical experiments and results are presented in Section 6. Finally, the conclusion and future work are given in Section 7.

In this section and its subsections, the definitions of the integer programming problems are presented and an overview of the BA and the pattern search method is given as follows.

An integer programming problem is a mathematical optimization problem in which all of the variables are restricted to be integers. The unconstrained integer programming problem can be defined as follows.

$\min f(x), \quad x \in S \subseteq \mathbb{Z}^{n} \qquad (1)$

where $\mathbb{Z}$ is the set of integers and $S$ is a not necessarily bounded set.
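As an illustration (not part of the original formulation), the following Python sketch shows the usual way population-based methods handle problem (1): a continuous candidate is rounded to the nearest integer lattice point before evaluation. The objective `fi1` is a hypothetical placeholder; the actual benchmark functions FI1-FI7 are defined in Table 4.

```python
import numpy as np

def fi1(x):
    # Hypothetical placeholder objective (sum of absolute values);
    # the benchmark functions FI1-FI7 used in the paper are given in Table 4.
    return np.sum(np.abs(x))

def evaluate_integer(f, x_continuous):
    # Round the continuous candidate to the nearest integer lattice point
    # before evaluation, so the search effectively operates over Z^n.
    x_int = np.round(x_continuous).astype(int)
    return f(x_int), x_int

value, point = evaluate_integer(fi1, np.array([1.3, -2.7, 0.2]))
print(point, value)  # [ 1 -3  0] 4
```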

The pattern search (PS) method was introduced by the authors in [42]. It is a direct search method applied to solve global optimization problems; direct search methods require no information about the gradient of the objective function. The PS method has two types of moves: exploratory moves and pattern moves. In an exploratory move, a coordinate search is performed around a chosen solution with a step length ∆, as in Algorithm 1. The exploratory move is considered successful if the function value of the new solution is better than that of the current solution; otherwise, the step length is reduced. If the exploratory move is successful, a pattern move is performed to produce the iterate solution. If the iterate solution is better than the current solution, an exploratory move is performed around the iterate solution and the iterate solution is accepted as the new solution. Otherwise, the pattern move is rejected and the step length ∆ is decreased. The process is repeated until the stopping criteria are satisfied. The Hooke and Jeeves (HJ) pattern search algorithm and its main steps are outlined in Algorithm 2. The parameters of Algorithms 1 and 2 are listed in Table 1.

The main steps of the Nelder-Mead (NM) algorithm [43] are presented as follows. The NM algorithm uses four scalar parameters: the coefficients of reflection *ρ*, expansion *χ*, contraction *τ*, and shrinkage *φ* (Table 1), where *ρ* > 0, *χ* > 1, 0 < *τ* < 1, and 0 < *φ* < 1. The main steps of the NM algorithm are shown in Algorithm 3.

Algorithm 1: Exploratory search

INPUT: Get the values of *x ^{0}, k, ∆_{0}, d*

OUTPUT: New base point *x ^{k}*

1. Set *i* = 1

2. Set *k* = 1

3. repeat

4. Set ${x}_{i}^{k}=\text{\hspace{0.17em}}{x}_{i}^{k-1}+{\Delta}_{k-1}\text{\hspace{0.17em}}{x}_{i}^{k-1}$

5. if $f(x_{i}^{k}) < f(x_{i}^{k-1})$ then

6. Set ${x}_{i}^{k+1}\text{=}{x}_{i}^{k}$

7. end if

8. Set *i* = *i* + 1

9. Set *k* = *k* + 1

10. until *i* > *d*
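A minimal Python sketch of the exploratory move in Algorithm 1 follows. Probing each coordinate in the +∆ and -∆ directions is our assumption, since Algorithm 1 states only the update rule for a successful trial; the function and variable names are illustrative.

```python
import numpy as np

def exploratory_search(f, base, delta):
    """Coordinate-wise exploratory move around `base` with step `delta`.

    Sketch: each coordinate is probed in the +delta and -delta directions,
    and a move is kept only if it improves the objective.
    Returns the (possibly improved) point, its value, and a success flag.
    """
    x = np.array(base, dtype=float)
    f_base = f(x)
    fx = f_base
    for i in range(len(x)):
        for step in (delta, -delta):
            trial = x.copy()
            trial[i] += step
            f_trial = f(trial)
            if f_trial < fx:        # keep the improving move for coordinate i
                x, fx = trial, f_trial
                break
    return x, fx, fx < f_base
```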


### Overview of the Bat algorithm

In this section, an overview of the main concepts and structure of the BA is given.

Bat algorithm (BA) is a population based metaheuristic algorithm, was developed by Xin-She Yang in 2010 [12]. BA is based on the echolocation of microbats, which use a type of sonar (echolocation) to detect prey and avoid obstacles in the dark. The main advantage of the BA is that it can provide a fast convergence at a very initial stage by switching from diversification to intensification, however, switching from diversification to intensification quickly may lead to stagnation after some initial stage.

Based on the bat characteristics, Xin-She Yang developed the bat algorithm with the following rules.

• All bats can distinguish between prey and barriers/obstacles by using echolocation to sense distance.

Algorithm 2: The basic pattern search algorithm.

INPUT: Get the values of *x*.

OUTPUT: Best solution *x ^{∗}*.

1. Set the values of the initial values of the mesh size Δ_{0}, reduction factor of mesh size *σ* and termination parameter *ε*

2. Set *k* = 1 {Parameter setting}

3. Set the starting base point *x ^{k-1}*{Initial solution}

4. repeat

5. Perform exploratory search as shown in Algorithm 1

6. if exploratory move success then

7. Go to 16

8. else

9. if $\| \Delta_{k} \| < \epsilon$ then

10. Stop the search and the current point is x^{∗}

11. else

12. Set Δ_{k} = σ Δ_{k-1} {Incremental change reduction}

13. Go to 5

14. end if

15. end if

16. Perform pattern move, where $x_{p}^{k+1} = x^{k} + \left(x^{k} - x^{k-1}\right)$

17. Perform exploratory move with *x _{p}* as the base point

18. Set *x ^{k+1}* equal to the output result exploratory move

19. if $f({x}_{p}^{k+1})\text{\hspace{0.17em}}<\text{\hspace{0.17em}}f({x}^{k})$ then

20. Set $x^{k-1} = x^{k}$

21. Set $x^{k} = x^{k+1}$ {New base point}

22. Go to 16

23. else

24. Go to 9 {The pattern move fails}

25. end if

26. Set *k* = *k* + 1

27. until *k* > *m*
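A hedged sketch of the Hooke and Jeeves loop in Algorithm 2 is given below, reusing the `exploratory_search` helper from the previous sketch. The default values of `delta0`, `sigma`, `eps`, and `m` are illustrative placeholders for the parameters in Table 1.

```python
import numpy as np

def pattern_search(f, x0, delta0=1.0, sigma=0.5, eps=1e-6, m=50):
    """Sketch of the Hooke-Jeeves pattern search (Algorithm 2)."""
    x_prev = np.array(x0, dtype=float)
    delta = delta0
    x, fx, success = exploratory_search(f, x_prev, delta)
    for _ in range(m):
        if success:
            # Pattern move: extrapolate along the last improving direction.
            x_pattern = x + (x - x_prev)
            x_new, f_new, _ = exploratory_search(f, x_pattern, delta)
            if f_new < fx:
                x_prev, x, fx = x, x_new, f_new   # accept the new base point
                continue
            success = False                        # the pattern move failed
        if delta < eps:
            break                                  # mesh fine enough: stop
        delta *= sigma                             # incremental change reduction
        x, fx, success = exploratory_search(f, x, delta)
    return x, fx
```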

Algorithm 3: The Nelder-Mead Algorithm.

1. Let $x_{i}$ denote the list of vertices in the current simplex, $i = 1, \ldots, n+1$.

2. Order. Order and re-label the $n + 1$ vertices from lowest function value $f(x_{1})$ to highest function value $f(x_{n+1})$ so that $f(x_{1}) \le f(x_{2}) \le \ldots \le f(x_{n+1})$.

3. Reflection. Compute the reflected point $x_{r}$ by $x_{r} = \bar{x} + \rho\left(\bar{x} - x_{n+1}\right)$, where $\bar{x}$ is the centroid of the $n$ best points, $\bar{x} = \sum_{i} x_{i} / n$, $i = 1, \ldots, n$.
if $f(x_{1}) \le f(x_{r}) < f(x_{n})$ then Replace $x_{n+1}$ with the reflected point $x_{r}$ and go to Step 7. end if

4. Expansion.
if $f(x_{r}) < f(x_{1})$ then Compute the expanded point $x_{e}$ by $x_{e} = \bar{x} + \chi\left(x_{r} - \bar{x}\right)$. end if
if $f(x_{e}) < f(x_{r})$ then Replace $x_{n+1}$ with $x_{e}$ and go to Step 7. else Replace $x_{n+1}$ with $x_{r}$ and go to Step 7. end if

5. Contraction.
if $f(x_{n}) \le f(x_{r}) < f(x_{n+1})$ then Compute $x_{oc} = \bar{x} + \tau\left(x_{r} - \bar{x}\right)$. {Outside contraction} end if
if $f(x_{oc}) \le f(x_{r})$ then Replace $x_{n+1}$ with $x_{oc}$ and go to Step 7. else Go to Step 6. end if
if $f(x_{r}) \ge f(x_{n+1})$ then Compute $x_{ic} = \bar{x} + \tau\left(x_{n+1} - \bar{x}\right)$. {Inside contraction} end if
if $f(x_{ic}) \le f(x_{n+1})$ then Replace $x_{n+1}$ with $x_{ic}$ and go to Step 7. else Go to Step 6. end if

6. Shrink. Evaluate the $n$ new vertices $x'_{i} = x_{1} + \varphi\left(x_{i} - x_{1}\right)$, $i = 2, \ldots, n+1$. Replace the vertices $x_{2}, \ldots, x_{n+1}$ with the new vertices $x'_{2}, \ldots, x'_{n+1}$.

7. Stopping Condition. Order and re-label the vertices of the new simplex as $x_{1}, x_{2}, \ldots, x_{n+1}$ such that $f(x_{1}) \le f(x_{2}) \le \ldots \le f(x_{n+1})$.
if $f(x_{n+1}) - f(x_{1}) < \varepsilon$ then Stop, where $\varepsilon > 0$ is a small predetermined tolerance. else Go to Step 3. end if
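As an aside, a reference implementation of the simplex method in Algorithm 3 is available in SciPy, so the final polishing step can be sketched as follows; the tolerance and iteration values are illustrative, not the settings used in the paper, and rounding the result back onto the integer lattice is our assumption.

```python
import numpy as np
from scipy.optimize import minimize

def nelder_mead_polish(f, x_best, max_iter=200):
    """Polish `x_best` with the Nelder-Mead simplex method (Algorithm 3)
    and round the result back onto the integer lattice."""
    res = minimize(f, np.asarray(x_best, dtype=float), method="Nelder-Mead",
                   options={"maxiter": max_iter, "xatol": 1e-6, "fatol": 1e-6})
    x_int = np.round(res.x).astype(int)
    return x_int, f(x_int)
```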

• Each bat moves randomly with velocity $v_{i}$ at position $x_{i}$ with a frequency $f_{min}$, varying loudness $A_{0}$, and pulse emission rate $r$.

• The loudness is assumed to vary from a large value $A_{0}$ to a minimum value $A_{min}$.
The BA is a population-based method, where the population consists of bats (solutions). Each bat (solution) in the population moves randomly with velocity $v_{i}$ and position $x_{i}$. Each bat is also randomly assigned a frequency drawn uniformly from $[f_{min}, f_{max}]$. The position of each bat in the population is updated as shown in the following equations.
$f_{i} = f_{min} + (f_{max} - f_{min})\beta, \qquad (2)$

$v_{i}^{t} = v_{i}^{t-1} + \left(x_{i}^{t-1} - x^{*}\right) f_{i}, \qquad (3)$

$x_{i}^{t} = x_{i}^{t-1} + v_{i}^{t}, \qquad (4)$

where $\beta \in [0, 1]$ is a random vector drawn from a uniform distribution and $x^{*}$ is the current global best solution.
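Equations (2)-(4) translate directly into vectorized code. The sketch below assumes the population is stored as NumPy arrays of shape (P, d); the array layout and function name are our convention, not the paper's.

```python
import numpy as np

def bat_move(x, v, x_star, f_min, f_max, rng):
    """One application of equations (2)-(4) for a whole population of bats.

    x, v   : (P, d) arrays of positions and velocities
    x_star : (d,) current global best position
    """
    beta = rng.random((x.shape[0], 1))       # beta ~ U[0, 1]
    freq = f_min + (f_max - f_min) * beta    # eq. (2)
    v_new = v + (x - x_star) * freq          # eq. (3)
    x_new = x + v_new                        # eq. (4)
    return x_new, v_new

rng = np.random.default_rng(0)
x = rng.integers(-5, 6, size=(20, 5)).astype(float)
v = np.zeros_like(x)
x_new, v_new = bat_move(x, v, x[0], f_min=0.0, f_max=5.0, rng=rng)
```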

The loudness $A_{i}$ and the pulse emission rate $r_{i}$ are essential for letting the algorithm switch between the diversification and intensification processes. When a bat has found its prey, the loudness decreases and the pulse emission rate increases. The BA starts with an initial loudness $A_{0}$ and an initial pulse emission rate $r_{0}$; these values are updated as shown in the following equations.

$A_{i}^{(t+1)} = \alpha A_{i}^{(t)} \qquad (5)$

$r_{i}^{(t)} = r_{i}^{(0)} \left[1 - \exp(-\gamma t)\right] \qquad (6)$

where $\alpha \in [0, 1]$ and $\gamma > 0$ are constants; the parameter $\alpha$ plays a role similar to the cooling factor in the simulated annealing algorithm.
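Equations (5) and (6) can be sketched as a single helper; the default values α = 0.9 and γ = 0.9 follow the parameter settings reported later (Table 2), while the function name is illustrative.

```python
import numpy as np

def update_loudness_and_rate(A, r0, t, alpha=0.9, gamma=0.9):
    """Apply equations (5) and (6): the loudness decays geometrically and the
    pulse emission rate grows toward its initial value r0 as t increases."""
    A_new = alpha * A                        # eq. (5)
    r_new = r0 * (1.0 - np.exp(-gamma * t))  # eq. (6)
    return A_new, r_new
```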

The main steps of the classic bat algorithm are shown in Algorithm 4 and can be summarized in the following steps.

Step 1: The algorithm starts by setting the initial values of its parameters and setting zero to the main iteration counter. (Lines 1-2)

Step 2: The initial population is randomly generated by generating the initial position $x^{0}$ and the initial velocity $v^{0}$ for each bat (solution) in the population. The initial frequency $f_{i}$ is assigned to each solution in the population, where $f_{i}$ is drawn randomly from $[f_{min}, f_{max}]$. The initial population is evaluated by calculating the objective function $f(x^{0})$ for each solution, and the values of the pulse rate $r_{i}$ and the initial loudness $A_{i}$ are set, where $r_{i} \in [0, 1]$ and $A_{i}$ varies from a large value $A_{0}$ to $A_{min}$. (Lines 3-9)
_{min}
Step 3: The new population is generated by adjusting the position $x_{i}$ and the velocity $v_{i}$ for each solution in the population as in (2), (3), and (4). (Lines 12-13)
Step 4: The new population is evaluated by calculating the objective function for each solution and the best solution *x ^{∗}* is selected from the population. (Lines 14-15)

Step 5: The local search method is applied in order to refine the best found solution at each iteration. (Lines 16-19)

Step 6: The new solution is accepted with some probability depending on the parameter $A_{i}$; the pulse emission rate is increased and the loudness is decreased. The values of $A_{i}$ and $r_{i}$ are updated as in (5) and (6). (Lines 21-24)

Step 7: The new population is evaluated, and the best solution is selected from the population. The operations are repeated until the termination criteria are satisfied and the overall best solution is produced. (Lines 25-28)

The main steps of the suggested HBDS algorithm are given in Algorithm 5 and the description of the suggested algorithm can be summarized as follows.

Algorithm 4: The bat algorithm

1. Set the initial values of the minimum frequency $f_{min}$, maximum frequency $f_{max}$, population size $P$, the loudness constant $\alpha$, the pulse emission rate constant $\gamma$, the initial loudness $A_{0}$, the minimum loudness $A_{min}$, the initial pulse emission rate $r_{0}$, and the maximum number of iterations $Maxitr$. {Parameter initialization}

2. Set t = 0. {Counter initialization}

3. for (i = 1; i < P ; i++) do

4. Generate the initial bat population $x_{i}^{t}$ randomly.

5. Generate the initial bat velocities $v_{i}^{t}$ randomly.

6. Assign the initial frequency $f_{i}$ to each $x_{i}^{t}$.

7. Evaluate the initial population by calculating the objective function $f(x_{i}^{t})$ for each solution in the population.

8. Set the initial values of the pulse rates $r_{i}$ and the loudness $A_{i}$. {Population initialization}

9. end for

10. repeat

11. t = t + 1

12. Generate new bat solutions $x_{i}^{t}$ by adjusting the frequency as in (2) and updating the positions as in (4).

13. Update the bat velocities $v_{i}^{t}$ as in (3).

14. Evaluate the new population by calculating the objective function $f(x_{i}^{t})$ for each solution in the population.

15. Select the best solution $x^{*}$ from the population.

16. if rand > $r_{i}$ then

17. Select a solution among the best solutions.

18. Generate a local search solution around the selected best solution.

19. end if

20. Generate a random new solution.

21. if rand < $A_{i}$ and $f(x_{i}^{t}) < f(x^{*})$ then

22. Accept the new solutions.

23. Increase the pulse emission rate $r_{i}$ and reduce the loudness $A_{i}$.

24. end if

25. Evaluate the new population by calculating the objective function $f(x_{i}^{t})$ for each solution in the population.

26. Rank the population and select the best solution x∗ from the population.

27. until (t > Maxitr) {Termination criteria are satisfied}

28. Produce the best solution.

Algorithm 5: The HBDS algorithm

1. Set the initial values of the minimum frequency $f_{min}$, the minimum loudness $A_{min}$, the pulse emission rate constant $\gamma$, the maximum frequency $f_{max}$, the initial loudness $A_{0}$, the population size $P$, the loudness constant $\alpha$, the maximum number of iterations $Maxitr$, and the initial pulse emission rate $r_{0}$. {Parameter initialization}

2. Set t = 0. {Counter initialization}

3. for (i = 1; i < P ; i++) do

4. Generate the initial bat population $x_{i}^{t}$ randomly.

5. Generate the initial bat velocities $v_{i}^{t}$ randomly.

6. Assign the initial frequency $f_{i}$ to each $x_{i}^{t}$.

7. Evaluate the initial population by calculating the objective function $f(x_{i}^{t})$ for each solution in the population.

8. Set the initial values of the pulse rates $r_{i}$ and the loudness $A_{i}$. {Population initialization}

9. end for

10. repeat

11. t = t + 1.

12. Generate new bat solutions $x_{i}^{t}$ by adjusting the frequency as in (2) and updating the positions as in (4).

13. Update the bat velocities $v_{i}^{t}$ as in (3).

14. Evaluate the new population by calculating the objective function $f(x_{i}^{t})$ for each solution in the population.

15. Choose the best solution $x^{*}$ from the population.

16. if (rand > $r_{i}$) then

17. Choose a solution among the best solutions.

18. Use the pattern search method in Algorithm 2 on the best solution from the best solutions list. {exploitation process}

19. end if

20. Generate a random new solution.

21. if (rand < $A_{i}$) and $f(x_{i}^{t}) < f(x^{*})$ then

22. Accept the new solutions.

23. Reduce the loudness $A_{i}$ and increase the pulse emission rate $r_{i}$ as in (5) and (6).

24. end if

25. Evaluate the new population by calculating the objective function $f(x_{i}^{t})$ for each solution in the population.

26. Rank the population and choose the best solution x∗ from the population.

27. until (t > Maxitr) {Stopping criteria are satisfied.}

28. Apply the Nelder-Mead method to the $N_{elite}$ best solutions, as shown in Algorithm 3. {Final intensification}

Step 1: The parameters of the minimum frequency fmin, maximum frequency fmax, population size P, the loudness constant α, the rate of pulse emission constant γ, the initial loudness A0, the minimum loudness Amin, the initial rate of pulse emission r0, the maximum number of iterations Maxitr and the initial iteration counter are set to their initial values. (Lines 1-2)

Step 2: The initial population is randomly generated by generating the initial position $x^{0}$ and the initial velocity $v^{0}$ for each bat (solution) in the population. The initial frequency $f_{i}$ is assigned to each solution in the population. The initial population is evaluated by calculating the objective function $f(x_{i}^{0})$ for each solution, and the values of the pulse rate $r_{i}$ and the initial loudness $A_{i}$ are set. (Lines 3-9)

Step 3: The new population is generated by adjusting the position xi and the velocity vi for each solution in the population as in (2), (3) and (4). (Lines 12-13)

Step 4: The new population is evaluated by calculating the objective function for each solution, and the best solution $x^{*}$ is selected from the population. (Lines 14-15)

Step 5: The pattern search method (Algorithm 2) is applied as a local search method to improve the best solution found at each iteration. (Lines 16-19)

Step 6: The new solution is accepted with some probability depending on the parameter $A_{i}$; the pulse emission rate is increased and the loudness is decreased as in (5) and (6). (Lines 21-24)

Step 7: The new population is calculated, and the best solution is chosen from the population. The operations are repeated until stopping criteria are satisfied.

Step 8: The Nelder-Mead method is used on the best found solution in the previous stage as a final intensification process in order to speed up the search and avoid running the algorithm with more iterations without any enhancement. (Line 28)
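Putting the pieces together, a condensed sketch of the HBDS loop (Algorithm 5) could look as follows. It reuses the `bat_move`, `pattern_search`, `update_loudness_and_rate`, and `nelder_mead_polish` helpers sketched earlier, keeps a single scalar loudness and pulse rate, and omits bookkeeping details such as ranking and the elite list; all of these simplifications are our assumptions.

```python
import numpy as np

def hbds(f, dim, lower, upper, pop=20, max_iter=None, f_min=0.0, f_max=5.0,
         A0=1.0, r0=0.9, alpha=0.9, gamma=0.9, seed=0):
    """Condensed sketch of the HBDS main loop (Algorithm 5)."""
    rng = np.random.default_rng(seed)
    max_iter = max_iter or 2 * dim                    # Maxitr = 2d (see parameter settings)

    def fint(z):
        return f(np.round(z))                         # evaluate candidates on the integer lattice

    x = rng.integers(lower, upper + 1, size=(pop, dim)).astype(float)
    v = np.zeros_like(x)
    fit = np.array([fint(xi) for xi in x])
    A, r = A0, 0.0
    for t in range(1, max_iter + 1):
        best_i = int(np.argmin(fit))
        x_new, v = bat_move(x, v, x[best_i], f_min, f_max, rng)   # eqs. (2)-(4)
        fit_new = np.array([fint(xi) for xi in x_new])
        accept = (rng.random(pop) < A) & (fit_new < fit.min())    # loudness-gated acceptance
        x[accept], fit[accept] = x_new[accept], fit_new[accept]
        if rng.random() > r:
            # Exploitation: refine the current best with pattern search (Algorithm 2).
            best_i = int(np.argmin(fit))
            x[best_i], fit[best_i] = pattern_search(fint, x[best_i])
        A, r = update_loudness_and_rate(A, r0, t, alpha, gamma)   # eqs. (5)-(6)
    # Final intensification: Nelder-Mead on the overall best solution (Algorithm 3).
    return nelder_mead_polish(fint, x[int(np.argmin(fit))])
```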

In this section, the efficiency of the HBDS algorithm is demonstrated by reporting its general performance on various benchmark functions and comparing its results with those of other algorithms. In the following subsections, the parameter settings of the proposed algorithm and the properties of the applied test functions are outlined. The performance analysis of the proposed algorithm is then given, together with the comparative results against the other algorithms (Table 2).

The parameters of the HBDS algorithm and their assigned values are listed in Table 2. These values are based on common settings in the literature.

Population size P: The experimental tests show that the best population size is P = 20; increasing this number does not improve the results but increases the number of function evaluations.

Frequency parameter f: Bat movement relies on the value of the frequency parameter f. In the HBDS algorithm, the quality of the solution is associated with the value of f. The experimental tests show that the best values are fmin = 0 and fmax = 5.

Loudness parameters A, α: The loudness parameter A is one of the most essential parameters in the BA, since the acceptance of newly generated solutions is based on its value. The α parameter plays a role comparable to the cooling factor in the simulated annealing algorithm. The initial value of A is set to 1 and the value of α is set to 0.9.

Pulse emission rate r: The value of the pulse emission rate parameter r is very important for applying the local search method in the algorithm. The experimental tests show that the best value of r is 0.9 and the pulse emission rate constant is γ = 0.9.

Pattern search parameters: HBDS uses PS as a local search method to enhance the best solution obtained from the BA at each iteration. In PS, the mesh size is initialized as ∆0; in our experiments ∆0 = (Ui - Li)/3, and when no enhancement is achieved in the diversification search process, the mesh size is reduced using the reduction factor σ. The experimental results show that the best value of σ is 0.01. The PS steps are repeated m times in order to enhance the intensification process of the algorithm; in our experiments the pattern search iteration number is m = 5 (Table 3).
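For clarity, the mesh-size rule described above can be written as a tiny helper; Ui and Li denote the upper and lower bounds of variable i, and the per-coordinate treatment is our reading of the setting.

```python
def initial_mesh_size(lower, upper):
    # Delta_0 = (U_i - L_i) / 3 for each variable, as used in the experiments.
    return [(u - l) / 3.0 for l, u in zip(lower, upper)]

def reduce_mesh_size(delta, sigma=0.01):
    # When no improvement is achieved, the mesh is shrunk: Delta_k = sigma * Delta_{k-1}.
    return [sigma * d for d in delta]
```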

Stopping condition parameters: HBDS stops the search when the number of iterations reaches the maximum number of iterations, or when any other termination condition used for comparison with other algorithms is met. In our experiments, the maximum number of iterations is Maxitr = 2d, where d is the dimension of the problem.

Final intensification: The best solutions obtained from the BA and the pattern search method are recorded in a list in order to apply the Nelder-Mead method to them; the number of solutions in this list is called $N_{elite}$. To avoid increasing the number of function evaluations, $N_{elite}$ = 1.

The efficiency of the HBDS algorithm has been examined on 7 benchmark integer programming problems (FI1 - FI7). The properties of the benchmark functions (global optimum, function number, problem bounds, and dimension) are outlined in Table 3, and the functions with their definitions are presented in Table 4.

The classic BA is compared with the proposed HBDS algorithm without the final intensification process (Nelder-Mead method) to verify the efficiency of HBDS. The parameter values are the same for both algorithms in order to have a fair comparison. The functions FI1, FI2, and FI3 have been chosen to show the efficiency of the proposed algorithm by plotting the function values against the number of iterations, as shown in Figure 1. In Figure 1, the solid line refers to the HBDS results, while the dotted line refers to the classic BA results after 50 iterations. Figure 1 shows that the function values decrease more rapidly with the number of iterations for HBDS than for the classic BA. From Figure 1, it can be deduced that combining the classic BA with the pattern search method enhances the performance of the classic BA and speeds up the convergence of the proposed algorithm (Table 4).

The general performance of the proposed algorithm on the integer programming problems has been investigated by plotting the function values against the number of iterations, as shown in Figure 2, for the four test functions FI4, FI5, FI6, and FI7. The results in Figure 2 are those of the proposed algorithm without the Nelder-Mead method in the final stage after 50 iterations. From Figure 2, it can be deduced that the function values of the proposed HBDS decrease rapidly as the number of iterations increases, and that the hybridization of the BA with the pattern search method accelerates the search and helps the algorithm reach the optimal or a near-optimal solution in reasonable time.

The Nelder-Mead (NM) method is used in the final stage of the proposed HBDS algorithm in order to speed up convergence and avoid running the algorithm for more iterations without any improvement in the obtained results. Table 5 reports the mean function evaluation values of the proposed HBDS without and with the Nelder-Mead method, respectively. The stopping criterion, reaching the global minimum within an error of 10^{-4} before 20,000 function evaluations, is the same for both variants. The best results are shown in boldface. The results in Table 5 show that invoking the Nelder-Mead method in the final stage accelerates the search and helps the algorithm reach the optimal or a near-optimal solution faster than the variant without the Nelder-Mead method (Table 5).

The HBDS algorithm is compared with four benchmark algorithms (namely, particle swarm optimization and its variants) in order to verify the efficiency of the proposed algorithm. Before presenting the comparison results, a brief description of the four comparative algorithms [24] is given.

RWMPSOg: Random Walk Memetic Particle Swarm Optimization (global variant), which combines particle swarm optimization with a random walk as direction exploitation.

RWMPSOl: Random Walk Memetic Particle Swarm Optimization (local variant), which combines particle swarm optimization with a random walk as direction exploitation.

PSOg: Standard particle swarm optimization (global variant) without a local search method.

PSOl: Standard particle swarm optimization (local variant) without a local search method.

In this subsection, the comparison results between our HBDS algorithm and the other algorithms are given in order to validate the efficiency of our HBDS algorithm. The four comparative algorithms are examined on 7 benchmark functions. The results of the comparative algorithms are taken from their original paper [24]. The minimum (min), maximum (max), average (Mean), standard deviation (St. D), and success rate (% Suc) of the function evaluation values over 50 runs are reported in Table 6. A run is regarded as successful if the algorithm reaches the global minimum within an error of 10^{-6} before 20,000 function evaluations. The best results among the comparative algorithms are shown in boldface. The results in Table 6 show that the proposed HBDS algorithm succeeds in all runs and reaches the objective value of each function faster than the other algorithms on 5 of the 7 functions.

The HBDS algorithm is also compared with various meta-heuristic and swarm intelligence (SI) algorithms, namely the genetic algorithm (GA) [44], standard particle swarm optimization (PSO) [8], firefly (FF) [17], cuckoo search (CS) [16], and grey wolf optimization (GWO) [45]. For a fair comparison, the population size is set to 20 for all algorithms, and the stopping criterion for all algorithms is the same: reaching the global minimum within an error of 10^{-4} before 20,000 function evaluations. For the GA, the crossover probability is PC = 0.8, the mutation probability is PM = 0.01, and roulette wheel selection is used. For the other SI algorithms, the standard parameter settings are used. The average (Avg) and standard deviation (SD) for all algorithms over 50 runs are reported in Table 7.

It can be concluded from Table 7 that the proposed algorithm can reach the desired optimal values faster than the other SI algorithms.

In order to further investigate the power of the HBDS algorithm, it is compared with another well-known method, the branch and bound (BB) method [46-49]. Before discussing the comparative results between the proposed algorithm and the BB method, the BB method and the main steps of its algorithm are presented.

The branch and bound (BB) method is one of the most widely used methods for solving optimization problems. The main idea of the BB method is that the feasible region of the problem is successively partitioned into several subregions; this operation is called branching. Lower and upper bounds of the function can then be determined over these partitions; this operation is called bounding. The main steps of the BB method are reported in Algorithm 6 and can be summarized in the following steps.

Step 1: The algorithm starts with a relaxed feasible region M0 ⊃ S, where S is the feasible region of the problem. This feasible region M0 is partitioned into finitely many subsets Mi.

Step 2: For each subset Mi, the lower bound β and the upper bound α are determined, where β(Mi) ≤ inf f(Mi ∩ S) ≤ α(Mi) and f is the objective function.

Algorithm 6: The branch and bound algorithm

1. Set the feasible region M0, M0 ⊃ S.

2. Set i = 0

3. repeat

4. Set i = i + 1

5. Partition the feasible region M0 into many subsets Mi.

6. For each subset Mi, determine lower bound β, where β = min β(Mi).

7. For each subset Mi, determine upper bound α, where α = min α(Mi).

8. if (α = β) || (α - β ≤ ε) then

9. Stop

10. else

11. Select some of the subset Mi and partition them

12. end if

13. Determine new bound on the new partition elements.

14. until (i ≤ m)

Step 3: The algorithm is terminated if the bounds are equal or very close, i.e., α = β (or α - β ≤ ε, where ε is a predefined positive constant).

Step 4: Otherwise, if the bounds are not equal or very close, some of the subsets Mi are selected and partitioned in order to obtain a more refined partition of M0.

Step 5: The procedure is repeated until termination criteria are satisfied.
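A generic, hedged sketch of the branch and bound loop in Algorithm 6 is given below. The box representation, best-first node selection, midpoint evaluation for the upper bound, and the bisection branching rule are all our assumptions for illustration; the BB variant compared against in the experiments instead uses a sequential quadratic programming relaxation and depth-first traversal.

```python
import heapq
import numpy as np

def branch_and_bound(f, lower_bound, box, eps=1e-6, max_nodes=10000):
    """Generic best-first branch and bound sketch (Algorithm 6).

    f           : objective evaluated at integer points
    lower_bound : user-supplied lower bound of f over a box (relaxation)
    box         : list of (low, high) integer intervals, one per variable
    """
    mid = np.array([(l + h) // 2 for l, h in box])
    best_x, best_val = mid, f(mid)                   # incumbent gives the upper bound alpha
    heap = [(lower_bound(box), box)]                 # nodes ordered by their lower bound beta
    nodes = 0
    while heap and nodes < max_nodes:
        beta, cur = heapq.heappop(heap)
        nodes += 1
        if best_val - beta <= eps:                   # alpha - beta <= eps: bounds have met
            break
        widths = [h - l for l, h in cur]
        i = int(np.argmax(widths))
        if widths[i] == 0:
            continue                                 # box has shrunk to a single point
        l, h = cur[i]
        m = (l + h) // 2
        # Branching: split the widest interval of the current box in half.
        for child in ([*cur[:i], (l, m), *cur[i + 1:]],
                      [*cur[:i], (m + 1, h), *cur[i + 1:]]):
            child_mid = np.array([(a + b) // 2 for a, b in child])
            val = f(child_mid)                       # a feasible point updates the incumbent
            if val < best_val:
                best_x, best_val = child_mid, val
            b = lower_bound(child)
            if b < best_val - eps:                   # bounding: keep only promising boxes
                heapq.heappush(heap, (b, child))
    return best_x, best_val
```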

In this subsection, the HBDS algorithm is compared with the BB method. The results of the BB method are taken from its original paper [?]. In [?], the BB algorithm transforms the initial integer programming problem into a continuous problem. For bounding, BB uses the sequential quadratic programming method to solve the generated subproblems, while for branching, BB uses depth-first traversal with backtracking. Table 8 reports the comparative results between the BB method and the proposed algorithm: the average (Mean), standard deviation (St. D), and success rate (Suc) over 30 runs. The best mean evaluation values between the two algorithms are marked in boldface. The results in Table 8 show that the proposed algorithm performs better than the BB method, except on function FI2, where the BB method is better. The overall results in Table 8 and Figure 3 show that the proposed algorithm is faster and more efficient than the BB method.

It can be concluded from the comparison tests between the proposed HBDS algorithm and the benchmark algorithms that the proposed HBDS algorithm is a promising algorithm and can obtain the optimal or near-optimal function values for most of the test functions (Table 8).

In this paper, a new hybrid algorithm is proposed by combining the bat algorithm with direct search methods to solve integer programming problems. The proposed algorithm is named the hybrid bat direct search algorithm (HBDS). In the proposed algorithm, the performance of the classic BA is accelerated by invoking the pattern search method as a local search method and by applying the Nelder-Mead method in the final stage of the algorithm. The HBDS algorithm is extensively tested on 7 integer programming problems and compared with 10 other algorithms to assess its performance. The numerical results illustrate that the proposed HBDS algorithm is a promising algorithm able to find the global optimal or a near-optimal solution in reasonable time.

Our work in this paper motivates us to extend the proposed algorithm to other optimization problems, such as large-scale problems and the molecular energy function [7], other combinatorial problems, MIPLIB instances, large-scale integer programming and minimax problems, and constrained optimization and engineering problems [50].

We are grateful to the referees for the detailed review of our paper, insightful comments and constructive suggestions which improve the structure of the paper. The research of the 2nd author is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). The postdoctoral fellowship of the 1st author is supported by NSERC.
