Continuous Global Optimization Software: A Brief Review

János D. Pintér

(Reproduced by permission from: Optima 52 (1996), 1-8.)

1. Global Optimization Models and Solution Approaches

A large variety of quantitative decision problems arising in the sciences, engineering and economics can be perceived and modelled as constrained optimization problems. According to this generic description, the best decision (often expressed by a real vector) is sought which satisfies all stated feasibility constraints and minimizes (or maximizes) the value of an objective function. Applying standard mathematical programming notation, we shall consider problems of the general form

(1)   min f(x)   s.t.   x ∈ D,

where D is a subset of the Euclidean n-space R^n.

The function f symbolizes the objective(s) in the decision problem, and D denotes the (non-empty) set of feasible decisions. D is usually defined by a finite number of functions; for the purposes of the present discussion, we shall assume that

(2)   D = {x ∈ R^n : l ≤ x ≤ u;  g_j(x) ≤ 0,  j = 1, ..., J}.
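To fix ideas, the model (1)-(2) can be encoded directly; the following minimal Python sketch (our own illustration, with all names and the sample instance invented for this purpose) tests membership in a feasible set of the form (2).

```python
# Direct encoding of model (1)-(2); all names and the sample instance below
# are our own illustration, not from the text.
def feasible(x, l, u, constraints):
    """Membership test for D = {x : l <= x <= u, g_j(x) <= 0} as in (2)."""
    return (all(li <= xi <= ui for xi, li, ui in zip(x, l, u))
            and all(g(x) <= 0.0 for g in constraints))

# A small instance: objective f over D with one constraint g_1(x) <= 0.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
l, u = [-3.0, -3.0], [3.0, 3.0]
g = [lambda x: x[0] + x[1] - 1.0]

print(f([1.0, -2.0]), feasible([1.0, -2.0], l, u, g))
```

In this instance the unconstrained minimizer of f happens to lie in D, which is exactly the uniextremal situation discussed next; the global optimization difficulties begin when it does not.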

In (2), l and u are explicit (finite) bounds, and g_j are given constraint functions. Postulating now that all functions defined above are continuous, the optimal solution set of problem (1)-(2) is non-empty. Most typically, it is assumed that the decision problem modelled by (1)-(2) has a unique locally and, at the same time, also globally optimal solution. Uniextremality is often implied by the mathematical model structure (for example, by the strict convexity of f and the convexity of D). This paradigm corresponds to the situation in which one, supposedly, has a sufficiently close initial 'guess' of the feasible region where the optimal solution x* is located. Hence, the global optimality of the solution follows directly, once the single local optimum of f on D has been found. For example, linear and convex nonlinear programming models (both, in essence, satisfying the mentioned uniextremality assumption in most practical cases) have been extensively applied in the past decades to formulate and solve an impressive range of decision problems.

Although very important classes of models naturally belong to the above category, there is also a broad variety of problems in which the property of uniextremality cannot be simply postulated or verified. Consider, for instance, the following general problem types:

- nonlinear approximation, including the solution of systems of nonlinear equations and inequalities

- model fitting to empirical data (calibration, parameterization)

- optimized design and operation of complex 'black box' ('oracle') systems, e.g., in diverse engineering contexts

- configuration/arrangement design (e.g., in various data classification, facility location, resource allocation, or scientific modelling contexts)

Such problems (together with numerous other prospective application areas) are discussed by Pintér (1996) and in the extensive list of related references therein. For further applications consult, e.g., Pardalos and Rosen (1987), Törn and Zilinskas (1989), Floudas and Pardalos (1990, 1992), Grossmann (1996), Bomze, Csendes, Horst and Pardalos (1996), or special application-related issues of the Journal of Global Optimization.

The emerging field of Global Optimization (GO) deals with mathematical programming problems in the (possible) presence of multiple local optima. Observe that, typically, the number of local (pseudo)solutions is unknown, and it can be quite large. Furthermore, the quality of the various local and global solutions may differ significantly. In the presence of such structure (often visualized by 'hilly landscapes' corresponding to projections of the objective function onto selected subspaces, given by coordinate-pairs of the decision variable x), GO problems can be extremely difficult. Hence, most classical numerical approaches are, generally speaking, not directly applicable to solve them. For illustration, see Figure 1, which displays a relatively simple composition of trigonometric functions with embedded polynomial arguments, in just two variables (denoted by x and y).
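The 'hilly landscape' phenomenon is easy to reproduce. The function below is our own illustrative analogue of such a composition of trigonometric terms with embedded polynomial arguments (the actual function of Figure 1 is not reproduced here); a crude grid census already reveals many distinct local minima.

```python
import math

# Our own illustrative analogue of the functions described: trigonometric
# terms with embedded polynomial arguments, plus a shallow quadratic bowl.
def hilly(x, y):
    return (x * x + y * y) / 20.0 + math.sin(x * x - y) * math.cos(2.0 * x + y)

# Crude evidence of multiextremality: count strict local minima on a grid.
n, lo, hi = 60, -3.0, 3.0
h = (hi - lo) / (n - 1)
grid = [[hilly(lo + i * h, lo + j * h) for j in range(n)] for i in range(n)]

minima = 0
for i in range(1, n - 1):
    for j in range(1, n - 1):
        c = grid[i][j]
        if all(c < grid[i + di][j + dj]
               for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)):
            minima += 1
print("grid-local minima found:", minima)
```

Any one of these minima can trap a purely local method, which is exactly why the global search strategies discussed below are needed.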

Naturally, under such circumstances, it is essential to use a proper global search strategy. Furthermore, instead of 'exact' solutions, most typically one has to accept diverse numerical approximations to the globally optimal solution (set) and optimum value.

Following early sporadic work related to GO (since the late fifties), the present state of the art is characterized by several dozen monographs, a professional journal, and at least a few thousand research articles devoted primarily to the subject. A few illustrative references are provided at the end of this brief review.

The most important GO model-classes which have been extensively studied include the following examples. (Please recall the general model form (1)-(2), and note that the problem-classes listed below are not necessarily distinct; in fact, several of them are hierarchically contained by more general problem-types listed.)

- Bilinear and biconvex programming (f is bilinear or biconvex, D is convex)

- Combinatorial optimization (problems which have discrete decision variables in f and/or in g_j can be equivalently reformulated as GO problems in continuous variables)

- Concave minimization (f is concave, D is convex)

- Continuous global optimization (f is continuous, D is compact)

- Differential convex (D.C.) optimization (f and g_j can all be represented as the difference of two corresponding convex functions)

- Fractional programming (f is the ratio of two real functions, and g_j are convex)

- Linear and nonlinear complementarity problems (f is the scalar product of two vector functions, D is typically convex)

- Lipschitz optimization (f and g_j are arbitrary Lipschitz-continuous functions)

- Minimax problems (f is some minimax objective, the maximum is considered over a discrete set or a convex set, D is convex)

- Multilevel optimization (models non-cooperative games involving hierarchies of decision-makers; their conflicting criteria are aggregated by f; D is typically assumed to be convex)

- Multiobjective programming (e.g., when several conflicting linear objectives are to be optimized over a polyhedron)

- Multiplicative programming (f is the product of several convex functions, and g_j are convex or, more generally, multiplicative functions)

- Network problems (f can be taken from several function-classes including nonconvex ones, and g_j are typically linear or convex)

- Parametric nonconvex programming (the feasible region D and/or the objective f may also depend on a parameter vector)

- Quadratic optimization (f is an arbitrary indefinite quadratic function; g_j are linear or, in the more general case, can be arbitrary quadratic functions)

- Reverse convex programming (at least one of the functions g_j expresses a reverse convex constraint)

- Separable global optimization (f is an arbitrary nonlinear (in general, nonconvex) separable function, D is typically convex)

- Various other nonlinear programming problems, including, e.g., nonconvex stochastic models (in which the defining functions f, g_j depend on random factors, possibly in an implicit, 'black box' manner)

For detailed descriptions of most of these model-types and their connections consult, e.g., Horst and Pardalos (1995), with numerous further references.

There are several main classes of algorithmic GO approaches which possess strong theoretical convergence properties and, at least in principle, are straightforward to implement and apply. All such rigorous GO approaches have an inherent computational demand which increases non-polynomially as a function of problem-size, even in the simplest GO instances. It should be emphasized at this point that GO approaches are (or should be) typically complemented by a 'traditional' local optimization phase, at least when numerical efficiency issues are also considered. Global convergence, however, needs to be guaranteed by the global-scope algorithm component, which theoretically should be used in a complete, 'exhaustive' fashion. These remarks indicate the significant difficulty of developing robust and efficient GO software.
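The two-phase scheme just described (a global-scope sampling component complemented by a local refinement phase) can be sketched as follows; this is a deliberately naive Python illustration, with the coordinate-descent local solver and the standard Rastrigin test objective chosen by us, not taken from any package reviewed below.

```python
import math
import random

# Naive two-phase sketch (ours): global random sampling, each sample refined
# by a 'traditional' local phase.
def local_descent(f, x, lo, hi, step=0.1, iters=200):
    """Crude derivative-free coordinate descent standing in for a local solver."""
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (-step, step):
                y = list(x)
                y[i] = min(hi, max(lo, y[i] + d))
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5
            if step < 1e-6:
                break
    return x

def multistart(f, n, lo, hi, samples=50, seed=1):
    rng = random.Random(seed)
    best = None
    for _ in range(samples):
        x = local_descent(f, [rng.uniform(lo, hi) for _ in range(n)], lo, hi)
        if best is None or f(x) < f(best):
            best = x
    return best

def rastrigin(x):
    # Standard multiextremal test function (global minimum 0 at the origin).
    return 10.0 * len(x) + sum(t * t - 10.0 * math.cos(2.0 * math.pi * t) for t in x)

x_best = multistart(rastrigin, 2, -5.12, 5.12)
print(x_best, rastrigin(x_best))
```

The local phase alone would stall in whichever of the many basins it starts from; only the global sampling phase gives (probabilistic) coverage of all basins.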

Without aiming at completeness, several of the most important GO strategies are listed below; for details, consult, for instance, the corresponding works from the list of references. (Note that the items listed are not necessarily mutually exclusive; some software implementations combine ideas from several approaches.)

- Adaptive partition and search strategies (including, e.g., branch-and-bound algorithms, Bayesian approaches and interval arithmetic based methods) (Forgó, 1988; Ratschek and Rokne, 1988; Mockus, 1989; Neumaier, 1990; Zhigljavsky, 1991; Hansen, 1992; Horst and Pardalos, 1995; Horst and Tuy, 1996; Pintér, 1996; Kearfott, 1996)

- Adaptive stochastic search algorithms (including random search, simulated annealing, evolution and genetic algorithms) (van Laarhoven and Aarts, 1987; Zhigljavsky, 1991; Horst and Pardalos, 1995; Michalewicz, 1996; Pintér, 1996)

- Enumerative strategies (for solving combinatorial problems, or certain 'structured', e.g., concave optimization problems) (Forgó, 1988; Horst and Pardalos, 1995; Horst and Tuy, 1996)

- 'Globalized' local search methods (applying a grid search or random search type global phase, and a local search algorithm) (Horst and Pardalos, 1995; Pintér, 1996)

- Heuristic strategies (deflation, tunneling, filled function methods, approximate convex global underestimation, tabu search, etc.) (Horst and Pardalos, 1995; Pintér, 1996)

- Homotopy (parameter continuation) methods and related approaches (including fixed point methods, pivoting algorithms, etc.) (Horst and Pardalos, 1995)

- Passive (simultaneous) strategies (uniform grid search, pure random search) (Zhigljavsky, 1991; Horst and Pardalos, 1995; Pintér, 1996)

- Successive approximation (relaxation) methods (cutting plane, more general cuts, minorant construction approaches, certain nested optimization and decomposition strategies) (Forgó, 1988; Horst and Pardalos, 1995; Pintér, 1996)

- Trajectory methods (differential equation model based, path-following search strategies) (Horst and Pardalos, 1995)

In spite of considerable progress related to the rigorous theoretical foundations of GO, software development and 'standardized' use lag behind. The main reason for this is, of course, the inherent numerical difficulty of GO, even in the case of 'simpler' specific instances (such as the indefinite quadratic programming problem). In general, the difficulty of a global optimization problem (GOP) can be expected to increase as some exponential function of the problem dimension n. Consequently, dimensions 100, 50 or even 10 can be considered 'large', depending on the GOP type investigated. In the remainder of this paper, an illustrative list of software products to solve GOPs is reviewed.
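The claimed exponential growth is easy to quantify for the simplest rigorous scheme, uniform grid search: resolving each of the n coordinates with k sample points costs k^n objective evaluations, a number that quickly becomes astronomical.

```python
# Uniform grid search cost: k sample points per coordinate in n dimensions
# means k**n objective evaluations, illustrating the exponential growth
# in problem dimension discussed above.
def grid_cost(k, n):
    return k ** n

for n in (2, 10, 50):
    print(n, grid_cost(10, n))  # 10 points per axis
```

Already at n = 10 a 10-point-per-axis grid needs 10^10 evaluations; at n = 50 the count exceeds the number of atoms one could ever inspect, which is why dimension 50 can legitimately be called 'large' here.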

2. GO Software: Information Sources and Some General Remarks

For the purposes of collecting information for this survey, GO software authors were asked (mainly by e-mail messages, and by placing 'electronic ads' at several prominent mathematical programming sites on the WWW) to submit documentation related to their work. The information (or lack thereof) summarized below is largely based on the responses received. Additional information has been collected from the Internet, from several GO books, and from the Journal of Global Optimization. Note that, though many research publications refer to numerical examples, or even to sophisticated specific applications, only work understood to be a general-purpose and legally distributable program system is reported below.

For obvious reasons, the present survey is far from being 'complete' in any possible sense; rather, it is an attempt to provide a realistic picture of the state of the art, supported by instances of existing software. This short review is not intended to be either comparative or 'judgemental': one simple reason being that the information received from GO software developers is used 'as is', mostly without the possibility of actual software testing. By the same token, the accuracy of all information cannot be guaranteed either. Further research in this direction, including the preparation of a more comprehensive and detailed survey, is currently in progress.

The software list provided in the next section is simply alphabetical, without categorization. For a more uniform presentation style, abbreviations are associated with all software products listed, even when such names were not given in the documentation available for this survey (existing names were not changed, of course). The descriptions are almost formula-free and extremely concise due to space restrictions. For the latter reason, we decided not to include important classes of more specific GO approaches and related methodology. In particular (as reflected by the title), pure or mixed integer programming and more general combinatorial optimization algorithms are not discussed here. Furthermore, although most of the available top-of-the-line continuous nonlinear (convex) optimization software can be applied (with good taste and some luck) to analyze GOPs, even the most prominent such systems are excluded from this review. Again, a more detailed survey is planned, appropriately discussing also the program system types mentioned.

The hardware and software platform of the systems reviewed is also shown when such information is available. In order to assist in obtaining additional information, contact person(s), their e-mail addresses, ftp and/or WWW sites are listed, whenever known to me. (For brevity, only a few such pointers are provided in each case.)

The reader is assumed to have at least some basic familiarity with the GO approaches mentioned; for related discussions, please consult the references.

3. Short Software Descriptions

αBB A GO Algorithm for General Nonconvex Problems

An implementation of a Branch-and-Bound (B&B) algorithm which is based on the difference of convex functions (D.C.) transformation. Nonconvexities are identified and categorized as of either special or generic structure. Special nonconvex (such as bilinear or univariate concave) terms are convex lower bounded using customized bounding functions. For generic nonconvex terms, convex lower bounding functions are derived by utilizing the parameter α (specified by the user or derived based on theory). αBB solves general unconstrained and constrained problems; it requires MINOS and/or NPSOL for the solution of linear or convex optimization subproblems. (Languages: C and Fortran.) Contact: I.P. Androulakis, C.A. Floudas, C.D. Maranas.
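For readers unfamiliar with α-based convex lower bounding, the standard construction used by αBB-type methods on a box [l, u] can be sketched as follows (our own illustration, not code from the package): L(x) = f(x) + α Σ_i (x_i − l_i)(x_i − u_i). On the box the added term is nonpositive (hence L ≤ f), and it adds 2α to every Hessian diagonal entry, so L becomes convex once α is large enough.

```python
import math

# Sketch of alphaBB-type convex lower bounding on a box [l, u] (ours):
#   L(x) = f(x) + alpha * sum_i (x_i - l_i) * (x_i - u_i).
# On the box each factor pair has opposite signs, so the added term is <= 0
# and L <= f; the term also adds 2*alpha to each Hessian diagonal entry.
def underestimator(f, l, u, alpha):
    def L(x):
        return f(x) + alpha * sum((xi - li) * (xi - ui)
                                  for xi, li, ui in zip(x, l, u))
    return L

# One-dimensional example: f(x) = cos(x) on [0, 2*pi]; since cos'' >= -1,
# alpha = 0.5 already convexifies the underestimator.
f = lambda x: math.cos(x[0])
L = underestimator(f, [0.0], [2.0 * math.pi], 0.5)
print(L([math.pi]), f([math.pi]))
```

Minimizing the convex L over the box yields a valid lower bound for the B&B tree; the bound tightens as the boxes shrink.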

ANNEAL Simulated Annealing

ANNEAL is based on the core SA approach, including several possibilities for parameter adjustment and a deterministic solution refinement phase. It has been applied to predict complex crystal structures. Workstation implementation. Contact: W. Bollweg, H. Maurer, H. Kroll.

ASA CalTech Adaptive Simulated Annealing

ASA was developed to find the global optimum of a continuous non-convex function over a multidimensional interval (box). The algorithm permits an annealing schedule in which the 'temperature' decreases exponentially in annealing time. The introduction of re-annealing also permits adaptation to changing sensitivities in the parameter-space. Some other adaptive options in ASA include self-optimize (to find optimal starting conditions) and quenching (to methodically find faster performance that might be useful for large parameter-spaces). (Language: C.) Contact: L. Ingber, #ASA-CODE.
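A generic simulated-annealing loop with an exponentially decreasing temperature schedule, in the spirit of the above description, can be sketched as follows. This is not Ingber's ASA code; all parameter values and the test objective are our own assumptions.

```python
import math
import random

# Generic simulated annealing with exponential cooling (ours, not ASA;
# all parameters and the test objective are assumptions).
def anneal(f, lo, hi, T0=1.0, decay=0.995, steps=2000, seed=7):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best, T = x, T0
    for _ in range(steps):
        # Propose a neighbour; the proposal width shrinks with the temperature.
        y = min(hi, max(lo, x + rng.gauss(0.0, max(T, 1e-3))))
        d = f(y) - f(x)
        # Metropolis acceptance rule: always accept downhill, sometimes uphill.
        if d < 0 or rng.random() < math.exp(-d / max(T, 1e-12)):
            x = y
        if f(x) < f(best):
            best = x
        T *= decay  # exponentially decreasing annealing schedule
    return best

# Multiextremal test objective (ours): global minimum value 0 at x = 0.
f = lambda x: x * x / 10.0 + math.sin(3.0 * x) ** 2
x_star = anneal(f, -10.0, 10.0)
print(x_star, f(x_star))
```

The uphill acceptances at high temperature are what let the walk escape local basins; as T decays the method degenerates into a local descent, which is why cooling too fast forfeits the global search.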

B&B A Family of B&B Algorithms

This obvious acronym (coined by the present author) attempts to summarize several B&B type algorithms developed to solve certain structured GOP classes. These include (among others) indefinite quadratic, quasiconvex-concave, and general Lipschitz problems. Workstation implementations. (Language: C.) Contact: R. Horst, M. Nast, N. Thoai.

BARON Branch-And-Reduce Optimization Navigator

Combines interval analysis and duality with enhanced B&B concepts. The BARON modules can handle structured nonconvex problems with up to thousands of constraints and variables. The library of specialized modules includes solvers for numerous specific GOP-classes. (For other, more general problems, underestimation routines need to be provided by the user.) All modules can also solve problems in which some or all of the variables are restricted to integer values. The specialized modules use OSL or MINOS to solve interim subproblems. Workstations, UNIX type operating systems. (Languages: Fortran and GAMS.) Contact: N.V. Sahinidis, gma/ baron.html.

BGO Bayesian Global Optimization

This program system includes four versions of Bayesian search, clustering, uniform deterministic grid, and pure Monte Carlo search. Bound constraints and more general constraints can be handled. Interactive DOS and UNIX versions are available. (Languages: Fortran and C.) Contact: J. Mockus, L. Mockus.

cGOP Global Optimization Program

Solves structured GOPs which have an objective function of the form a^T x + b^T y + x^T A y + f_1(x) + f_2(y), with convex f_1, f_2, and linear constraints. Requires the presence of the commercial codes MINOS and/or CPLEX to solve linear, mixed-integer linear and convex subproblems. cGOP has been used to solve problems involving several hundred variables and constraints. Versions are available for workstations. (Language: C.) Contact: V. Visweswaran, C.A. Floudas.

CGU Convex Global Underestimator

This approach is designed to generate efficient approximations to the global minimum of a multiextremal function, by fitting a convex function to the set of all known (calculated) local minima. This heuristically attractive strategy requires only the sequential solution of auxiliary LPs and some rather elementary calculations. CGU has been applied to calculate molecular structure predictions, up to several dozen variables. Implemented on parallel workstations and supercomputers. Contact: K.A. Dill, A.T. Phillips, J.B. Rosen.
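The CGU idea can be illustrated in one dimension (our own minimal sketch, not the authors' code): fit a convex quadratic to the known local minima by least squares, then take the minimizer of the fitted quadratic as a prediction of where the global minimum lies.

```python
# One-dimensional sketch of the convex-underestimator idea (ours): fit a
# quadratic q(x) = a*x^2 + b*x + c to known local minima by least squares,
# then predict the global minimizer as the minimizer of q.
def fit_quadratic(pts):
    # Normal equations for the least-squares fit of a*x^2 + b*x + c.
    n = len(pts)
    sx = sum(x for x, _ in pts); sx2 = sum(x * x for x, _ in pts)
    sx3 = sum(x ** 3 for x, _ in pts); sx4 = sum(x ** 4 for x, _ in pts)
    sy = sum(y for _, y in pts); sxy = sum(x * y for x, y in pts)
    sx2y = sum(x * x * y for x, y in pts)
    A = [[sx4, sx3, sx2], [sx3, sx2, sx], [sx2, sx, n]]
    rhs = [sx2y, sxy, sy]
    # Tiny Gaussian elimination with back substitution.
    for i in range(3):
        for j in range(i + 1, 3):
            m = A[j][i] / A[i][i]
            A[j] = [aj - m * ai for aj, ai in zip(A[j], A[i])]
            rhs[j] -= m * rhs[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (rhs[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # a, b, c

# Local minima of an (assumed) multiextremal f, as (location, value) pairs.
minima = [(-2.0, 4.1), (-1.0, 1.2), (0.0, 0.1), (1.0, 1.3), (2.0, 3.9)]
a, b, c = fit_quadratic(minima)
x_pred = -b / (2.0 * a)  # minimizer of the fitted convex quadratic
print(round(x_pred, 3))
```

In the actual CGU method the fit is computed via auxiliary LPs with convexity enforced, but the prediction step (minimize the fitted convex surrogate, then search near its minimizer) is the same in spirit.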

CRS Controlled Random Search

This is a recently developed variant of a popular class of random search based methods which can be applied under very mild analytical conditions imposed on the GOP. Several other related stochastic search methods have also been developed by this group. Workstation implementations. Contact: M.M. Ali, A. Törn, S. Viitanen.

CURVI Bound-Constrained Global Optimization

Windward Technologies (WTI) develops advanced numerical and visualization software for solving constrained and unconstrained nonlinear optimization problems. One of their solvers, CURVI, is aimed at solving bound-constrained nonlinear programs which have a complicated (possibly multiextremal) objective function. (Language: Fortran.) Contact: T. Aird.

DE Differential Evolution Genetic Algorithm for Bound Constrained GO

DE won third place at the 1st International Contest on Evolutionary Computation on a real-valued function test set. It was the best genetic algorithm approach (the first two places of the contest were won by non-GA algorithms). (Languages: Matlab and C.) Contact: R. Storn, storn/ code.html.
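For orientation, a minimal differential evolution loop in the DE/rand/1/bin style popularized by Storn and Price can be sketched as follows; the control parameters F and CR and the sanity-check objective are our choices, not taken from the distributed code.

```python
import random

# Minimal DE/rand/1/bin sketch (ours); F, CR and the test objective are
# illustrative assumptions, not the distributed code's settings.
def de(f, n, lo, hi, pop_size=20, F=0.8, CR=0.9, gens=100, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(n)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: difference of two random members added to a third.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(n)  # guarantees at least one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(n)]
            trial = [min(hi, max(lo, t)) for t in trial]
            # Greedy selection against the current member.
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    i_best = min(range(pop_size), key=lambda i: fit[i])
    return pop[i_best], fit[i_best]

# Sphere function sanity check (global minimum 0 at the origin).
x, fx = de(lambda x: sum(t * t for t in x), 3, -5.0, 5.0)
print(fx)
```

The population-scaled difference vectors make the step size self-adapting: steps are large while the population is spread out and shrink automatically as it contracts around a basin.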

ESA Edge Searching Algorithm

An implementation of an edge search algorithm for finding the global solution of linear reverse convex programs. ESA is based on an efficient search technique and the use of fathoming criteria on the edges of the polytope representing the linear constraints. In addition, the method incorporates several heuristics, including a cutting plane technique which improves the overall performance. Implemented for several UNIX platforms; the TPG Test Problem Generator is also available. (Language: Fortran.) Contact: K. Moshirvaziri.

GA Genetic Algorithms

Genetic algorithms, as a rule, can be applied to GOPs under mild structural requirements. Both general and specific information related to this popular solver class is available from the following sources: A Commented List of Genetic Algorithm Codes, ftp://; GA Archive, alist/src/. Only a few illustrative examples are listed in the present review.

GAS Genetic Algorithm

Unconstrained and bound-constrained versions are available. For DOS and UNIX operating systems. (Language: C++.) Contact: M. Jelasity, J. Dombi, ftp://

GAucsd Genetic Algorithm

Developed and maintained at the University of California, San Diego. GAucsd was written in C under Unix but should be easy to port to other platforms. The package is accompanied by brief information and a User's Guide. (Language: C.) Contact: GAucsd-requ.

GENERATOR Genetic Algorithm Solver

This method is aimed at solving a variety of (combinatorial and continuous multiextremal) scientific and engineering optimization problems. It is designed to interact with Excel, which serves as a user interface. (Platform: Excel.) Contact: New Light Industries.

GC Global Continuation

GC is a continuation approach to GO applying global smoothing in order to derive a simpler approximation to the original objective function. GC is applied by the authors to distance geometry problems, in the context of molecular chemistry modelling. IBM SP parallel system implementation. Contact: J.J. Moré, Z. Wu.

GENOCOP III Genetic Algorithm for Constrained Problems

Solves general GOPs, in the presence of additional constraints and bounds (using quadratic penalty terms). System parameters, domains, and linear inequalities are input via a data file. The objective function and any nonlinear constraints are to be given in appropriate C files. (Language: C.) Contact: Z. Michalewicz, http://www.coe.uncc.edu/~zbyszek/gcreadme.html, vol/genocopIII.tar.Z.

GEODES Minimum-Length Geodesic Computing

GEODES is differential geometry software for approximating a minimum-length geodesic on a multidimensional manifold; it also has potential in the GO context, however. GEODES includes example manifolds and metrics; it is implemented in Elements (a matrix and function oriented scientific modelling environment) to compute and visualize geodesics on 2D surfaces plotted in 3-space. Portable to various hardware platforms. (Languages: C, C++.) Contact: W.L. Anderson, c/.

GLO Global and Local Optimizer

GLO is a modular optimization system developed for 'black box' problems in which objective function calculations may take a long time. Its methodology is based on the coupling of global (genetic) and local (variable metric) nonlinear optimization software with scientific applications software. It has been applied to automated engineering design. Besides the modular optimization control system, GLO also has a graphical user interface and includes a preprocessor. Contact: M.J. Murphy, M. Brosius.

GLOBAL Multistart with Stochastic Clustering

GLOBAL can be used for the solution of the general bound-constrained GOP which has a (measurable) real objective function. The algorithm is a derivative-free implementation of the clustering stochastic multistart method of Boender et al., supplemented with a quasi-Newton local search routine and with a robust random local search method. Available for UNIX machines, IBM-compatible mainframes and PCs. (Languages: Fortran and C.) Contact: T. Csendes, des/, ftp://ftp

GLOBALIZER An Educational Program System for Global Optimization

Serves for solving univariate GOPs. After stating the problem, the user can choose among various (random search, B&B based, or Bayesian partition based) solver techniques. The software has interactive tutoring capabilities, and provides textual and graphical information. Works on PCs, under MS-DOS. Contact: R.G. Strongin, V.P. Gergel, A.V. Tropichev.

GLOPT Constrained Global Optimization

Solves GOPs with a block-separable objective function subject to bound constraints and block-separable constraints; it finds a nearly globally optimal point that is near to a true local minimizer. GLOPT uses a B&B technique to split the problem recursively into subproblems that are either eliminated or reduced in their size. It includes a new reduction technique for boxes and new ways for generating feasible points of constrained nonlinear programs. The current implementation of GLOPT uses neither derivatives nor simultaneous information about several constraints. (Language: Fortran.) Contact: A. Neumaier, S. Dallwig and H. Schichl.

GOPP Global Optimization of Polynomial Problems using Gröbner Bases

The (local) optimality conditions of polynomial optimization problems lead to polynomial equations, under inequality constraints. Applying recent Gröbner basis techniques, this approach is aimed at finding all solutions to such systems, hence also finding global optima. (Language: Maple.) Contact: K. Hagglof, P.O. Lindberg, L. Svensson, http://

GOT Global Optimization Toolbox

GOT combines random search and local (convex) optimization. DOS and HP UX versions are available. (Language: Fortran.) Contact: A.V. Kuntsevich.

GSA Generalized Simulated Annealing

GSA is based on the generalized entropy by Tsallis. The algorithm obeys detailed balance conditions and, at low 'temperatures', it reduces to steepest descent. (Note that members of the same research group have been involved in the development of several SA type algorithms.) Contact: J.E. Straub, P. Amara, J. Ma.

IHR Improving Hit-and-Run

IHR is a random search based GO algorithm that can be used to solve both continuous and discrete optimization problems. IHR generates random points in the search domain by choosing a random direction and selecting a point in that direction. Versions have been implemented using different distributions for the random direction, as well as several ways to randomly select points along the search line. The algorithm can also handle inequality constraints and a hierarchy of objective functions. IHR has been used to solve GOPs in various disciplines, such as engineering design. Contact: Z. Zabinsky.
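The hit-and-run mechanics described above can be sketched as follows (our own toy version, not Zabinsky's implementation): draw a uniformly random direction, compute the chord of the box through the current point along that direction, sample a point on the chord, and accept it only if it improves the objective.

```python
import math
import random

# Toy improving hit-and-run sketch (ours) on a box [lo, hi]^n.
def ihr(f, n, lo, hi, iters=3000, seed=11):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        # Uniformly random direction via a normalized Gaussian vector.
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(t * t for t in d))
        d = [t / norm for t in d]
        # Chord of the box through x in direction d: feasible t-range.
        tmin, tmax = -float("inf"), float("inf")
        for xi, di in zip(x, d):
            if abs(di) > 1e-12:
                t1, t2 = (lo - xi) / di, (hi - xi) / di
                tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
        # Uniform point on the chord; keep it only if it improves f.
        t = rng.uniform(tmin, tmax)
        y = [xi + t * di for xi, di in zip(x, d)]
        if f(y) < f(x):  # the 'improving' acceptance rule
            x = y
    return x

x = ihr(lambda x: sum(t * t for t in x), 2, -5.0, 5.0)
print(x)
```

Because candidates are drawn along full chords of the feasible box rather than from a small neighbourhood, the sampler mixes through the domain quickly, which is the appeal of hit-and-run over plain local random search.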

IMINBIS Interval Arithmetic Based GO

This method applies interval arithmetic techniques to isolate the stationary points of the objective function. Next, a topological characterization is used to separate minima from maxima and saddle points, followed by local minimization (sub)searches to select the global solution. The method has been applied also to 'noisy' problems. Workstation and PC implementations, extensive related research. (Language: Fortran.) Contact: M.N. Vrahatis, D.G. Sotiropoulos, E.C. Triantaphyllou.

INTBIS Global Solver for Polynomial Systems of Equations

Finds all solutions of polynomial systems of equations, with rigorously guaranteed results. The software package INTBIS is ACM-TOMS Algorithm 681; it is available through NETLIB. Distributed with the package are four source code files, sample input and output files, and a brief documentation file. The source files consist of the following: interval arithmetic, stack management, core INTBIS routines, and machine constants. (Language: Fortran.) Contact: R.B. Kearfott, t.html, ftp://interval.usl.edu/pub/interval_math/intbis/.
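The principle behind interval solvers of this kind can be conveyed by a toy bisection scheme: evaluate f over a whole interval using interval arithmetic, discard intervals whose image excludes zero, and bisect the rest. The sketch below is entirely ours and uses naive, non-outward-rounded arithmetic (so, unlike a verified solver, it offers no rigorous guarantee); it isolates the two real roots of x^2 - 2.

```python
# Toy interval-bisection root isolation (ours; naive interval arithmetic
# without outward rounding, hence illustrative rather than verified).
def ipow2(lo, hi):
    """Interval extension of x**2 on [lo, hi]."""
    if lo >= 0.0:
        return (lo * lo, hi * hi)
    if hi <= 0.0:
        return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

def f_interval(lo, hi):
    # f(x) = x^2 - 2, whose real roots are +/- sqrt(2).
    a, b = ipow2(lo, hi)
    return (a - 2.0, b - 2.0)

def isolate_roots(lo, hi, tol=1e-6):
    work, out = [(lo, hi)], []
    while work:
        a, b = work.pop()
        fa, fb = f_interval(a, b)
        if fa > 0.0 or fb < 0.0:  # image excludes zero: no root in [a, b]
            continue
        if b - a < tol:
            out.append((a, b))
        else:
            m = 0.5 * (a + b)
            work += [(a, m), (m, b)]
    return sorted(out)

boxes = isolate_roots(-3.0, 3.0)
print(boxes)
```

A production code such as INTBIS adds outward rounding (so the enclosures remain valid in floating point) and interval Newton contractions to accelerate the pure bisection shown here.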

INTOPT_90 Verified (Interval) Global Optimization

Serves for the verified solution of nonlinear systems of equations, and for unconstrained and bound-and-equality-constrained global optimization. Based on exhaustive search, driven by a local optimizer, epsilon-inflation, interval Newton methods, and interval exclusion principles; uses automatic differentiation. Test results are available for hundreds of test examples. The underlying interval arithmetic package (ACM TOMS Algorithm 737) is also distributed. Workstation and PC implementations. (Language: Fortran.) Contact: R.B. Kearfott, http://, ftp://interval.usl.edu/pub/interval_math/intbis/.

INTGLO, INTGLOB Integral Global Optimization

These methods solve unconstrained and constrained, as well as discrete GOPs by the integral method. They also include a discontinuous penalty function approach for constrained problems. Problems up to one hundred variables have been solved. A set of test problems is also available, including box-constrained or unconstrained, constrained, concave minimization, discrete variable programs and multicriteria programs. For IBM PCs. (Language: Fortran.) Contact: Q. Zheng, D. Zhuang.

ISA Inductive Search Algorithm

ISA won first place at the 1st International Contest on Evolutionary Computation on a real-valued function test-suite. (Language: C++.) Contact: G. Bilchev; information available at http://

LGO Continuous and Lipschitz Optimization

Solves bound-constrained and more general GOPs under mild structural requirements; it can be applied also to 'black box' problems. LGO integrates several global (adaptive partition and random search based) and local (conjugate directions type) strategies: these can be activated in interactive or automatic execution modes. The PC version has a menu interface to assist the application development process, includes a concise information/tutoring session, and has visualization capabilities. Available also for workstations. LGO has been applied to problems with several hundred variables (and it can be configured to encompass even larger sizes). Accompanied by a User's Guide and sample problems. (Language: Fortran.) Contact: J.D. Pintér.

LOPS Lipschitz Optimization Program System

In all approaches listed below, the objective function is defined over n-intervals, and the Lipschitz-continuity of f or f' is assumed. Problem-classes and corresponding available versions include:

- one-dimensional GOPs: sequential methods with local tuning; PC version (Language: C++)

- one-dimensional GOPs: parallel solver implementations (Alliant FX/80, parallel Fortran)

- multi-dimensional GOPs: sequential and parallel algorithms using Peano curves (Alliant FX/80, parallel Fortran)

Contact: Y.D. Sergeyev.
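A generic one-dimensional Lipschitz ('sawtooth', Piyavskii-Shubert type) scheme can be sketched as follows; this is our own illustration, not Sergeyev's code. If L is a Lipschitz constant of f, then f(x) >= f(x_k) - L|x - x_k| for each sampled x_k, so neighbouring samples define a piecewise-linear lower envelope, and the next sample is placed where this envelope is lowest.

```python
import math

# Piyavskii-Shubert-type sawtooth scheme in one dimension (ours, for
# illustration). Each pair of neighbouring samples yields a lower-bound
# 'peak'; f is always evaluated next where the bound is lowest.
def piyavskii(f, a, b, L, iters=60):
    xs, fs = [a, b], [f(a), f(b)]
    for _ in range(iters):
        pts = sorted(zip(xs, fs))
        best = None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            xm = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)  # peak location
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)    # peak (bound) value
            if best is None or lb < best[0]:
                best = (lb, xm)
        xs.append(best[1])
        fs.append(f(best[1]))
    i = min(range(len(fs)), key=lambda k: fs[k])
    return xs[i], fs[i]

# A standard Lipschitz test problem: f(x) = sin(x) + sin(10x/3) on [2.7, 7.5],
# global minimum of about -1.90 near x = 5.15; L = 6 is a valid constant here.
x, fx = piyavskii(lambda t: math.sin(t) + math.sin(10.0 * t / 3.0), 2.7, 7.5, 6.0)
print(round(x, 3), round(fx, 3))
```

The gap between the best sampled value and the lowest envelope peak bounds the remaining error, which is what makes such schemes rigorous rather than merely heuristic.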

MAGESTIC Data Fitting by Global Optimization

Automatic global optimization based on a fast modified Gauss-Newton approach combined with Monte Carlo search. MAGESTIC handles calibration model variants (e.g., parameter and error masks for restricted sub-fitting, implicit equation fitting without solving, etc.). Suitable also for use with Lagrange multipliers for constrained optimization. Uses Excel as an interface (under Windows) and for generating graphics. (Platform: Excel.) Contact: Logix Consulting.

MULTISTART Clustering Algorithm

This widely used approach is based on random search or some other initial sampling in the feasible set, combined with clustering and local optimization launched from the most 'promising' point(s). Implemented on SUN workstations. Several interesting applications in combination with simulation models are related to the analysis of oil resources. (Language: Fortran.) Contact: S. Buitrago.

UNICALC Interval Branch and Bound Algorithm

UNICALC serves for bound-constrained GO; it also accepts inequality and/or equality constraints on the decision variables. Contact: A. Semenov; information available at lc.

VerGO Verified Global Optimization

VerGO is designed for rigorous bound-constrained (and approximate general constrained) GO of a twice continuously differentiable objective function. VerGO features include interval arithmetic, automatic differentiation, a non-convexity test, a monotonicity test, and local optimization. Tested on problems with up to over 30 variables. DOS, OS/2, Linux and workstation versions. (Language: C++.) Contact: R. van Iwaarden, http://www.cs.hope.edu/~rvaniwaa/VerGO/VerGO.html.

VTT Interval Arithmetic Research

The goals of the Interval Arithmetic, Constraint Satisfaction and Probability Project are summarized as follows: development of portable C++ libraries for interval programming tasks; integration of the libraries into Microsoft Excel; application in financial planning software products. (Platforms: C++, Excel.) Contact: S. De Pascale.

4. Acknowledgements

The software review presented here is based to a significant extent on information kindly provided by colleagues working on GO and/or closely related areas. I would especially like to thank Arnold Neumaier and Simon Streltsov for the information collected on their WWW Global Optimization Pages. I also wish to thank Faiz Al-Khayyal for his valuable comments on the manuscript.

The space (and time) limitations of this review certainly have made it illusory to include 'all' existing software in this rapidly changing area; omissions are entirely possible but absolutely unintentional. It is planned to continue this work and to provide a more comprehensive and informative picture of the state of the art for the mathematical programming community. Comments and suggestions are most welcome; they will contribute to an 'unabridged' GO software review in the near future.


To avoid a superfluously long listing, the reference list is reduced to the most topical journal, and to several GO monographs and handbooks published in the past ten years.

Bomze, I.M., Csendes, T., Horst, R., and Pardalos, P.M., eds. (1996) Developments in Global Optimization. Kluwer Academic Publishers, Dordrecht / Boston / London.

Floudas, C.A. and Pardalos, P.M. (1990) A Collection of Test Problems for Constrained Global Optimization Algorithms. Lecture Notes in Computer Science 455, Springer, Berlin / Heidelberg / New York.

Floudas, C.A. and Pardalos, P.M., eds. (1992) Recent Advances in Global Optimization. Princeton University Press, Princeton.

Forgó, F. (1988) Nonconvex Programming. Akadémiai Kiadó, Budapest.

Grossmann, I.E., ed. (1996) Global Optimization in Engineering Design. Kluwer Academic Publishers, Dordrecht / Boston / London.

Hansen, E.R. (1992) Global Optimization Using Interval Analysis. Marcel Dekker, New York.

Horst, R. and Tuy, H. (1996) Global Optimization - Deterministic Approaches. Springer, Berlin / Heidelberg / New York. (3rd Edn.)

Horst, R. and Pardalos, P.M., eds. (1995) Handbook of Global Optimization. Kluwer Academic Publishers, Dordrecht / Boston / London.

Journal of Global Optimization (published since 1991, by Kluwer Academic Publishers).

Kearfott, R.B. (1996) Rigorous Global Search: Continuous Problems. Kluwer Academic Publishers, Dordrecht / Boston / London.

Michalewicz, Z. (1996) Genetic Algorithms + Data Structures = Evolution Programs. Springer, Berlin / Heidelberg / New York. (3rd Edn.)

Mockus, J. (1989) Bayesian Approach to Global Optimization. Kluwer Academic Publishers, Dordrecht / Boston / London.

Neumaier, A. (1990) Interval Methods for Systems of Equations. Cambridge University Press, Cambridge.

Pardalos, P.M. and Rosen, J.B. (1987) Constrained Global Optimization: Algorithms and Applications. Lecture Notes in Computer Science 268, Springer, Berlin / Heidelberg / New York.

Pintér, J.D. (1996) Global Optimization in Action. Kluwer Academic Publishers, Dordrecht / Boston / London.

Ratschek, H. and Rokne, J.G. (1988) New Computer Methods for Global Optimization. Ellis Horwood, Chichester.

Törn, A.A. and Zilinskas, A. (1989) Global Optimization. Lecture Notes in Computer Science 350, Springer, Berlin / Heidelberg / New York.

Van Laarhoven, P.J.M. and Aarts, E.H.L. (1987) Simulated Annealing: Theory and Applications. Kluwer Academic Publishers, Dordrecht / Boston / London.

Zhigljavsky, A.A. (1991) Theory of Global Random Search. Kluwer Academic Publishers, Dordrecht / Boston / London.