**By Hans Mittelmann (mittelmann at asu.edu)**

For many years our benchmarking effort included the solvers CPLEX, Gurobi, and XPRESS. Following an action by Gurobi at the 2018 INFORMS Annual Meeting, this came to an end: IBM and FICO demanded that results for their solvers be removed, and we then decided to remove those of Gurobi temporarily as well. CPLEX had appeared in fifteen of the benchmarks, Gurobi and XPRESS in thirteen. See here for more details. In late November 2019, selected benchmarks for Gurobi were added again.

A partial record of previous benchmarks can be obtained from this webpage, along with some additional older benchmarks.

**See this graphical tool for visualization of the results, including a virtual best (ensemble)**

* Concorde-TSP with different LP solvers (5-2-2022)

* LPfeas Benchmark (find a PD feasible point) (12-7-2022)

* LPopt Benchmark (find optimal basic solution) (12-9-2022)

* Large Network-LP Benchmark (commercial vs free) (11-12-2022)

* MILP Benchmark - MIPLIB2017 (11-13-2022)

* MILP cases that are slightly pathological (11-26-2022)

* Infeasibility Detection for MILP Problems (11-22-2022)

* SQL (semidefinite-quadratic-linear) problems from the 7th DIMACS Challenge (8-8-2002)

* Several SDP codes on sparse and other SDP problems (11-4-2022)

* Infeasible SDP Benchmark (8-29-2022)

* Large SOCP Benchmark (11-12-2022)

* AMPL-NLP Benchmark (1-18-2022)

* Non-commercial convex QP Benchmark (9-16-2021)

* Binary Non-Convex QPLIB Benchmark (11-13-2022)

* Discrete Non-Convex QPLIB Benchmark (non-binary) (11-20-2022)

* Continuous Non-Convex QPLIB Benchmark (11-20-2022)

* Convex Continuous QPLIB Benchmark (11-26-2022)

* Convex Discrete QPLIB Benchmark (11-12-2022)