Algorithms that zero in on solutions to optimization problems are the beating heart of machine reasoning. New results reveal surprising limits.
Our lives are a succession of optimization problems. They occur when we search for the fastest route home from work or attempt to balance cost and quality on a trip to the store, or even when we decide how to spend limited free time before bed.
These scenarios and many others can be represented as a mathematical optimization problem. Making the best decisions is a matter of finding their optimal solutions. And for a world steeped in optimization, two recent results provide both good and bad news.
In a paper posted in August 2020, Amir Ali Ahmadi of Princeton University and his former student, Jeffrey Zhang, who is now at Carnegie Mellon University, established that for some quadratic optimization problems, in which pairs of variables can interact, finding even a locally optimal solution is computationally intractable.
But then, two days later, Zhang and Ahmadi released a second paper with a positive takeaway. They proved that it’s always possible to quickly identify whether a cubic polynomial — which can feature three-way interactions between variables — has a local minimum, and to find it if it does.
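To see what a local minimum of a cubic looks like, consider a toy one-variable example (purely illustrative, not the authors' algorithm): f(x) = x³ − 3x has a local minimum at x = 1 with value −2, even though the function has no global minimum, since it plunges toward negative infinity as x decreases. A simple gradient descent sketch settles into that local minimum:

```python
# Toy illustration of a local minimum of a cubic polynomial.
# f(x) = x^3 - 3x has a local minimum at x = 1 (f(1) = -2),
# but no global minimum: f(x) -> -infinity as x -> -infinity.

def f(x):
    return x**3 - 3*x

def df(x):
    # Derivative: f'(x) = 3x^2 - 3, zero at x = -1 (local max)
    # and x = 1 (local min).
    return 3*x**2 - 3

# Gradient descent from a point to the right of the local maximum;
# the iterates converge to the local minimum at x = 1.
x = 2.0
for _ in range(1000):
    x -= 0.01 * df(x)

print(round(x, 4), round(f(x), 4))
```

A descent method starting at x = 2 finds this local minimum, but starting far enough to the left it would slide off toward negative infinity instead, which is exactly why "find a local minimum" and "find the global minimum" are such different computational questions.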
The limits are not what their discoverers expected.