Linear Programming Algorithms

Learn about the different optimization algorithms that solve linear programming problems.

Previously, we studied the three steps of solving a problem using linear programming. The third step, finding the optimal solution, is at the heart of the process and is the most rigorous one. We can employ different algorithms for this step; the choice largely depends on factors such as available time, computational resources, and the required precision.

There are numerous optimization algorithms, but we will discuss the most commonly used ones in more detail.

The simplex algorithm

The simplex algorithm is the oldest and most widely used optimization algorithm. It starts by selecting an initial corner point of the feasible region, the polygonal region defined by the graphically specified constraints. Each corner of this region represents a candidate solution, and an optimal solution, if one exists, lies at one of them. Depending on the type of optimization (maximization or minimization), the algorithm examines the neighboring corner points to check whether any of them provides a better objective value. If so, it moves to that point and repeats the process until the current point has no neighbor that offers a better solution.
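To make this concrete, here is a minimal sketch of solving a small linear program with SciPy's linprog solver, which uses simplex-style methods internally. The problem data below (the objective 3x + 2y and the two constraints) is a made-up example for illustration, not part of the lesson.

```python
# A minimal sketch, assuming SciPy is installed.
# Made-up problem: maximize 3x + 2y subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0
# linprog minimizes, so we negate the objective coefficients.
from scipy.optimize import linprog

c = [-3, -2]                     # negated objective: maximize 3x + 2y
A_ub = [[1, 1], [1, 3]]          # left-hand sides of the <= constraints
b_ub = [4, 6]                    # right-hand sides
bounds = [(0, None), (0, None)]  # x >= 0, y >= 0

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x)      # optimal corner point, here approximately (4, 0)
print(-result.fun)   # optimal objective value, here 12
```

The optimum lands exactly on a corner of the feasible region, which is what the simplex algorithm exploits.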

A major reason for the algorithm's efficiency is that it does not evaluate the objective function at every point of the feasible region. Instead, it evaluates it only at corner points, improving the value at each step until an optimal solution is found. A drawback, however, is that although the simplex algorithm performs efficiently on most practical problems, in the worst case it can take an exponential amount of time.
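The following brute-force sketch illustrates the corner-point idea on the same made-up example. It is not the simplex algorithm: it enumerates every intersection of constraint boundaries, keeps the feasible ones, and evaluates the objective only at those corners, whereas simplex reaches the best corner without visiting all of them.

```python
# Brute-force corner enumeration for illustration (not simplex).
# Constraints are written as A @ [x, y] <= b, including x >= 0 and y >= 0.
import itertools
import numpy as np

A = np.array([[1, 1], [1, 3], [-1, 0], [0, -1]], dtype=float)
b = np.array([4, 6, 0, 0], dtype=float)
objective = np.array([3, 2], dtype=float)  # maximize 3x + 2y

corners = []
for i, j in itertools.combinations(range(len(A)), 2):
    try:
        point = np.linalg.solve(A[[i, j]], b[[i, j]])  # boundary intersection
    except np.linalg.LinAlgError:
        continue                                       # parallel boundaries
    if np.all(A @ point <= b + 1e-9):                  # keep feasible corners
        corners.append(point)

best = max(corners, key=lambda p: objective @ p)
print(best, objective @ best)  # (4, 0) with objective value 12
```

Even on this tiny problem, only four of the six boundary intersections are feasible corners; simplex would typically inspect fewer still, which is why it scales well in practice.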
