What is an objective function in linear programming?
In a linear programming problem, the objective function is the real-valued function whose value is to be maximized or minimized, subject to the constraints defined on the problem, over the set of possible solutions. It is essentially a mathematical expression of the problem's goal. In two variables, it is a linear function of the form z = ax + by. Whether you maximize or minimize depends on the goal of the problem; the objective function typically represents a quantity such as cost or profit.
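For instance, a hypothetical factory earning a profit of $4 per unit of product x and $3 per unit of product y would have the objective function z = 4x + 3y, and the goal would be to maximize z subject to constraints such as limits on labour and materials (the numbers here are illustrative only).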
In general, the objective function defines the relationship between the inputs and the output of the system that the function represents.
In the context of optimization, the objective function is the system's objective expressed as a function of the decision variables.
In multi-objective optimization, several objective functions have to be optimized simultaneously.
The objective function of a linear program and the constraints placed upon the problem have to be deterministic and expressible in linear form. These restrictions limit the range of problems that can be handled directly. However, there has been a lot of progress since the introduction of linear programming in the late 1940s, which has made it possible to adapt the method to problems of greater complexity.
Linear programming may well be the most extensively used mathematical optimization technique, and several computer programs are available for solving linear programming problems.
Linear programming techniques are used quite frequently in problems like oil and chemical refinery blending, selecting vendors or suppliers for large, multiplant manufacturing corporations, figuring out shipping routes and schedules, and managing and maintaining truck fleets.
How do you solve an objective function?
Here are the steps involved in solving an objective function:
- First, graph the region that corresponds to the solution set of the system of constraints.
- Next, identify the coordinates of the vertices of the region that is formed.
- Then, evaluate the objective function at every vertex of the region to figure out which x- and y-values, if any, maximize or minimize the function (see the sketch after this list).
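As a concrete illustration, here is a minimal Python sketch of the corner-point method. The problem data (maximize z = 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x ≥ 0, y ≥ 0) and the vertex list are hypothetical values chosen for the example, with the vertices assumed to have already been read off a graph.

```python
# Corner-point method for a hypothetical problem:
#   maximize z = 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.

def z(x, y):
    return 3 * x + 2 * y  # objective function of the form z = ax + by

# Vertices (corner points) of the feasible region, read off the graph
vertices = [(0, 0), (4, 0), (3, 1), (0, 2)]

# Evaluate the objective at every vertex; an optimum occurs at a vertex
values = {v: z(*v) for v in vertices}
best = max(values, key=values.get)
print("maximum of z is", values[best], "at vertex", best)  # 12 at (4, 0)
```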
How do you maximize an objective function?
A problem in which you have to maximize the objective function is known as a maximization linear programming problem.
There are essentially two types of maximization problems:
In standard maximization problems, the objective function must be maximized, all the constraints are of the same form 𝑎𝑥 + 𝑏𝑦 ≤ 𝑐, and every variable is subject to the non-negativity constraints 𝑥 ≥ 0, 𝑦 ≥ 0.
The other type of maximization problem is the problem with mixed constraints. In problems like these, some of the inequality constraints are of the form 𝑎𝑥 + 𝑏𝑦 ≤ 𝑐 while the others are of the form 𝑎𝑥 + 𝑏𝑦 ≥ 𝑐. The non-negativity constraints remain a critical requirement, as they are in all linear programming problems.
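For example, a hypothetical problem with the constraints 2𝑥 + 𝑦 ≤ 10 and 𝑥 + 3𝑦 ≤ 12 (together with 𝑥 ≥ 0, 𝑦 ≥ 0) is a standard maximization problem, since every inequality points the same way; replacing the second constraint with 𝑥 + 3𝑦 ≥ 12 would turn it into a mixed-constraints problem.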
Here are the basic steps involved in maximizing an objective function:
- Start by writing the objective function.
- Proceed to write the constraints.
In standard maximization linear programming problems, constraints are of the form: 𝑎𝑥 + 𝑏𝑦 ≤ 𝑐.
Because the variables are non-negative, you can include the constraints 𝑥 ≥ 0, 𝑦 ≥ 0.
- After that, plot the constraints on a graph.
- Now, you shade the feasibility region.
- Proceed to identify the corner points.
- Figure out which corner point gives the highest value. You can do this by evaluating the objective function at every corner point, or by moving the line associated with the objective function across the region (see the sketch after this list).
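Rather than checking every corner by hand, you can also hand the problem to an off-the-shelf solver. Below is a minimal sketch using scipy.optimize.linprog on the same hypothetical problem as before; because linprog minimizes by default, the objective coefficients are negated to perform a maximization.

```python
# Hypothetical standard maximization problem:
#   maximize z = 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so we negate the objective and negate the result back.
from scipy.optimize import linprog

c = [-3, -2]                 # negated coefficients of z = 3x + 2y
A_ub = [[1, 1], [1, 3]]      # left-hand sides of the ax + by <= c constraints
b_ub = [4, 6]                # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print("optimal (x, y):", res.x)  # expected corner point (4, 0)
print("maximum z:", -res.fun)    # expected 12
```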
How do you minimize an objective function?
Minimization linear programming problems are solved in much the same way as maximization problems.
In standard minimization problems, the constraints are of the form 𝑎𝑥 + 𝑏𝑦 ≥ 𝑐, instead of the form 𝑎𝑥 + 𝑏𝑦 ≤ 𝑐 used in standard maximization problems. Because of this, the feasible region extends indefinitely toward the upper right of the first quadrant and is unbounded. That isn't an issue here: to minimize the objective function, the line associated with it is moved toward the origin, and the corner point at which the line last touches the feasible region is the one that minimizes the function.
You should know, though, that when the feasibility region is unbounded, an optimal solution can fail to exist: for example, a function being maximized may grow without bound over the region, so no maximum is ever reached.
A linear program also has no optimal solution when there is no feasibility region at all. If the inequality constraints are incompatible, no region of the graph satisfies all of them, and a program with no feasible solution cannot have an optimal one.
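A solver makes this case easy to detect. The short sketch below feeds scipy.optimize.linprog a hypothetical pair of incompatible constraints and inspects the status it reports.

```python
# Hypothetical incompatible constraints: x + y <= 1 and x + y >= 3.
# The >= constraint is rewritten as -x - y <= -3 for linprog; no point
# satisfies both, so the solver reports infeasibility.
from scipy.optimize import linprog

res = linprog(c=[1, 1], A_ub=[[1, 1], [-1, -1]], b_ub=[1, -3])
print(res.success)  # False
print(res.status)   # 2, meaning the problem appears to be infeasible
print(res.message)
```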
Here are the basic steps involved in minimizing an objective function:
- First, write the objective function.
- Next, write the constraints.
For standard minimization linear programming problems, constraints are of the form: 𝑎𝑥 + 𝑏𝑦 ≥ 𝑐.
Since the variables are non-negative, include the constraints 𝑥 ≥ 0, 𝑦 ≥ 0.
- Now, graph the constraints.
- Next, you need to shade the feasibility region.
- Now, identify the corner points.
- Figure out which corner point gives the minimum value. You can do this by evaluating the objective function at each corner point, or by moving the line associated with the objective function (see the solver sketch below).
It is also possible that the problem will not have a solution.
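To tie the steps together, here is a minimal scipy.optimize.linprog sketch for a hypothetical standard minimization problem. Since linprog only accepts ≤ inequality constraints, each 𝑎𝑥 + 𝑏𝑦 ≥ 𝑐 constraint is negated into −𝑎𝑥 − 𝑏𝑦 ≤ −𝑐.

```python
# Hypothetical standard minimization problem:
#   minimize z = 2x + 3y  subject to  x + y >= 4,  x + 2y >= 6,  x, y >= 0.
# Each ax + by >= c constraint is rewritten as -ax - by <= -c for linprog.
from scipy.optimize import linprog

c = [2, 3]                    # coefficients of z = 2x + 3y
A_ub = [[-1, -1], [-1, -2]]   # negated left-hand sides of the >= constraints
b_ub = [-4, -6]               # negated right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print("optimal (x, y):", res.x)  # expected corner point (2, 2)
print("minimum z:", res.fun)     # expected 10
```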