What is mathematical optimization?
Mathematical optimization is the selection of a best element, with regard to some criterion, from some set of available alternatives. Optimization problems arise in all quantitative disciplines, from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. More generally, optimization includes finding "best available" values of some objective function given a defined domain (or input), including a variety of different types of objective functions and different types of domains.
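As a minimal sketch of this idea (assuming Python with SciPy is available; the function and the allowed interval are invented for illustration), the following code systematically searches an allowed set for the input that minimizes a real function:

```python
# A minimal sketch: minimize f(x) = (x - 2)**2 + 1 over the allowed set [0, 5].
# The function and interval are invented for illustration.
from scipy.optimize import minimize_scalar

def f(x):
    return (x - 2) ** 2 + 1  # a simple real-valued objective function

result = minimize_scalar(f, bounds=(0, 5), method="bounded")
print(result.x)    # input value that minimizes f, approximately 2.0
print(result.fun)  # best (minimum) objective value, approximately 1.0
```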
What is mathematical optimization used for?
Mathematical optimization is an extremely powerful prescriptive analytics technology that enables companies to solve complex business problems and make better use of available resources and data. Mathematical programming allows you to capture the key features of a complex real-world problem as an optimization model. An optimization model comprises relevant objectives (business goals), variables (decisions in your control), and constraints (business rules), and it recommends a solution that generates the best possible result. A math programming solver is the computational engine that reads the optimization model and then delivers an optimal feasible solution.
Mathematical programming technologies are being used by leading companies, often resulting in tens or even hundreds of millions of dollars in cost savings and revenue. The following are examples of industries and companies using mathematical optimization:
- Electrical power distribution: New York ISO uses optimization to choose the most cost-effective way to deliver electricity to customers.
- Finance: Betterment uses optimization to choose an optimal mix of assets that maximizes after-tax returns while minimizing risk.
- Government: The FCC used optimization to run the first two-sided spectrum auction, which generated $20 billion in revenue.
- Logistics: FedEx streamlines costs by optimizing the routing of packages through their shipping network.
- Manufacturing: SAP uses optimization to efficiently schedule the production of goods in factories in order to meet customer orders.
- Sports scheduling: The NFL uses optimization to compute the best possible league schedules.
What are the 3 main components of mathematical optimization?
Optimization models have three major components: decision variables, objective function, and constraints.
1. Decision variables
Decision variables are physical quantities controlled by the decision maker and represented by mathematical symbols. Each decision variable can take on any value from a set of possible values.
2. Objective function
The objective function defines the criterion for evaluating the solution. It is a mathematical function of the decision variables that converts a solution into a numerical evaluation of that solution. For example, the objective function may measure the profit or cost that occurs as a function of the amounts of various products produced. The objective function also specifies a direction of optimization, either to maximize or to minimize. An optimal solution for the model is the best solution as measured by that criterion.
3. Constraints
Constraints are a set of functional equalities or inequalities that represent physical, economic, technological, legal, ethical, or other restrictions on what numerical values can be assigned to the decision variables. For example, constraints might ensure that no more input is used than is available.
In an optimization model, we find values for the decision variables that maximize or minimize the objective function while satisfying all constraints.
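To make the three components concrete, here is a small sketch of a production-mix model solved as a linear program with SciPy's linprog; the products, coefficients, and resource limits are all invented for illustration:

```python
# Hypothetical production-mix model, for illustration only.
# Decision variables: x1, x2 = units of products A and B to produce.
# Objective: maximize profit 40*x1 + 30*x2
#            (linprog minimizes, so we negate the coefficients).
# Constraints: 2*x1 + 1*x2 <= 100   (machine hours available)
#              1*x1 + 2*x2 <= 80    (labor hours available)
#              x1, x2 >= 0
from scipy.optimize import linprog

c = [-40, -30]                 # negated profits (minimize -profit)
A_ub = [[2, 1], [1, 2]]        # resource usage per unit of each product
b_ub = [100, 80]               # resource availability
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)       # optimal production quantities, [40.0, 20.0]
print(-res.fun)    # maximum profit, 2200.0
```

Negating the profit coefficients is needed only because linprog minimizes by convention; the solver returns both the optimal decisions and the optimal objective value.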
What are the optimization techniques?
The field of data mining increasingly adapts methods and algorithms from advanced matrix computations, graph theory, and optimization. In these methods, the data are described using matrix representations (graphs, for example, are represented by their adjacency matrices), and the data mining problem is formulated as an optimization problem with matrix variables. With these representations, the data mining task becomes a process of minimizing or maximizing a desired objective function of matrix variables.
Prominent examples include spectral clustering, matrix factorization, tensor analysis, and regularization. These matrix-formulated, optimization-centric methodologies are rapidly evolving into a popular research area for solving challenging data mining problems. They are amenable to rigorous analysis and benefit from the well-established knowledge of linear algebra, graph theory, and optimization accumulated over centuries. They are also simple to implement and easy to understand in comparison with probabilistic, information-theoretic, and other methods, and they are well suited to parallel and distributed processing for solving large-scale problems. Last but not least, these methodologies are quite flexible and can be used to formulate a large number of data mining tasks.
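As one concrete instance of this matrix-centric view, the sketch below (assuming NumPy and scikit-learn are available; the example graph is invented) performs basic unnormalized spectral clustering: build a graph's adjacency matrix, form its Laplacian, and cluster the nodes using the eigenvectors for the smallest eigenvalues.

```python
# A basic spectral-clustering sketch on a tiny invented graph.
# Real applications would use sparse matrices and normalized Laplacians.
import numpy as np
from sklearn.cluster import KMeans

# Adjacency matrix of a 6-node graph with two obvious communities:
# nodes {0,1,2} and nodes {3,4,5}, joined by a single bridging edge (2,3).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # unnormalized graph Laplacian

# Eigenvectors for the k smallest eigenvalues embed the nodes in R^k;
# the combinatorial cut objective relaxes to this eigenproblem.
eigvals, eigvecs = np.linalg.eigh(L)
k = 2
embedding = eigvecs[:, :k]

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
print(labels)  # nodes 0-2 land in one cluster, nodes 3-5 in the other
```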
What is optimization and what are its types?
1. Continuous Optimization versus Discrete Optimization
Some models only make sense if the variables take on values from a discrete set, often a subset of integers, whereas other models contain variables that can take on any real value. Models with discrete variables are discrete optimization problems; models with continuous variables are continuous optimization problems. Continuous optimization problems tend to be easier to solve than discrete optimization problems; the smoothness of the functions means that the objective function and constraint function values at a point x can be used to deduce information about points in a neighborhood of x. However, improvements in algorithms coupled with advancements in computing technology have dramatically increased the size and complexity of discrete optimization problems that can be solved efficiently. Continuous optimization algorithms are important in discrete optimization because many discrete optimization algorithms generate a sequence of continuous subproblems.
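The distinction is easy to see in code. The sketch below (assuming SciPy 1.9 or later, which provides scipy.optimize.milp; the model is invented) solves the same two-variable problem once with continuous variables and once with integrality required. Note that the integer optimum is not simply the rounded continuous optimum.

```python
# The same small model solved as a continuous problem and as a discrete one.
# Maximize 5*x1 + 4*x2 subject to 6*x1 + 4*x2 <= 24 and x1 + 2*x2 <= 6.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, linprog, milp

c = np.array([-5.0, -4.0])              # negated: both solvers minimize
A = np.array([[6.0, 4.0], [1.0, 2.0]])
b = np.array([24.0, 6.0])

# Continuous version: variables may take any nonnegative real value.
lp = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(lp.x, -lp.fun)                    # [3.0, 1.5], objective 21.0

# Discrete version: the identical model with both variables integer.
ip = milp(c, constraints=LinearConstraint(A, ub=b),
          integrality=np.ones(2), bounds=Bounds(0, np.inf))
print(ip.x, -ip.fun)                    # [4.0, 0.0], objective 20.0
```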
2. Unconstrained Optimization versus Constrained Optimization
Another important distinction is between problems in which there are no constraints on the variables and problems in which there are constraints on the variables. Unconstrained optimization problems arise directly in many practical applications; they also arise in the reformulation of constrained optimization problems in which the constraints are replaced by a penalty term in the objective function. Constrained optimization problems arise from applications in which there are explicit constraints on the variables. The constraints on the variables can vary widely from simple bounds to systems of equalities and inequalities that model complex relationships among the variables. Constrained optimization problems can be further classified according to the nature of the constraints (e.g., linear, nonlinear, convex) and the smoothness of the functions (e.g., differentiable or nondifferentiable).
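As a sketch of the penalty reformulation mentioned above (a quadratic-penalty approach, one of several options, applied to an invented example problem), the constrained problem below is replaced by a sequence of unconstrained problems whose penalty term punishes constraint violation:

```python
# Quadratic-penalty sketch: minimize (x-3)^2 + (y-2)^2 subject to x + y = 4
# by solving a sequence of unconstrained problems with a growing penalty.
import numpy as np
from scipy.optimize import minimize

def penalized(v, mu):
    x, y = v
    objective = (x - 3) ** 2 + (y - 2) ** 2
    violation = (x + y - 4) ** 2      # squared constraint violation
    return objective + mu * violation

v = np.array([0.0, 0.0])
for mu in [1.0, 10.0, 100.0, 1000.0]:  # increase the penalty weight,
    v = minimize(penalized, v, args=(mu,)).x  # warm-starting each solve

print(v)  # approaches the constrained optimum (2.5, 1.5) as mu grows
```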
3. None, One or Many Objectives
Most mathematical optimization problems have a single objective function; however, there are interesting cases in which optimization problems have no objective function or multiple objective functions. Feasibility problems are problems in which the goal is to find values for the variables that satisfy the constraints of a model, with no particular objective to optimize. Complementarity problems, which are pervasive in engineering and economics, are similar: the goal is to find a solution that satisfies the complementarity conditions rather than to optimize an objective. Multi-objective optimization problems arise in many fields, such as engineering, economics, and logistics, when optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. For example, developing a new component might involve minimizing weight while maximizing strength, and choosing a portfolio might involve maximizing the expected return while minimizing the risk. In practice, problems with multiple objectives are often reformulated as single-objective problems, either by forming a weighted combination of the different objectives or by replacing some of the objectives with constraints, as sketched below.
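Here is a minimal sketch of that weighted-combination approach on an invented two-asset portfolio problem: the two conflicting objectives, expected return and risk, are blended into a single objective, and each trade-off weight yields one candidate solution.

```python
# Weighted-sum scalarization of a two-objective toy portfolio problem.
# All numbers are hypothetical. Objectives: maximize expected return,
# minimize variance (risk). We minimize  risk - lam * return  for a few
# trade-off weights lam; each lam gives one point on the trade-off curve.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.10, 0.04])                   # expected asset returns
cov = np.array([[0.04, 0.01], [0.01, 0.01]])  # return covariance matrix

def scalarized(w, lam):
    ret = mu @ w          # expected portfolio return
    risk = w @ cov @ w    # portfolio variance
    return risk - lam * ret

cons = {"type": "eq", "fun": lambda w: w.sum() - 1}  # fully invested
bnds = [(0, 1), (0, 1)]                              # no short selling
for lam in [0.0, 0.5, 2.0]:
    res = minimize(scalarized, x0=[0.5, 0.5], args=(lam,),
                   bounds=bnds, constraints=cons)
    print(lam, res.x, mu @ res.x, res.x @ cov @ res.x)
```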
4. Deterministic Optimization versus Stochastic Optimization
In deterministic optimization, it is assumed that the data for the given problem are known accurately. However, for many actual problems, the data cannot be known accurately for a variety of reasons. The first is simple measurement error. The second, more fundamental, reason is that some data represent information about the future (e.g., product demand or price for a future time period) and simply cannot be known with certainty. In optimization under uncertainty, or stochastic optimization, the uncertainty is incorporated into the model. Robust optimization techniques can be used when the parameters are known only within certain bounds; the goal is to find a solution that is feasible for all data and optimal in some sense. Stochastic optimization models take advantage of the fact that the probability distributions governing the data are known or can be estimated; the goal is to find some policy that is feasible for all (or almost all) possible data instances and optimizes the expected performance of the model.
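A minimal sketch of the stochastic approach, using an invented newsvendor-style example with two demand scenarios: instead of optimizing against a single forecast, the order quantity is chosen to maximize expected profit over the scenario distribution.

```python
# Newsvendor sketch with a two-point demand distribution (invented numbers):
# choose an order quantity q to maximize expected profit over the scenarios.
import numpy as np
from scipy.optimize import minimize_scalar

price, cost = 10.0, 4.0
demand = np.array([80.0, 120.0])   # possible future demand levels
prob = np.array([0.5, 0.5])        # estimated scenario probabilities

def neg_expected_profit(q):
    sales = np.minimum(q, demand)       # cannot sell more than demand
    profit = price * sales - cost * q   # profit in each scenario
    return -(prob @ profit)             # negate: we minimize

res = minimize_scalar(neg_expected_profit, bounds=(0, 200), method="bounded")
print(res.x, -res.fun)  # q near 120: ordering above the low-demand scenario
                        # hedges against the high-demand outcome
```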