Optimization: A Brief Introduction to the Method of Lagrange Multipliers

Lagrange multipliers are used to find the optimal value of a multivariate function under a set of constraints. With Lagrange multipliers, we can convert an optimization problem with $d$ variables and $k$ constraints into an unconstrained problem with $d+k$ variables.

  1. Equality Constraint
    Suppose $\mathbf{x} \in \mathbb{R}^d$. We would like to find an optimal point $\mathbf{x}^*$ that minimizes $f(\mathbf{x})$ subject to $g(\mathbf{x}) = 0$, i.e.
    $$\min_{\mathbf{x}} f(\mathbf{x}), \quad \text{s.t. } g(\mathbf{x}) = 0$$

    For simplicity, we omit the geometric interpretation of the optimization problem. Define the Lagrange multiplier $\lambda \neq 0$ such that
    $$\nabla f(\mathbf{x}) + \lambda \nabla g(\mathbf{x}) = 0$$

    Define the corresponding Lagrange function as
    $$L(\mathbf{x}, \lambda) = f(\mathbf{x}) + \lambda g(\mathbf{x})$$

    Setting its partial derivatives to zero recovers both conditions:
    $$\nabla_{\mathbf{x}} L(\mathbf{x}, \lambda) = \nabla f(\mathbf{x}) + \lambda \nabla g(\mathbf{x}) = 0$$

    $$\nabla_{\lambda} L(\mathbf{x}, \lambda) = g(\mathbf{x}) = 0$$

    Thus the original constrained problem is converted into an unconstrained one, namely finding the stationary points of $L(\mathbf{x}, \lambda)$; a small worked example follows.
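    To make this concrete, here is a minimal sketch in Python with sympy. The toy problem $\min_{x,y}\, x^2 + y^2$ s.t. $x + y = 1$ is an illustration of mine, not from the original text; the code simply solves the stationarity system $\nabla L = 0$ described above.

```python
import sympy as sp

# Toy problem (an illustration, not from the text):
#   minimize f(x, y) = x^2 + y^2   subject to   g(x, y) = x + y - 1 = 0
x, y, lam = sp.symbols('x y lambda', real=True)
f = x**2 + y**2
g = x + y - 1

# Lagrange function L(x, y, lambda) = f + lambda * g
L = f + lam * g

# Unconstrained stationarity: all partial derivatives of L vanish
eqs = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(eqs, (x, y, lam), dict=True))
# [{x: 1/2, y: 1/2, lambda: -1}] -> the constrained minimum is at (1/2, 1/2)
```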
  2. Inequality Constraint
    In the inequality-constrained case, the optimization problem can be written as
    $$\min_{\mathbf{x}} f(\mathbf{x}), \quad \text{s.t. } g(\mathbf{x}) \leq 0$$

    When the optimum lies in the interior $g(\mathbf{x}) < 0$, the constraint is inactive and we can simply solve $\nabla f(\mathbf{x}) = 0$, which corresponds to $\lambda = 0$. When the optimum lies on the boundary $g(\mathbf{x}) = 0$, the inequality constraint degenerates to the equality constraint above, now with $\lambda > 0$ since $\nabla f$ and $\nabla g$ must point in opposite directions there.
    In summary, at the optimum the KKT (Karush-Kuhn-Tucker) conditions must be satisfied:
    $$g(\mathbf{x}) \leq 0; \quad \lambda \geq 0; \quad \lambda g(\mathbf{x}) = 0$$

    For brevity, we present the solution procedure together with the more general problem in the next section.
  3. Multiple Constraints
    Consider an optimization problem with $m$ equality constraints and $n$ inequality constraints over a nonempty feasible region $\mathbb{D} \subseteq \mathbb{R}^d$:
    $$\begin{aligned} \min_{\mathbf{x}} \quad & f(\mathbf{x}) \\ \text{s.t.} \quad & h_i(\mathbf{x}) = 0, \quad i = 1, 2, \dots, m, \\ & g_j(\mathbf{x}) \leq 0, \quad j = 1, 2, \dots, n \end{aligned}$$

    Define the Lagrange function as
    $$L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = f(\mathbf{x}) + \sum_{i=1}^{m} \lambda_i h_i(\mathbf{x}) + \sum_{j=1}^{n} \mu_j g_j(\mathbf{x})$$

    Since there are inequality constraints, the KKT conditions must hold for each $j$:
    $$g_j(\mathbf{x}) \leq 0; \quad \mu_j \geq 0; \quad \mu_j g_j(\mathbf{x}) = 0.$$
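    To illustrate, the following sketch (again Python with sympy, on a toy problem of mine with one equality and one inequality constraint) solves the KKT system by the usual case analysis on whether the inequality constraint is active.

```python
import sympy as sp

# Toy problem: minimize x^2 + y^2 + z^2
#   s.t.  h(x, y, z) = x + y + z - 1 = 0   (equality)
#         g(x, y, z) = z <= 0              (inequality)
x, y, z, lam, mu = sp.symbols('x y z lambda mu', real=True)
f = x**2 + y**2 + z**2
h = x + y + z - 1
g = z

L = f + lam * h + mu * g
grad = [sp.diff(L, v) for v in (x, y, z)]

# Case 1: inequality inactive -> mu = 0; solve stationarity + h = 0
sol1 = sp.solve([e.subs(mu, 0) for e in grad] + [h], (x, y, z, lam), dict=True)
print(sol1)  # z = 1/3 > 0 violates g <= 0, so this case is discarded

# Case 2: inequality active -> g = 0; solve the full KKT system
sol2 = sp.solve(grad + [h, g], (x, y, z, lam, mu), dict=True)
print(sol2)  # [{x: 1/2, y: 1/2, z: 0, lambda: -1, mu: 1}]; mu >= 0, KKT holds
```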

    Solving the original optimization problem, also known as the primal problem, can be achieved by solving the corresponding dual problem. The Lagrange dual function $\Gamma: \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$ is defined as
    $$\Gamma(\boldsymbol{\lambda}, \boldsymbol{\mu}) = \inf_{\mathbf{x} \in \mathbb{D}} L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = \inf_{\mathbf{x} \in \mathbb{D}} \left( f(\mathbf{x}) + \sum_{i=1}^{m} \lambda_i h_i(\mathbf{x}) + \sum_{j=1}^{n} \mu_j g_j(\mathbf{x}) \right)$$

    Evidently, for any feasible $\tilde{\mathbf{x}} \in \mathbb{D}$ and any $\boldsymbol{\mu} \succeq \mathbf{0}$ we have $\sum_{i=1}^{m} \lambda_i h_i(\tilde{\mathbf{x}}) + \sum_{j=1}^{n} \mu_j g_j(\tilde{\mathbf{x}}) \leq 0$, since each $h_i(\tilde{\mathbf{x}}) = 0$ and each $\mu_j g_j(\tilde{\mathbf{x}}) \leq 0$. Hence
    $$\Gamma(\boldsymbol{\lambda}, \boldsymbol{\mu}) = \inf_{\mathbf{x} \in \mathbb{D}} L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) \leq L(\tilde{\mathbf{x}}, \boldsymbol{\lambda}, \boldsymbol{\mu}) \leq f(\tilde{\mathbf{x}})$$
    That is to say, the dual function gives a lower bound on the optimal value of the primal problem. The dual problem is then
    $$\max_{\boldsymbol{\lambda}, \boldsymbol{\mu}} \Gamma(\boldsymbol{\lambda}, \boldsymbol{\mu}) \quad \text{s.t. } \boldsymbol{\mu} \succeq \mathbf{0}$$

    The dual problem is always a convex optimization problem, regardless of the convexity of the primal problem, since $\Gamma$ is a pointwise infimum of affine functions of $(\boldsymbol{\lambda}, \boldsymbol{\mu})$ and is therefore concave.
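    As a numerical illustration of this lower bound, the sketch below (Python with numpy; the toy problem $\min x^2$ s.t. $1 - x \leq 0$ is my own example) approximates $\Gamma(\mu)$ by an inner minimization over a grid and checks that it never exceeds the primal optimum $p^* = 1$; this is the weak duality discussed next.

```python
import numpy as np

# Toy primal: min f(x) = x^2  s.t.  g(x) = 1 - x <= 0, so p* = 1 at x* = 1
# (the unconstrained minimum x = 0 is infeasible).
f = lambda x: x**2
g = lambda x: 1 - x

xs = np.linspace(-10.0, 10.0, 20001)  # grid standing in for the domain D

def dual(mu):
    """Gamma(mu) = inf_x [f(x) + mu * g(x)], approximated on the grid."""
    return np.min(f(xs) + mu * g(xs))

mus = np.linspace(0.0, 5.0, 501)
gammas = np.array([dual(m) for m in mus])
p_star = 1.0
print(bool(np.all(gammas <= p_star + 1e-9)))  # True: Gamma(mu) <= p* for all mu >= 0
print(mus[np.argmax(gammas)], gammas.max())   # ~2.0 ~1.0: the bound is tight here
```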

Let $p^*$ be the optimal value of the primal problem and $d^*$ the optimal value of the dual problem. It always holds that $d^* \leq p^*$, which is known as weak duality. If $d^* = p^*$, strong duality holds. In general, strong duality does not always hold. However, when the primal problem is convex, i.e. $f(\mathbf{x})$ and the $g_j(\mathbf{x})$ are convex functions, the $h_i(\mathbf{x})$ are affine functions, and there exists at least one feasible $\tilde{\mathbf{x}}$ at which all inequality constraints hold strictly (Slater's condition), strong duality holds. In this case, we can set the derivatives of the Lagrange function with respect to $\mathbf{x}$, $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ to zero, and thereby solve the dual problem as well as the primal problem.
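For the same toy problem as above, which is convex and satisfies Slater's condition (e.g. $\tilde{x} = 2$ gives $1 - \tilde{x} < 0$ strictly), the dual can be derived in closed form. The sketch below (sympy again, my own example) sets the derivatives to zero exactly as described and confirms $d^* = p^* = 1$.

```python
import sympy as sp

# Same toy problem: min x^2  s.t.  1 - x <= 0. It is convex (f convex, g affine)
# and x = 2 satisfies the constraint strictly, so Slater's condition holds.
x, mu = sp.symbols('x mu', real=True)
L = x**2 + mu * (1 - x)

# Inner minimization over x: dL/dx = 2x - mu = 0  ->  x = mu / 2
x_star = sp.solve(sp.diff(L, x), x)[0]
Gamma = sp.expand(L.subs(x, x_star))           # mu - mu**2/4, concave in mu

# Outer maximization: dGamma/dmu = 0  ->  mu = 2 (consistent with mu >= 0)
mu_star = sp.solve(sp.diff(Gamma, mu), mu)[0]
print(mu_star, Gamma.subs(mu, mu_star), x_star.subs(mu, mu_star))
# 2  1  1  ->  d* = 1 = p*, attained by the primal point x* = 1
```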
