Primal Problem and Dual Problem

Recommended textbooks:
Convex Optimization, Stephen Boyd
Nonlinear Programming

Primal Problem

$$
\left\{\begin{array}{l}
\text{minimize } f(\boldsymbol\omega)\\
\text{s.t.}\left\{\begin{array}{l}
g_i(\boldsymbol\omega)\leq 0 \quad (i=1\sim K)\\
h_i(\boldsymbol\omega) = 0 \quad (i=1\sim M)
\end{array}\right.
\end{array}\right.
$$
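
As a quick sanity check (a minimal one-dimensional example of my own, not from the lecture: K = 1 inequality constraint, no equality constraints), take

$$
\left\{\begin{array}{l}
\text{minimize } f(\omega)=\omega^2\\
\text{s.t. } g_1(\omega)=1-\omega\leq 0
\end{array}\right.
$$

The feasible set is $\omega\geq 1$, so the solution is $\omega^*=1$ with $f(\omega^*)=1$. This example is reused below to illustrate the dual problem, the duality gap, and the KKT condition.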

Dual Problem

$$
\left\{\begin{array}{l}
\text{maximize } \Theta(\boldsymbol\alpha, \boldsymbol\beta) = \min\limits_{\text{all } \boldsymbol\omega} L(\boldsymbol\omega, \boldsymbol\alpha, \boldsymbol\beta)\\
\text{s.t. } \alpha_i \geq 0 \quad (i=1\sim K)
\end{array}\right.\\
L(\boldsymbol\omega,\boldsymbol\alpha, \boldsymbol\beta) = f(\boldsymbol\omega)+\sum\limits_{i=1}^{K}\alpha_i g_i(\boldsymbol\omega)+\sum\limits_{i=1}^{M}\beta_i h_i(\boldsymbol\omega)\\
L(\boldsymbol\omega,\boldsymbol\alpha, \boldsymbol\beta) = f(\boldsymbol\omega)+\boldsymbol\alpha^T g(\boldsymbol\omega)+\boldsymbol\beta^T h(\boldsymbol\omega) \quad (\text{vectorized})
$$
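For the running one-dimensional example (minimize $\omega^2$ s.t. $1-\omega\leq 0$), the Lagrangian is a quadratic in $\omega$, so the inner minimization can be done in closed form:

$$
L(\omega,\alpha)=\omega^2+\alpha(1-\omega),\qquad
\frac{\partial L}{\partial\omega}=2\omega-\alpha=0\ \Rightarrow\ \omega=\frac{\alpha}{2},\qquad
\Theta(\alpha)=L\!\left(\frac{\alpha}{2},\alpha\right)=\alpha-\frac{\alpha^2}{4}
$$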
Theorem (weak duality): If $\boldsymbol\omega^*$ is a solution of the primal problem and $\boldsymbol\alpha^*, \boldsymbol\beta^*$ is a solution of the dual problem, then $f(\boldsymbol\omega^*) \geq \Theta(\boldsymbol\alpha^*, \boldsymbol\beta^*)$.
Proof:
∵ $\boldsymbol\omega^*$ satisfies the constraints of the primal problem and $\boldsymbol\alpha^*,\boldsymbol\beta^*$ satisfies the constraints of the dual problem
$$
L(\boldsymbol\omega^*,\boldsymbol\alpha^*, \boldsymbol\beta^*) = f(\boldsymbol\omega^*)+\sum\limits_{i=1}^K\underbrace{\alpha_i^*}_{\geq0}\,\underbrace{g_i(\boldsymbol\omega^*)}_{\leq0} +\sum\limits_{i=1}^M\beta_i^*\underbrace{h_i(\boldsymbol\omega^*)}_{=0}
$$
∴ $L(\boldsymbol\omega^*,\boldsymbol\alpha^*, \boldsymbol\beta^*) \leq f(\boldsymbol\omega^*)$
Also, by the definition of $\Theta$:
$\Theta(\boldsymbol\alpha^*, \boldsymbol\beta^*)=\min\limits_{\text{all }\boldsymbol\omega}L(\boldsymbol\omega,\boldsymbol\alpha^*, \boldsymbol\beta^*)\leq L(\boldsymbol\omega^*,\boldsymbol\alpha^*, \boldsymbol\beta^*)$
∴ $f(\boldsymbol\omega^*) \geq \Theta(\boldsymbol\alpha^*, \boldsymbol\beta^*)$
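
For the running example this can be checked directly: $f(\omega^*)=1$, and for every $\alpha\geq0$

$$
f(\omega^*)-\Theta(\alpha)=1-\alpha+\frac{\alpha^2}{4}=\left(1-\frac{\alpha}{2}\right)^2\geq0
$$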

Define the duality gap as the distance between the primal problem and the dual problem: $G = f(\boldsymbol\omega^*) - \Theta(\boldsymbol\alpha^*, \boldsymbol\beta^*) \geq0$. Under certain conditions, G = 0.

Strong Duality Theorem (I won't prove it here)
If $f(\boldsymbol\omega)$ is a convex function and the constraints are affine, i.e. $g(\boldsymbol\omega)=A\boldsymbol\omega+b,\ h(\boldsymbol\omega) = C\boldsymbol\omega+d$, then G = 0.
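
The running example meets both conditions ($\omega^2$ is convex, $1-\omega$ is affine), so its gap should close. Indeed, maximizing $\Theta(\alpha)=\alpha-\frac{\alpha^2}{4}$ over $\alpha\geq0$ gives $\alpha^*=2$, and

$$
\Theta(\alpha^*)=2-\frac{4}{4}=1=f(\omega^*)\ \Rightarrow\ G=0
$$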

Suppose the strong duality theorem applies, so that $f(\boldsymbol\omega^*)=\Theta(\boldsymbol\alpha^*, \boldsymbol\beta^*)$. Then:
① The minimum of $L(\boldsymbol\omega,\boldsymbol\alpha^*, \boldsymbol\beta^*)$ over $\boldsymbol\omega$ is attained at $\boldsymbol\omega=\boldsymbol\omega^*$, i.e. $\Theta(\boldsymbol\alpha^*, \boldsymbol\beta^*)=L(\boldsymbol\omega^*,\boldsymbol\alpha^*, \boldsymbol\beta^*)$
②$\sum\limits_{i=1}^{K}\alpha_i^* g_i(\boldsymbol\omega^*) = 0$, i.e. $\forall i=1\sim K$, either $\alpha_i^*=0$ or $g_i(\boldsymbol\omega^*) = 0$
② is very important; it is also called the KKT condition (complementary slackness).
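
In the running example, condition ② can be verified directly: $\alpha^*=2>0$, and the constraint is active at the optimum, $g_1(\omega^*)=1-1=0$, so

$$
\alpha_1^* g_1(\omega^*)=2\times0=0
$$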