Linear Programming Duality Homework Chart

All our constraints are inequality constraints, greater than or equal.

We have a minimization problem.

All our variables are non-negative.

How do we write the dual?

Minimization becomes Maximization.
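Concretely, here are the primal and the dual being discussed, written out (my notation; e ∋ u means edge e contains vertex u, and the formulas follow the constraints stated throughout this lecture):

```latex
\text{Primal:}\quad \min \sum_{u \in V} w_u x_u
\quad\text{s.t.}\quad x_u + x_v \ge 1 \;\;\forall\, e = uv \in E,
\qquad x_u \ge 0 \;\;\forall\, u \in V.

\text{Dual:}\quad \max \sum_{e \in E} y_e
\quad\text{s.t.}\quad \sum_{e \ni u} y_e \le w_u \;\;\forall\, u \in V,
\qquad y_e \ge 0 \;\;\forall\, e \in E.
```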

What is the row associated to edge e?

It says x_u + x_v is at least 1.

That is, 1 times x_u, 1 times x_v, and 0 times everything else.

So the corresponding row of the matrix has 0s everywhere

except in the column associated to u and in the column associated to v, where it has a 1.

Something like 0 0 0 1 0 0 0 1 0 0.

That is the row associated to edge e.

Now, to find the constraint associated to vertex u,

we need to look at the column of the matrix associated to vertex u.

So let us focus, in that matrix, on the column where vertex u appears.

What we see is that many of the entries are 0.

Which coefficients are non-zero?

A coefficient is non-zero when there is an edge uv, or u with something else.

An edge incident to vertex u.

An edge containing u.

For edge e' = ux, there is a row that has

zeros everywhere except a 1 for u and a 1 for x.

For e'' = uy,

there is a row that has zeros everywhere except a 1 for u and a 1 for y.

And so, if we look at the column associated to vertex u,

what we see is that we have a 1 exactly for

every edge containing u and a 0 otherwise.
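To make this matrix structure concrete, here is a minimal Python sketch (the toy graph and the variable names are mine, not from the lecture) that builds the edge-vertex constraint matrix: one row per edge, one column per vertex, with a 1 exactly where the vertex belongs to the edge.

```python
# Constraint matrix of the primal LP: one row per edge, one column per vertex.
# Toy graph chosen only for illustration.
vertices = ["u", "v", "x", "y"]
edges = [("u", "v"), ("u", "x"), ("u", "y"), ("x", "y")]

col = {name: j for j, name in enumerate(vertices)}
A = [[0] * len(vertices) for _ in edges]
for i, (a, b) in enumerate(edges):
    A[i][col[a]] = 1  # 1 in the column of one endpoint
    A[i][col[b]] = 1  # 1 in the column of the other endpoint

for row in A:
    print(row)
# The column of u has a 1 exactly in the rows of the edges containing u
# (the first three rows) and a 0 in the last row.
```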

We want to minimize the total weight of x.

Initially x is all zeros, so x has very low value, value zero, great.

But X is not feasible.

It does not satisfy all the constraints.

In fact it satisfies no constraints.

We also start with y = 0, and we try to maximize the value of y, so what shall we do?

We're going to try to make x feasible, little by little.

And in order to make x feasible, we're going to use y.

And we get to play with dual variables and

constraints and complementary slackness conditions.

Repeat: take an edge e = uv such that this constraint is violated,

such that x is not feasible, it does not satisfy this constraint,

such that x_u plus x_v is strictly less than 1.

What do we do with this e?

e appears as a constraint in the primal, as a variable in the dual.

We start with the dual.

We start with y_e, and we increase it, increase it as much as we can.

We're trying to maximize here.

We'll increase it until something goes wrong.

Until, in one of these constraints here, the left-hand side increases,

increases, increases, until it hits the barrier w_u or w_v.

Either the constraint for u becomes tight,

meaning we have increased y_e until its left-hand side equals w_u,

or the constraint for v becomes tight.
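In symbols (a restatement of this step in my notation, not verbatim from the lecture), for the chosen edge e = uv the variable y_e is raised by

```latex
\Delta \;=\; \min\!\Big( w_u - \sum_{e' \ni u} y_{e'},\;\; w_v - \sum_{e' \ni v} y_{e'} \Big),
```

which is exactly the amount after which one of the two dual constraints becomes tight.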

In the first case,

we are blocked from increasing because of the constraint associated to u.

We pick vertex u, we put it in the vertex cover.

In other words, we set x_u equal to 1.

We modify the primal variable associated to the dual

constraint that has come into play.

In the second case, same thing with x_v.

x_v is set to 1.

So that is our algorithm.

That is our algorithm playing with both the primal and the dual and

trying to construct simultaneously an x and a y.
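Here is a minimal Python sketch of this primal-dual procedure. It follows the steps just described; the graph representation, variable names, and the handling of ties are my own assumptions rather than anything specified in the lecture.

```python
def primal_dual_vertex_cover(vertices, edges, w):
    """Primal-dual 2-approximation sketch for weighted vertex cover.

    vertices: iterable of vertex names
    edges: list of (u, v) pairs
    w: dict mapping each vertex to its non-negative weight
    Returns (x, y): the 0/1 primal solution and the dual value of each edge.
    """
    x = {u: 0 for u in vertices}     # primal: starts infeasible, x = 0
    y = {e: 0 for e in edges}        # dual: starts feasible, y = 0
    load = {u: 0 for u in vertices}  # sum of y_e over edges containing u

    # Repeat: take an edge whose constraint x_u + x_v >= 1 is violated.
    for (u, v) in edges:
        if x[u] + x[v] >= 1:
            continue  # this constraint is already satisfied
        # Raise y_e until the dual constraint of u or of v becomes tight.
        slack_u = w[u] - load[u]
        slack_v = w[v] - load[v]
        delta = min(slack_u, slack_v)
        y[(u, v)] = delta
        load[u] += delta
        load[v] += delta
        # Put the vertex whose constraint became tight into the cover.
        if delta == slack_u:
            x[u] = 1
        if delta == slack_v:
            x[v] = 1
    return x, y
```

A single pass over the edges suffices in this sketch: processing an edge makes its own constraint satisfied, and x only ever increases, so constraints that were already satisfied stay satisfied.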

Now what can we say about this?

Vector y initially was feasible, it satisfied the constraints of the dual.

When we increase it, as soon as the constraint becomes tight,

we stop increasing y.

So what this means is that we never exceed this constraint.

So what this means is that vector y remains feasible throughout the execution.

Moreover, what can we say about x?

Initially, it's not feasible at all, but then at every iteration of the repeat,

we say: pick an e such that the constraint for e is not satisfied.

Then we do some things, and then in the end, either xu becomes 1, or

xv becomes 1, and at that point, xu plus xv will be equal to 1.

So what this means is at that point, this constraint will be satisfied.

So at every iteration, we satisfy one more constraint for x, and so

eventually, we'll stop repeating because no constraint will be violated,

and then x will satisfy every constraint, x will be feasible.

At the end of the execution, both x and

y will be feasible solutions, and x will be an integer solution.

Everything will be zeros and ones.

So it does correspond to a vertex cover.

So the output will be a vertex cover.

As to whether y is an integer?

Well, it depends.

Depends on the weights, but it doesn't matter.

We don't care.

That is not important here.

So what can we say about this if we think about

the complementary slackness conditions?

If we look at the pair of output values, output vectors x and

y, we can see that for every vertex u in V, either x_u is 0,

as it was initially, or it's not zero.

If it's not zero, why is it not zero?

It's not zero because it got raised to one.

Why did it get raised to one?

It got raised to one because the corresponding constraint became tight.

So whenever x_u is nonzero, the dual constraint

associated to u holds with equality.

In other words, for every vertex u, either x_u is 0 or

the sum, over every edge e containing u, of y_e equals w_u.

This is exactly the primal complementary slackness conditions.
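Written out in the same notation as above (my rendering), the primal complementary slackness condition that the output satisfies is:

```latex
\forall u \in V:\qquad x_u > 0 \;\Longrightarrow\; \sum_{e \ni u} y_e = w_u.
```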

So we've done half of the work of building a pair of solutions that

would be optimal because they both satisfy the complementary slackness conditions.

What about the other half?

If we look at an edge e, either y_e is 0,

maybe we never raised y_e, or y_e was raised.

Raised until when?

Until something blocked it, and then we set either x_u or x_v to 1, and

then later, maybe for some other reason, the other one may also have been raised to 1.

So what we can say is that if y_e is nonzero,

then x_u + x_v is either 1 or 2.

So we don't quite satisfy the dual

complementary slackness conditions, but we satisfy them up to a factor of two.
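In the same notation, the relaxed dual condition we do get (again my rendering of what was just said) is:

```latex
\forall e = uv \in E:\qquad y_e > 0 \;\Longrightarrow\; 1 \le x_u + x_v \le 2.
```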

So the complementary slackness conditions are satisfied up to a factor of two,

and that is the key point that enables us to prove bounds on

the approximation ratio of this primal-dual algorithm.

Now we are ready to go through the formal proof.

And here is the proof.

The value of the output is the sum over u of w_u x_u.

What is it?

We are trying to mimic the proof of the weak duality theorem, the way that proof goes.

Each term is either zero, when x_u is zero, or it is not.

So let's restrict this summation to the nonzero terms:

the sum over u such that x_u is different from 0, of w_u x_u.

For those u, the dual constraint is tight, so we can replace w_u by the sum, over every edge e containing u, of y_e.

Then we can remove this condition, x_u different from 0.

It was only useful for that single step.

So we go back to the sum over every u,

of the sum over all the edges e containing u, of y_e x_u.

Then, thinking back to the proof of weak duality,

the next thing we do is take this double summation and invert it.

The sum over u of the sum over e

becomes the sum over e of the sum over u, of y_e x_u.

So here the inner sum was over every edge e containing u;

now the inner sum is over every vertex u contained in e.

The sum over e of y_e times the sum over u in e of x_u.

And what is this?

If e = uv, this is x_u plus x_v.

If we had dual complementary slackness conditions,

we could say, either y_e is 0 or the sum is equal to 1.

But we don't have that.

We have that up to a factor of 2.

So this is what we use now, that sum is at most 2.

So we have at most the sum over e of 2 y_e.

That is 2 times the sum of the y_e, and now we have the dual objective.

The dual objective is at most OPT, because y is feasible.

And so, the value of the output is at most 2 times OPT.
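Putting the whole chain together in symbols (my rendering of the steps just described):

```latex
\sum_{u} w_u x_u
 \;=\; \sum_{u:\,x_u \neq 0} w_u x_u
 \;=\; \sum_{u:\,x_u \neq 0} \Big(\sum_{e \ni u} y_e\Big) x_u
 \;=\; \sum_{e} y_e \sum_{u \in e} x_u
 \;=\; \sum_{e = uv} y_e (x_u + x_v)
 \;\le\; 2 \sum_{e} y_e
 \;\le\; 2\,\mathrm{OPT}.
```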

Again, this proof relies on an approximate version

of complementary slackness conditions and mimics the proof of weak duality.

What do we have in the end?

We have an algorithm, a primal-dual algorithm for vertex cover, that gives us

an output x, which is a 2-approximation to the best possible vertex cover.

We have not really made any progress as to the quality of our approximation.

So this was just an example to show you how primal-dual algorithms work.

If you look at this, you can see that this algorithm is very pretty, very elegant.

It's elegant because it can be implemented quickly; it's purely combinatorial.

You don't have to use a black box for solving a linear program.

You just use the linear program's primal and

dual as guidelines for a purely combinatorial construction.

In fact, you could rephrase this without talking about duality or

linear programs, but it would still give you a 2-approximation, and

the hidden reason would be duality.

Now let's run this on an example graph.

Pick an edge such that its constraint is violated; for

example, let's pick edge (4,5).

For edge (4,5), we take the dual variable y_e;

it was 0 originally, and we raise it, raise it, raise it, until it hits a barrier.

This barrier is at 4.

That barrier is at 5.

The first one that it hits is 4.

Therefore, y_e is equal to 4.

We take the vertex whose constraint caused this block, this vertex, and

we put it in the cover.

This vertex goes in the cover.

That's what happens during the first iteration.

Now, this edge has been dealt with.

This vertex is in the cover.

Continue, next round of the iteration.

We find some constraint that is not satisfied by x, by the current x.

The current x has this coordinate equal to 1 and all the other coordinates equal to 0.

In particular, for this edge between 5 and 6, the constraint is not satisfied.

So, let's take this edge, for example; any arbitrarily chosen violated constraint will do.

y_e was equal to 0; we raise y_e until it hits a block.

Where is that block?

So, that's what happens.

We can raise this variable all the way to 1, and

then we take the corresponding vertex, vertex 5,

and add it to the vertex cover we are in the process of constructing.

Now we have a dual vector where this one is 1, that one is 4, and the others are 0.

These two vertices are in the cover; the other vertices have their x equal to 0.

Find a new violated constraint.

This one, say, between vertex 2 and vertex 7.

So what do we do with this?

Again, we do the same thing.

y_e is 0.

We raise it, raise it, raise it until it hits a constraint.

2 or 7, which one is hit first?

It's 2, so y_e equals 2.

We stop there and then what caused the block?

This vertex, so we put it in the vertex cover.

And so now we have, one, two, three vertices in the vertex cover.

y_e is 4, 1, 2 for these three edges, and 0 for the others.

At this point, we look for a new violated constraint.

There is none.

All the constraints are satisfied so we stop and

we output the resulting set of vertices.

The vertices of weight 2, 4, 5.

This is a vertex cover.

At the same time, we constructed a dual feasible solution,

with the y_e summing to 4 plus 2 plus 1.

4 plus 2 is 6, plus 1 is 7.

That's the dual solution, a witness to the fact that OPT cannot be better than 7.

Indeed, our output is not optimal.

If we think about it, we could get rid of this 4, replace it by that.

Those three vertices, 1, 2, 5,

would be a valid vertex cover and would have a value of just 8.

But we don't know that.

What we know is, we have an output of value 11.

We have a dual of value 7.

So, we know our value is at most 11/7 of OPT.

We proved the theorem saying it was a factor of 2 in general, but

in the execution we have a certificate, a per instance certificate,

that in this case, proves that our output has value at most 11/7 of optimal.
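As a quick sanity check of the earlier sketch, here is how it could be invoked; the little graph and the weights below are made up for illustration and are not the lecture's example.

```python
vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d")]
w = {"a": 3, "b": 2, "c": 4, "d": 1}

x, y = primal_dual_vertex_cover(vertices, edges, w)
cover = [u for u in vertices if x[u] == 1]
primal_value = sum(w[u] for u in cover)
dual_value = sum(y.values())
print(cover, primal_value, dual_value)
# The dual value is a per-instance certificate: dual_value <= OPT, so the
# cover found is within a factor primal_value / dual_value of optimal.
```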

So, with this example,

we have seen a setting where you can design a primal-dual approximation

algorithm to match the best known approximation factor for vertex cover.

Now what we will do is use similar approaches to try to solve harder or

open problems on NP-hard combinatorial optimization questions.
