User:Realazthat/Notes/Scrap

== Better algorithm ==
If we know something about the size of E, we can improve the complexity. For example, when SAT is converted to DHC, every variable-graph node has at most 3 incoming and 3 outgoing edges: each node is connected to the node on its left, to the node on its right, and possibly to a clause node. The only nodes with more than this are the clause nodes themselves, which have m incoming and m outgoing edges, where m is the number of variables in that clause; m is therefore bounded by the number of variables in the problem. For 3-SAT it is additionally bounded by 3, and for k-SAT by k. Converting from DHC to UHC splits each node, sending the incoming edges to one copy and the outgoing edges to the other. Thus, for a k-SAT problem, <math>|E| \in O(k|G|)</math> instead of <math>|E| \in O(\left|G\right|^{2})</math>.

As we iterate over the different possible values for <math>q</math>, there are incrementally more possibilities for <math>(r,s)</math>. Each selection of <math>(r,s)</math> implies a possible breakpoint for <math>(p,r)</math>.


For each q, and for all the valid <math>(r,s)</math> for that q, <math>T_{q}</math> will track those s that result in no breakpoint for <math>(p,r)</math>.

<math>U_{q}</math> will track those s, valid with q, that result in a breakpoint <math>(p,r)</math> that is a former breakpoint.

<math>V_{q}</math> will track those s, valid with q, that result in a breakpoint <math>(p,r)</math> that is not a former breakpoint.

The nice thing about these three sets is that they can be incrementally calculated: <math>T_{q} = T_{q+1} \cup \left\{C_{q+1}\right\}</math>, and similarly for U and V. Basically, iterate through each possible q (O(|G|) iterations), and for each q determine which sets the two new possibilities for <math>(r,s)</math> go into (O(1) time) and insert them (O(log |G|) time, or O(1) with a hash set or similar structure; we will assume O(1)).
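
A minimal sketch of this incremental bookkeeping, assuming Python hash sets and a hypothetical helper classify() that tests the would-be edge <math>(C_{p},C_{r})</math> against E and F; the helper and the shape of the candidate list are illustrative, not taken from these notes.

<lang python>
# Illustrative sketch: as q advances, the newly valid (r, s) possibilities
# are classified once and inserted into T, U or V in O(1) each.

def classify(C, E, F, p, r):
    """Status of the would-be edge (C[p], C[r]): no breakpoint,
    a former (main) breakpoint, or a brand-new breakpoint."""
    edge = (C[p], C[r])
    if edge in E:
        return "none"                  # (p, r) causes no breakpoint
    return "former" if edge in F else "new"

def update_sets(T, U, V, C, E, F, p, new_candidates):
    """Insert each newly valid (r, s) into the matching set.

    T: s values whose (p, r) is not a breakpoint.
    U: s values whose (p, r) is a breakpoint that is a former one.
    V: s values whose (p, r) is a breakpoint that is not a former one.
    """
    for (r, s) in new_candidates:      # the two new possibilities per q
        kind = classify(C, E, F, p, r)
        if kind == "none":
            T.add(C[s])
        elif kind == "former":
            U.add(C[s])
        else:
            V.add(C[s])
</lang>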



<math>D_{q}</math> is the set of vertices adjacent to <math>C_{q}</math> in G.
<math>
D_{q} = \left\{ v : \left(v,C_{q}\right) \in E \right\}
</math>


All <math>D_{i}</math> can be precalculated into a table D, for all i, in O(|G|^2) time, and each <math>D_{i}</math> can then be retrieved in O(1).
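
A minimal sketch of that precalculation, assuming E is a set of vertex pairs and the graph is undirected (as in the UHC form); the function name is illustrative.

<lang python>
# Build the table D in one pass over E (E can hold up to |G|^2 edges,
# hence O(|G|^2) worst case); afterwards D[v] is an O(1) lookup of v's
# recorded adjacent vertices.
def build_adjacency_table(vertices, E):
    D = {v: set() for v in vertices}
    for (u, w) in E:
        D[u].add(w)
        D[w].add(u)   # undirected: record both directions
    return D
</lang>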


<math>H_{q}</math> is the set of those vertices adjacent to <math>C_{q}</math> in G that do not form a breakpoint in F, the set of former main breakpoints; in other words, all v where <math>(v,C_{q}) \in E \wedge (v,C_{q}) \notin F</math>.
<math>
H_{q} = \left\{ v : \left(v,C_{q}\right) \in E \wedge \left(v,C_{q}\right) \notin F \right\}
</math>

Calculating <math>H_{q}</math> takes at most O(|G|) time, since E can contain <math>|G|^2</math> edges, each <math>D_{q}</math> can record at most O(|G|) adjacent vertices, and the work for <math>H_{q}</math> is linear in <math>\left|D_{q}\right|</math>. However, for k-SAT problems we can assume each node has at most k adjacent nodes, so <math>|D_{q}| \in O(k)</math>, making both the calculation and the size of <math>H_{q}</math> O(k).
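
A minimal sketch of that calculation, reusing the table D above and treating F as a set of edges; testing both orientations of the edge is an assumption about how F is stored.

<lang python>
# Keep the neighbours of C[q] whose edge to C[q] is not a former main
# breakpoint.  With |D[C[q]]| in O(k) this is O(k) work per q.
def compute_H(C, q, D, F):
    cq = C[q]
    return {v for v in D[cq]
            if (v, cq) not in F and (cq, v) not in F}
</lang>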


<math>
m\left(C,F,E,p\right) =
\min_{q \in C} W(C,F,E,q)
</math>

<math>
W(C,F,E,q) \in \left\{0,1,2,3,\infty\right\}
=
\begin{cases}


<math>
W_{0}\left(C,F,E,q\right) \in \left\{0,1,2,\infty\right\}
=
\begin{cases}
0 & \text{if } \left|T_{q} \backslash D_{q}\right| \not= 0 \text{,}\\
1 & \text{else if }
\left(
\left|T_{q} \cap H_{q} \right| \not=0
\vee
\left| V_{q} \backslash D_{q} \right| \not=0
\right) \text{,}\\
2 & \text{else if }


<math>
W_{1}\left(C,F,E,q\right) \in \left\{1,2,3,\infty\right\}
=
\begin{cases}
\left| T_{q} \cap D_{q} \right| \not=0
\vee
\left| U_{q} \backslash D_{q} \right| \not=0
\vee
\left| V_{q} \backslash D_{q} \right| \not=0
\right) \text{,}\\
3 & \text{else if }


<math>
W_{2}\left(C,F,E,q\right) \in \left\{2,3,\infty\right\}
=
\begin{cases}
2 & \text{if } \left|T_{q} \cap H_{q}\right| \not= 0 \vee \left( |V_{q} \backslash D_{q}| \not=0 \right) \text{,}\\
3 & \text{else if }
\left(
\left| U_{q} \cap H_{q} \right| \not=0
\vee
\left| V_{q} \cap D_{q} \right| \not=0
\right) \text{,}\\
\infty & \text{otherwise}
\end{cases}
</math>
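
Each condition above is a nonemptiness test of the form <math>\left|X \cap D_{q}\right| \not= 0</math>, <math>\left|X \cap H_{q}\right| \not= 0</math> or <math>\left|X \backslash D_{q}\right| \not= 0</math> with <math>X \in \left\{T_{q},U_{q},V_{q}\right\}</math>. A minimal sketch of how such tests can be answered in O(k), assuming hash sets and <math>\left|D_{q}\right|, \left|H_{q}\right| \in O(k)</math>; the helper names are illustrative.

<lang python>
def intersects(X, small):
    """True iff |X ∩ small| != 0; O(|small|) = O(k) membership tests."""
    return any(v in X for v in small)

def has_element_outside(X, small):
    """True iff |X \\ small| != 0.

    X \\ small is nonempty exactly when |X| exceeds |X ∩ small|, and the
    intersection size can be counted with O(|small|) = O(k) lookups.
    """
    return len(X) > sum(1 for v in small if v in X)
</lang>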

== Perform an operation ==
New edges:

<math>
N_{e}(C,p,q,r,s) = \left\{ (C_{p},C_{r}), (C_{q},C_{s}), (C_{p-1},C_{q+1}) \right\}
</math>

Broken edges:

<math>
B_{e}(C,p,q,r,s) = \left\{ (C_{p-1}, C_{p}), (C_{q},C_{q+1}), (C_{r}, C_{s}) \right\}
</math>
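
A minimal sketch of <math>N_{e}</math> and <math>B_{e}</math>, assuming the cycle C is a Python list indexed modulo its length and that an edge is stored orientation-free as a frozenset (an assumption; the formulas write edges as ordered pairs).

<lang python>
def new_edges(C, p, q, r, s):
    """The three edges the operation would add: N_e(C,p,q,r,s)."""
    n = len(C)
    return {frozenset((C[p % n], C[r % n])),
            frozenset((C[q % n], C[s % n])),
            frozenset((C[(p - 1) % n], C[(q + 1) % n]))}

def broken_edges(C, p, q, r, s):
    """The three edges the operation would remove: B_e(C,p,q,r,s)."""
    n = len(C)
    return {frozenset((C[(p - 1) % n], C[p % n])),
            frozenset((C[q % n], C[(q + 1) % n])),
            frozenset((C[r % n], C[s % n]))}
</lang>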

Next cycle:

<math>
C^{2}\left(C,M,E_{e},p,q,r,s\right) =
\begin{cases}
C^{2_{pq}}(C,p,q,r,s) & \text{if } b(C_{p-1},C_{q+1}) = 1 \wedge f(C_{p-1},C_{q+1}) = 0 \text{,}\\
C^{2_{pr}}(C,p,q,r,s) & \text{else if } b(C_{p},C_{r}) = 1 \wedge f(C_{p},C_{r}) = 0 \text{,}\\
C^{2_{qs}}(C,p,q,r,s) & \text{else if } b(C_{q},C_{s}) = 1 \wedge f(C_{q},C_{s}) = 0 \text{,}\\
C^{2_e}(C,p,q,r,s,m \in M \backslash (B_{e}(C,p,q,r,s) \backslash N_{e}(C,p,q,r,s) )) & \text{else if } M \backslash (B_{e}(C,p,q,r,s) \backslash N_{e}(C,p,q,r,s) ) \not= \emptyset\\
C^{2_e}(C,p,q,r,s,e \in E_{e} \backslash (B_{e}(C,p,q,r,s) \backslash N_{e}(C,p,q,r,s) )) & \text{else if } E_{e} \backslash (B_{e}(C,p,q,r,s) \backslash N_{e}(C,p,q,r,s) ) \not= \emptyset\\
C & \text{otherwise}
\end{cases}
</math>

<math>

C^{2_{pq}}(C,p,q,r,s) =
\begin{cases}
< [C_{q+1}, C_{r}], [C_{p},C_{q}], [C_{s}, C_{p-1}] > & \text{if } (r < s) \text{,}\\
< [C_{q+1}, C_{s}], [C_{q},C_{p}], [C_{r}, C_{p-1}] > & \text{if } (r > s) \text{,}\\
\end{cases}
</math>



<math>

C^{2_{pr}}(C,p,q,r,s) =
\begin{cases}
< [C_{p},C_{q}], [C_{s}, C_{p-1}], [C_{q+1}, C_{r}] > & \text{if } (r < s) \text{,}\\
< [C_{r}, C_{p-1}], [C_{q+1}, C_{s}], [C_{q},C_{p}] > & \text{if } (r > s) \text{,}\\
\end{cases}
</math>


<math>

C^{2_{qs}}(C,p,q,r,s) =
\begin{cases}
< [C_{s}, C_{p-1}], [C_{q+1}, C_{r}], [C_{p},C_{q}] > & \text{if } (r < s) \text{,}\\
< [C_{q},C_{p}], [C_{r}, C_{p-1}], [C_{q+1}, C_{s}] > & \text{if } (r > s) \text{,}\\
\end{cases}
</math>

<math>
E_{e}^{2}(E_{e},C,p,q,r,s) =
E_{e} \backslash (B_{e}(C,p,q,r,s) \backslash N_{e}(C,p,q,r,s))
</math>


<math>
M^{2}(M,C,p,q,r,s) =
(M \backslash B_{e}(C,p,q,r,s)) \cap N_{e}(C,p,q,r,s)
</math>

<math>
F^{2}(F,M,C,p,q,r,s) =
F \cap M^{2}(M,C,p,q,r,s)
</math>
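
A minimal sketch of these three updates, following the formulas literally and assuming <math>E_{e}</math>, M, F, <math>B_{e}</math> and <math>N_{e}</math> are all Python sets of edges in the same representation as above.

<lang python>
def update_extra_edges(Ee, Be, Ne):
    # E_e^2 = E_e \ (B_e \ N_e)
    return Ee - (Be - Ne)

def update_M(M, Be, Ne):
    # M^2 = (M \ B_e) ∩ N_e
    return (M - Be) & Ne

def update_F(F, M2):
    # F^2 = F ∩ M^2
    return F & M2
</lang>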

== Naive algorithm ==


newedges(C,p,q,r,s):
  # the three edges that the operation would add (N_e)
  return { (C[p-1],C[q+1]), (C[q],C[s]), (C[p],C[r]) }

countbps(E,NE):
  # count how many of the new edges are breakpoints (edges not in E)
  newbps = 0
  for ( ne in NE )
    if ( ne !in E )
      ++newbps
  return newbps


countmainbps(F,E,NE):
  # count how many of the new edges are former main breakpoints
  newmainbps = 0
  for ( ne in NE )
    if ( ne !in E && ne in F )
      ++newmainbps
  return newmainbps



m(C,F,E,p):
  # minimum number of new breakpoints over all valid (q,r,s);
  # indices into C are taken modulo n = |C| (the cycle wraps around)
  min = infinity
  n = |C|
  for ( q in range(p,p-2) )
    for ( s in range( q + 2, p ) )
      r = s - 1
      for ( i in range(2) )             # try (r,s) in both orders
        NE = newedges(C,p,q,r,s)
        if ( |NE| != 0 )
          newbps = countbps(E,NE)
          newmainbps = countmainbps(F,E,NE)

          if ( newbps == 0 || newmainbps > 0 )
            min = newbps < min ? newbps : min

        swap(r,s)
  return min
