RMQ: sorting measures serve other purposes, Adaptive Planar Graph Algorithms

Jérémy Barbay, 2010-06-15 17:07

RMQ: sorting measures serve other purposes, Adaptive Planar Graph Algorithms, by Jeremy <2010-06-15 Tue 14:30-16:00>

***** RMQ
****** Johannes's results (and slides)
[[file:~/ ... ces/bibliography.org::* 28 Optimal Succinctness for Range Minimum Queries Johannes Fischer]]
****** Adaptive intuition: discussion
file:~/ ... geMinimumQueries.org

1 General Runs
---------------

- detect them in linear time
- mark them using o(n) space
- apply the usual structure to the heads of the runs
- when solving a query (see the sketch after this list):
  - access A for the first run intersected (hence this
    example is a /systematic/ data structure),
  - compute the min of the heads of the remaining runs
    intersected, using the 2d-min-heap,
  - return the min of the two.
- given an array with r runs, this yields a /systematic/ data
  structure of 2r+o(n) bits.
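A minimal sketch (Python) of the query procedure above, assuming only a
precomputed list of run-head positions; a plain scan over the head values
stands in for the 2r-bit 2d-min-heap index, so it reads A more often than
the real structure would.

#+BEGIN_SRC python
from bisect import bisect_right

def run_heads(A):
    """Positions where a new non-decreasing run starts (computed in linear time)."""
    return [0] + [i for i in range(1, len(A)) if A[i] < A[i - 1]]

def rmq_with_runs(A, heads, i, j):
    """Answer RMQ(i, j) on A: access A inside the first run intersected, then
    take the min over the heads of the remaining runs intersected (here a
    plain scan over head values stands in for the 2d-min-heap index)."""
    r = bisect_right(heads, i) - 1                      # run containing position i
    first_run_end = heads[r + 1] - 1 if r + 1 < len(heads) else len(A) - 1
    best = min(A[i:min(first_run_end, j) + 1])          # accesses inside the first run
    for h in heads[r + 1:]:                             # heads of the other runs intersected
        if h > j:
            break
        best = min(best, A[h])
    return best

A = [1, 3, 5, 2, 4, 7, 0, 6]
heads = run_heads(A)                   # runs: [1,3,5], [2,4,7], [0,6]
print(rmq_with_runs(A, heads, 1, 6))   # minimum of A[1..6] -> 0
#+END_SRC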

2 Strict Runs
--------------

- Same idea, but there is no need to access A: all elements of a
  strict run are consecutive values, hence the first element of the
  query interval is the answer if and only if the head of the
  strict run it belongs to is smaller than all the other heads
  contained in the query.
- given an array with r strict runs, this yields a /non-systematic/
  data structure of 2r+o(n) bits.

These results are not enough:

- we lose the separation of the index
- they are mere hacks: we should be able to do better!

****** Formalisation: better results
******* Jansson et al's scheme:
- To encode trees while supporting many operations (including those we need for RMQ)
- using $nH^*(T)$ bits, where $H^*(T) = \frac{1}{n}\sum_i n_i \lg\frac{n}{n_i}$
  and $n_i$ is the number of nodes with $i$ children
******* Analysis of the Jansson et al. scheme encoding of the 2d-min-heap tree as a function of \rho
- \rho runs means \rho branches (and leaves) in the 2d-min-heap tree.

- [X] k strict runs
+ k branches starting at the root, which yields
- 1 root node of degree k
- k nodes of degree 0 and
- n-k-1 nodes of degree 1
- [X] k runs
+ k branches starting at arbitrary nodes, which yields
- at most k nodes of degree larger than one and smaller than k,
- exactly k nodes of degree 0 and
- at least n-2k nodes of degree 1
- Using Jansson et al. [Ultra-succinct Representation of
  Ordered Trees] one can represent the 2d-min-heap with fewer
  than 2n bits when the distribution of the node degrees is
  strongly skewed, which happens for instance in the 2d-min-heap
  of an array with \rho long runs (the tree then consists
  only of \rho branches of degree-1 nodes, connected at
  \rho nodes of degree 2); see the entropy computation sketched
  after this list.
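The effect of such a skewed degree distribution can be checked with a small
computation sketch: the formula below is the degree entropy recalled above,
and the degree counts are those listed for k strict runs (n and k are
arbitrary illustrative values).

#+BEGIN_SRC python
import math

def degree_entropy_bits(degree_counts, n):
    """n H*(T) = sum_i n_i lg(n / n_i), the degree entropy of a tree with
    n nodes and n_i nodes of degree i (Jansson et al.'s space bound,
    up to lower-order terms)."""
    return sum(n_i * math.log2(n / n_i) for n_i in degree_counts.values() if n_i > 0)

# Degree counts of the 2d-min-heap of an array with k strict runs, as listed
# above: one root of degree k, k leaves, and n-k-1 nodes of degree 1.
n, k = 1_000_000, 10
counts = {k: 1, 0: k, 1: n - k - 1}
print(degree_entropy_bits(counts, n))   # a few hundred bits
print(2 * n)                            # versus the 2n bits of a plain parentheses encoding
#+END_SRC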



******* Perspective?
- Formal computation
- Other measures? (REM)
- Finer Analysis, other data structure?
***** Planar Graphs
****** Planar Max Flow:
******* Definition
+ Given
  - a /directed/ planar graph G=(V,E),
  - a cost function c:E->R+,
  - s,t \in V
+ Output a valid flow assignment for each edge, maximizing the
  flow from s to t
******* Known Results

- The problem can be solved in O(n\lg n) time; no tight lower bound is known.

- There is an O(n) time algorithm to find the max flow when s and t
  share a face [Hassin].

- The new analysis of the O(n\lg n) algorithm by E/B&K indicates
  an adaptive component, dominated by the initialisation cost of
  O(n\lg n).
****** Adaptive (s,t) MinCut
******* Definition of the Problem:
- Input
- G=(V,E) directed
- c:E->R+
- s,t\in V
- Output
- \pi\subset E s.t.
- \pi is a cut
- \pi minimizes c(\pi)

******* Algorithm

1. In the dual graph G^*, find P = minpath(s^*, t^*);
   denote by *d* its length.
2. Replace each vertex v of P by two disconnected copies v_L and v_R,
   such that
   - the edges of v on the left now connect to v_L, and
   - the edges of v on the right now connect to v_R.
3. Recursively, for the vertex v in the middle of P:
   - find the min path P_v between v_R and v_L, of value cP_v,
   - partition G into the higher part G_h and the lower part
     G_l, and recurse on each.
4. Return the path of smallest cost cP_v.

******* Result
- This algorithm was known [Hassin and Johnson 1985, for the
  undirected case] to compute the (s,t) min cut in time O(n\lg n), and
  to give the value of the max flow.
- The "new" analysis gives a total time of O(n+d+n\lg d).
- (This yields the min cut of the flow, i.e. which edges are saturated
  in a max flow.)

****** Self-Adjusting Top Trees: the shadow of the beginning of an answer.
******* Definition
- Given a forest
  - Choose an arbitrary root
  - partition the forest into edge-disjoint paths
  - deduce from it a unique series of rakes and compresses
  - define (at most) lg n decomposition levels into *unit trees*
    - Each unit tree is a spinal chord with unit trees on the
      sides of each node, recursively encoded.
    - Each unit tree is encoded by a balanced binary search tree
      on the nodes of its spinal chord.
******* Results
- using splaying (cf. splay trees), unit trees can be updated
  dynamically in time O(\lg n) (amortized?)
- using an *expose* operation, one can add a constant value to
  the (potentially more than \lg n) edges of a path in time
  O(\lg n) (amortized) -> useful for max-flow type problems.

******* Perspectives:
1. Self-Adjusting Top Trees are *adaptive*: when the exposed path
   is very similar to the previously exposed path, there are
   fewer changes in the structure: how can we measure this?
2. In the max-flow problem, on worst-case instances which seem
   easy, the exposed paths are very similar... Can we obtain an
   adaptive result here?

Adaptive to order and Structure: results and challenges

Jérémy Barbay, 2010-06-15 17:06

Adaptive to order and Structure: results and challenges <2010-06-10 Thu 14:30-16:00>

1) Input order instance optimality, two problems
1. Improve the algorithm
- find unordered instance optimal algorithms which perform
better when the input is almost ordered (for various
measures).
- a trivial approach is to simulate two algorithms in
  parallel and stop as soon as one finishes => a dovetailing
  algorithm (see the sketch after this list)
- a cleverer approach is to combine the two algorithms into
  one (e.g. \rho-adaptive with \sigma-adaptive)
2. Analyze the algorithm
- the algorithms seem to be only as good as their analysis:
  - fixing only n does not allow us to distinguish between
    the various algorithms.
  - similarly, fixing only \sigma (or h) and \rho does not
    allow us to distinguish the naive algorithm which
    simulates two distinct algorithms in parallel from a more
    clever one which combines them.
- there are (at least) two solutions:
  - define a better measure of difficulty
  - define another "model", such as a competitive ratio between
    algorithms, inspired by the work of Dorrigiv and
    Lopez-Ortiz on online algorithms.
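A minimal dovetailing sketch in Python; the two step generators are toy
stand-ins (not actual \rho- or \sigma-adaptive sorting algorithms), the point
being only the control structure, which runs both candidates in lockstep and
pays at most twice the cost of the faster one.

#+BEGIN_SRC python
def scan_then_sort_steps(a):
    """Toy candidate 1: scan for an adjacent inversion; fall back to a full sort."""
    for i in range(len(a) - 1):
        if a[i] > a[i + 1]:
            yield ('done', sorted(a))
            return
        yield None                      # one unit of work
    yield ('done', list(a))             # already sorted

def insertion_sort_steps(a):
    """Toy candidate 2: one insertion per step."""
    a = list(a)
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
        yield None
    yield ('done', a)

def dovetail(gen1, gen2):
    """Alternate one step of each candidate and return the first answer produced."""
    for step1, step2 in zip(gen1, gen2):
        for step in (step1, step2):
            if step is not None:
                return step[1]

print(dovetail(scan_then_sort_steps([1, 2, 3, 4]),
               insertion_sort_steps([1, 2, 3, 4])))
#+END_SRC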

2) Improving the algorithm as a function of \rho and \sigma
   1. Sorting
      - Roughly outlined already by Munro and Spira: even though they
        did not consider adaptive algorithms, they considered the
        (adaptive) complexity of insertion sort and merge sort as a
        function of \sigma, and it is easy to show that the
        adaptive versions of insertion sort and merge sort are at
        once adaptive to \rho and \sigma (see the merge-sort sketch
        after this list).
   2. Planar Convex Hull
      - The equivalent of Runs can be defined in several ways. I
        define it in terms of non-convex paths.
      - From there, using linear merging, we obtain an algorithm
        similar to adaptive merge sort.
      - Making it adaptive to h (i.e. \sigma) is only a matter of
        interleaving one of Chan's gift-wrapping steps between
        two merge steps.
   3. 3d Convex Hull
      - The equivalent of Runs might be defined in several ways. I
        define it in terms of Jack Snoeyink's linear encoding of
        triangulations (which encodes a 3d convex hull).
      - From there, using Kirkpatrick's linear merging, we obtain
        an algorithm similar to adaptive merge sort.
      - Similarly to 2d, interleaving steps yields an algorithm
        adaptive to both \sigma and \rho (and more: to the
        distribution of the sizes of the runs, or to the
        distribution of the scopes of the points in the
        certificate).
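A minimal sketch (Python) of the \rho-adaptive half of the sorting item
above: detect the maximal ascending runs in linear time, then merge them
round by round, for O(n \lg \rho) time overall. It does not include the
interleaved gift-wrapping steps that the convex-hull variants would add.

#+BEGIN_SRC python
from heapq import merge

def ascending_runs(a):
    """Split a into its maximal ascending runs, in linear time."""
    runs, start = [], 0
    for i in range(1, len(a)):
        if a[i] < a[i - 1]:
            runs.append(a[start:i])
            start = i
    runs.append(a[start:])
    return runs

def runs_adaptive_sort(a):
    """Merge the rho runs pairwise, round by round: lg(rho) rounds of O(n) each."""
    if not a:
        return []
    runs = ascending_runs(a)
    while len(runs) > 1:
        runs = [list(merge(runs[i], runs[i + 1])) if i + 1 < len(runs) else runs[i]
                for i in range(0, len(runs), 2)]
    return runs[0]

print(runs_adaptive_sort([3, 4, 9, 1, 2, 8, 5, 6, 7]))  # three runs, two merge rounds
#+END_SRC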

3) Improving the analysis (more vague)

1. Certificates.
- each correct algorithm must be able to print some
"certificate" that its answer is correct. Let's consider
this as the real output of the algorithm.
- MAYBE one can show that the \rho-adaptive and the \sigma-adaptive
  algorithms each use a different "grammar", and use this to
  define the "shortest certificate" over the union of the
  two grammars.
- Then one would have a new measure of difficulty for the problem.
- But one would have to justify why one does not consider yet another
  grammar/vocabulary, in particular one corresponding to other
  orders (arbitrary subsequences, SUS or SMS, regular
  subsequences).

2. Encoding
   - Each algorithm defines an encoding of the certificate, and
     in essence of the instance.
   - An algorithm which uses a special order will have to encode
     it, which is not taken into account in the current models,
     but would be taken into account in a model counting the
     space used by the algorithm along with the space of the
     certificate of its execution.

3. Competitive/Collaborative Ratio
   - Can we show a bound on the ratio between
     - the complexity of the naive dovetailing algorithm, and
     - the space taken by the algorithm + certificate?


4) Open Problems

- Algorithms that are both adaptive to the order (for measures other
  than Runs) and instance optimal.
- A general *Synergy* technique to combine any adaptive algorithms,
  finer than the *dovetailing* technique.
- An analysis technique for this.

From sorting to edit-clustering

Jérémy Barbay, 2010-06-08 18:26

From sorting to edit-clustering, by Jeremy <2010-06-08 Tue 14:30-16:00>

1) Adaptivity and Compression

1. Adaptive Searching and Compressed Integers
- Doubling Search [Bentley and Yao] (see the sketch after this list)
- Gamma codes
- Experimental Design
- Succinct Data-Structures [Carlos]

2. Adaptive Sorting and Compressed Permutations
- Runs [STACS, Barbay Navarro]
- REM [In Progress, Francisca]
- Local Insertion and ShellSort [Francisca]

3. Binary Merging and Set Compression
- Systematic [Avila and Laber]
- Compressed Succinct Data Structures? [Carlos]
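A minimal sketch of the doubling search [Bentley and Yao] listed above:
locating an element at position p costs O(\lg p) comparisons, mirroring the
O(\lg p) bits of its Elias gamma code, which is presumably what ties the two
items together.

#+BEGIN_SRC python
from bisect import bisect_left

def doubling_search(a, x):
    """Unbounded search: double an upper bound on the position of x
    (about lg p steps), then binary search inside the last interval."""
    hi = 1
    while hi < len(a) and a[hi - 1] < x:
        hi *= 2                                          # exponential phase
    lo = hi // 2
    return bisect_left(a, x, lo, min(hi, len(a)))        # binary phase

a = list(range(0, 1000, 3))      # 0, 3, 6, ...
print(doubling_search(a, 27))    # -> position 9, found with O(lg 9) comparisons
#+END_SRC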

2) Parameterized Complexity and Graph Compression

Can we do the same thing with other, more difficult problems?

* *Vertex Cover*
- Crown reduction = Independent Sets.
- Problem: adding an edge does not change the final
result (nor the kernelisation), hence the signature of
the execution won't give a good encoding.

* *Edge Clique Cover*
- Cover the edges of the graph with a minimum number of cliques
- parameter k: the size of the cover
- List of rules (to be completed)
  1. if (u,v)\in E s.t. N[u]=N[v], replace u,v in V by u
     (see the twin-rule sketch after this list)
- Let K be the size of the resulting graph
- Finally: brute force if K<2^k, reject otherwise.

* *Edge BiClique Cover*
- Cover the edges of the graph with a minimum number of bicliques
- parameter k: the size of the cover
- List of rules (to be completed)
  1. if u,v\in V s.t. N[u]=N[v], replace u,v in V by u
- Let K be the size of the resulting graph
- Finally: brute force if K<2^k, reject otherwise.
- 3^k-kernelization algorithm
- www.kr.tuwien.ac.at/ ... /papers/biclique.pdf

* *Edge Clique Partition* and *Edge BiClique Partition*
- same as Edge Clique Cover and Edge BiClique Cover, but the
cliques or bicliques must be disjoint.
- 2^(k+1) for BiClique Partition
crpit.com/confpapers/CRPITV77Mujuni.pdf

* *Cluster Editing*
- How many edges must be added or removed in order to yield a
set of clusters (i.e. a "union of disjoint cliques")
www.springerlink.com/ ... 6gt72h5/fulltext.pdf

* *Vertex Feedback Edge Set*
- Can we remove the edges incident to at most k vertices from the
  graph so as to reduce it to an acyclic graph?
- There is a kernel of size 4k with the following rules:
1. Remove all degree-one vertices.
2. For any two neighboring degree-two vertices that do not
have a common neighbor, contract the edge between them.
- The reference is
Jiong Guo, Rolf Niedermeier, Sebastian Wernicke:
Fixed-Parameter Tractability Results for Full-Degree Spanning
Tree and Its Dual. IWPEC 2006: 203-214
www3.interscience.wiley.com/ ... ct?CRETRY=1&SRETRY=0
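A minimal sketch (Python, plain adjacency sets) of the twin rule stated above
for Edge Clique Cover and Edge BiClique Cover: whenever two vertices u, v
satisfy N[u] = N[v], keep only one of them. It only applies the rule
repeatedly; the parameter bookkeeping of a full kernelization is ignored.

#+BEGIN_SRC python
def closed_neighborhood(adj, u):
    return adj[u] | {u}

def merge_twins(adj):
    """Repeatedly drop one vertex of every pair with identical closed
    neighborhoods; returns the reduced adjacency dictionary."""
    changed = True
    while changed:
        changed = False
        seen = {}
        for u in list(adj):
            key = frozenset(closed_neighborhood(adj, u))
            if key in seen:                  # u is a true twin of seen[key]: drop u
                for w in adj.pop(u):
                    adj[w].discard(u)
                changed = True
            else:
                seen[key] = u
    return adj

# Toy example: K4, in which all four vertices are pairwise true twins.
adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
print(merge_twins(adj))   # -> a single vertex remains
#+END_SRC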


3) Discussion

- New measures of "compressibility" for various data structures
- Helping to bridge practice and theory, from the theoretical side.
- better understanding of practical instances
- There are problems where this relation is less useful:
  all problems where the "certificate" is of little
  interest. For instance:
- Convex Hull
- (...)

Review of the First Phase of the Course and Evaluation of the Students' Talks

Jérémy Barbay, 2010-05-04 22:56

Review of the First Phase of the Course
:LOGBOOK:
- State "DONE" from "" [2010-05-04 Tue 17:39]
:END:
<2010-05-04 Tue 14:30-16:00>
1) What did we see up to now?
* A list of topics
- hanoi tower and disk pile
- searching and integer coding
- sorting and permutation coding
- convex hull and variants
- union
- intersection and pattern matching
- Vertex Cover
* A list of techniques
- parametrization of the analysis:
- output sensitivity
- adaptive (analysis of) algorithms
- parameterized complexity
- instance optimality
- kernelisation
* Some stories and their morals
- Convex Hull: O(n^2)
  - which was O(nh)
  - which was incomparable with O(n\lg n)
  - which was solved by O(n\lg h)
  - which was improved by O(n H_v)
  - which was made direction independent by O(nH)
  - which is still incomparable with O(n\lg runs)
- Sorting: many measures of difficulty
  - some reducible to one another,
  - some incomparable.
- Doubling Search: useful at many levels
  - searching in a sorted array
  - searching in a parameter's domain
2) What I want for the second phase
* To take it easy (:+\)
* Each of you to study one topic,
- hopefully close to your own center of interest
- sufficiently in detail to be able to rewrite it
- and potentially improve/adapt the writing
- (and the result, if it happens)
* I will be available to help each of you individually
(I won't take it that easy)

3) What I plan for the third phase
* Each of you to study one direction in which to extend or
apply the topic you studied in the previous phase.
- in a direction chosen either on your own or with me.
- in a SHORT TIME: the goal is as much to quickly eliminate
  promising but unfruitful perspectives as to obtain results.
* I will be available to help each of you individually (even
  when the research topic is unfruitful or far from my own).
* If you think the topic is not promising, your talk should
  convincingly explain why: that can prove harder than
  showing that a result is promising!



***** Student Evaluations of the talks
****** Modus Operandi
* Form of the talk
  - the professor and the students fill in a form and give one grade in {1,...,7},
    obtained by summing the points of the first part.
  - the average of those grades gives the /TalkForm/ grade of the speaker.
* Content of the talk
  - the professor reads the content part of each form,
  - gives it a grade in {1,...,7},
  - and enters it in a square matrix indexed by the two students (speaker and auditor);
  - the average of these grades gives the /TalkContent/ grade of the speaker.
* Attention of the auditor
  - at the end of the phase, the average of the /TalkContent/ grades of a
    student yields his /auditor/ grade.
****** Evaluation form for the talk

1) Form of the talk
1. [ ] Timing
2. [ ] Verbal expression
3. [ ] Use of slides/board
4. [ ] Use of Examples
5. [ ] Handling of Questions
6. [ ] Animation (wake up the attendance)

2) Content of the talk
1. [ ] Title of the talk:
2. [ ] Plan
3. [ ] Hypothesis
4. [ ] Results
5. [ ] Detailed example of the technique involved
6. [ ] Vision (speaker has a sense of where this is going)

****** Evaluation Form for the Talk (Spanish version)

1) Presentation technique
   1. [ ] Timing
   2. [ ] Expression
   3. [ ] Use of the board and the slides.
   4. [ ] Use of examples
   5. [ ] Handling of the questions
   6. [ ] Animation (whether the speaker keeps the audience awake)

2) Content of the talk
   1. [ ] Title
   2. [ ] Plan
   3. [ ] Hypothesis
   4. [ ] Results
   5. [ ] Detailed example of the technique used.
   6. [ ] Vision (whether the speaker has a sense of what comes next)

Intersection, threshold set and pattern Matching

Jérémy Barbay, 2010-05-04 22:55

Intersection, threshold set and pattern Matching, by Jeremy
:LOGBOOK:
- State "DONE" from "" [2010-05-01 Sat 18:04]
:END:
<2010-04-29 Thu 14:30-16:00>

1) Intersection and Applications
- k=2, Intersection = Union
- Conjunctive Queries
- Pattern Matching in XML
2) Algorithms of Intersection and Measures of Difficulty
- SvS (see the sketch after this outline)
- Adaptive
- Small Adaptive
- Sequential
- Randomized
3) Relations between the measures
- Sequential vs Randomized
- Sequential <> SvS <> Adaptive <> Small Adaptive
- Kirkpatrick Dovetailing searching algorithm?
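A minimal sketch of the SvS intersection algorithm listed above, assuming the
sets are given as sorted lists: each element of the smallest set is checked
in every other set by binary search (a refined version would use doubling
search and shrink the remaining search ranges).

#+BEGIN_SRC python
from bisect import bisect_left

def contains(s, x):
    """Binary search for x in the sorted list s."""
    i = bisect_left(s, x)
    return i < len(s) and s[i] == x

def svs_intersection(sets):
    """Sort the sets by size, then keep the elements of the smallest set
    that appear in all the others."""
    sets = sorted(sets, key=len)
    return [x for x in sets[0] if all(contains(s, x) for s in sets[1:])]

print(svs_intersection([[1, 3, 7, 9],
                        [3, 4, 7, 8, 10, 12],
                        [0, 3, 5, 7, 11, 13, 14]]))   # -> [3, 7]
#+END_SRC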

Union algorithms, for $k=2$ and larger, by Jeremy

Jérémy Barbay, 2010-04-27 16:48

**** Union algorithms, for $k=2$ and larger, by Jeremy
<2010-04-27 Tue 14:30-16:00>

1) Union algorithms for $k=2$

- Hwang Lin's algorithm
- DLM/Alternation algorithm (a block-merge in this spirit is sketched at the
  end of this outline)
- Relation to compression of sets

2) Union algorithms for $k>2$

- SVS (trivial)
- DLM Gap encoding of certificates: LB of $\Omega(C)$
(works on B-Trees too)
- DLM Algorithm: UB of $O(C)$

3) Trying to generalize it to the Intersection

- Certificate of an Intersection $C$
- Lower bound for Intersection $\Omega(C)$
- Other Lower Bound for Intersection $\Omega(kG)$
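Returning to the merging of two sets above, a small sketch in the spirit of
the alternation-adaptive algorithms (not Hwang and Lin's exact algorithm):
each maximal block of consecutive output elements coming from one list is
located by a single binary search, so the number of comparisons grows with
the number of alternations (times a logarithm) rather than with n+m; copying
the output is of course still linear.

#+BEGIN_SRC python
from bisect import bisect_right

def block_merge(a, b):
    """Merge two sorted lists block by block: one comparison plus one binary
    search per maximal block coming from the same list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            k = bisect_right(a, b[j], i)    # end of the block of a-elements <= b[j]
            out.extend(a[i:k])
            i = k
        else:
            k = bisect_right(b, a[i], j)    # end of the block of b-elements <= a[i]
            out.extend(b[j:k])
            j = k
    out.extend(a[i:])
    out.extend(b[j:])
    return out

print(block_merge([1, 2, 3, 10, 11], [4, 5, 6, 7]))
# -> [1, 2, 3, 4, 5, 6, 7, 10, 11]: three blocks, two binary searches plus the tail
#+END_SRC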

***** References

@Article{optimalMergingOf2ElementsWithNElements,
  author  = {F. K. Hwang and S. Lin},
  title   = {Optimal merging of 2 elements with n elements},
  journal = {Acta Informatica},
  year    = {1971},
  volume  = {1},
  pages   = {145--158}
}

@Article{aSimpleAlgorithmForMergingTwoDisjointLinearlyOrderedSets,
  author  = {F. K. Hwang and S. Lin},
  title   = {A simple algorithm for merging two disjoint linearly ordered sets},
  journal = {SIAM Journal on Computing},
  year    = {1972},
  volume  = {1},
  number  = {1},
  pages   = {31--39}
}


@Article{significantImprovementsToTheHwangLinMergingAlgorithm,
  author  = {G. K. Manacher},
  title   = {Significant improvements to the Hwang-Lin merging algorithm},
  journal = {JACM},
  year    = {1979},
  volume  = {26},
  number  = {3},
  pages   = {434--440}
}

@article{twoProbabilisticResultsOnMerging,
author = {Wenceslas Fernandez de la Vega and Sampath Kannan and Miklos Santha},
title = {Two probabilistic results on merging},
journal = {SIAM J. Comput.},
volume = {22},
number = {2},
year = {1993},
issn = {0097-5397},
pages = {261--271},
doi = {10.1137/0222019},
publisher = {Society for Industrial and Applied Mathematics},
address = {Philadelphia, PA, USA},
}

@article{averageCaseAnalysisOfTheMergingAlgorithmOfHwangAndLin,
author = "Wenceslas Fernandez de la Vega and Alan M. Frieze and Miklos Santha",
title = "Average-Case Analysis of the Merging Algorithm of Hwang and Lin",
journal = "Algorithmica",
volume = "22",
number = "4",
pages = "483-489",
year = "1998",
url = "citeseer.ist.psu.edu/274734.html"
}


@InProceedings{dlm,
author = {Erik~D.~Demaine and Alejandro~L{\'o}pez-Ortiz and J.~Ian~Munro},
title = {Adaptive set intersections, unions, and differences},
booktitle = {Proceedings of the $11^{th}$ ACM-SIAM Symposium on Discrete
Algorithms ({SODA})},
pages = {743--752},
year = {2000}
}

Kernelisation, by Serge Gaspers

Jérémy Barbay, 2010-04-08 16:47

Kernelisation, by Serge Gaspers
[2010-04-22 Thu 14:30-16:00]

I. Recall
a. FPT
b. W[1]
c. Vertex Cover

II. Introduction to Kernelization
a. Kernelization Algorithm: definition
b. Example: an O(k^2) kernel for Vertex Cover
c. A problem is FPT iff it has a kernel

III. Crown Decomposition
a. Crown Decomposition: definition and example
b. Crown Reduction for Vertex Cover
c. Vertex Cover has a 4k kernel

Parameterized Complexity, by Serge Gaspers

Jérémy Barbay, 2010-04-08 16:47

**** Parameterized Complexity, by Serge Gaspers
[2010-04-20 Tue 14:30-16:00]

I. Introduction to Parameterized Complexity
a. Motivation
b. Examples of parameters
c. Complexity classes (P \subseteq FPT \subseteq W[1] \subseteq W[2])

II. FPT Algorithms
a. Vertex Cover: definition and example
b. General Argument by Graph Minors
c. Algorithms for Vertex Cover

III. W-Hardness reductions
a. Definition of classical vs. parameterized hardness reduction
b. Induced Biclique: definition and example
c. Induced Biclique is W[1]-hard

Convex Hull and Variants: Output sensitivity

Jérémy Barbay, 2010-04-08 14:13

**** Convex Hull and Variants: Output sensitivity
<2010-04-08 Thu 14:30-16:00>

1) What we saw
- adaptive analysis
- papers to read
- homework: order-adaptive convex hull and max vector?

2) Planar Convex Hull: the odyssey

1. Gift Wrapping $O(nh) \subset O(n^2)$
- (choose tangent of minimal angle)
2. Graham's scan $O(\mbox{sorting}(n)+n) \subset O(n\lg n)$
3. Kirkpatrick and Seidel $O(n\lg h) \subset O(n\lg n)$
4. Vertical Partitioning $O(n H_v) \subset O(n\lg h)$
5. Instance Optimality $O(n H) \subset O(nH_v)$
- $\Theta(nH)$ for restricted class of algorithms

3) Extensions and Open Pbs

1. Summary

2. Extensions
- Sorting, max vector (...)
- 3D Convex Hull, reporting/counting problems in 2D

3. Open Problems:
   - a more general class of algorithms
   - Input Order adaptivity + Instance Optimality?



***** References:
@article{optimalOutputSensitiveConvexHullAlgorithmsInTwoAndThreeDimensions,
author = "Timothy Chan",
title = "Optimal Output-Sensitive Convex Hull Algorithms in Two and Three Dimensions",
journal = "GEOMETRY: Discrete \& Computational Geometry",
volume = 16,
year = 1996,
url = "citeseer.ist.psu.edu/ ... e/chan96optimal.html"
}

@article{ chan01dynamic,
author = "Timothy M. Chan",
title = "Dynamic planar convex hull operations in near-logarithmic amortized time",
journal = "Journal of the ACM",
volume = "48",
number = "1",
pages = "1--12",
year = "2001",
url = "citeseer.ist.psu.edu/ ... e/chan00dynamic.html"
}


@InProceedings{instanceOptimalGeometricAlgorithmsFOCS,
author = {Peyman Afshani and Jeremy Barbay and Timothy Chan},
title = {Instance-optimal geometric algorithms},
booktitle = {Proc. 50th IEEE Symposium on Foundations of Computer Science (FOCS)},
year = 2009,
note = {2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan.pdf},
annote = {Preliminary version on www.cs.uwaterloo.ca/~tmchan/abc_4_09.ps},
abstract = {We prove the existence of an algorithm A for computing 2-d or 3-d convex hulls that is optimal for every point set in the following sense: for every set S of n points and for every algorithm A' in a certain class C, the maximum running time of A on input s_1,...,s_n is at most a constant factor times the maximum running time of A' on s_1,...,s_n, where the maximum is taken over all permutations s_1,...,s_n of S. In fact, we can establish a stronger property: for every S and A', the maximum running time of A is at most a constant factor times the average running time of A' over all permutations of S. We call algorithms satisfying these properties instance-optimal in the order-oblivious and random-order setting. Such instance-optimal algorithms simultaneously subsume output-sensitive algorithms and distribution-dependent average-case algorithms, and all algorithms that do not take advantage of the order of the input or that assume the input is given in a random order.

The class C under consideration consists of all algorithms in a decision tree model where the tests involve only multilinear functions with a constant number of arguments. To establish an instance-specific lower bound, we deviate from traditional Ben-Or-style proofs and adopt an interesting adversary argument. For 2-d convex hulls, we prove that a version of the well known algorithm by Kirkpatrick and Seidel (1986) or Chan, Snoeyink, and Yap (1995) already attains this lower bound. For 3-d convex hulls, we propose a new algorithm.

To demonstrate the potential of the concept, we further obtain instance-optimal results for a few other standard problems in computational geometry, such as maxima in 2-d and 3-d, orthogonal line segment intersection in 2-d, finding bichromatic L_\infty-close pairs in 2-d, off-line orthogonal range searching in 2-d, off-line dominance reporting in 2-d and 3-d, off-line halfspace range reporting in 2-d and 3-d, and off-line point location in 2-d.
}

}

From Sorting to Convex Hull

Jérémy Barbay, 2010-04-07 11:59

**** From Sorting to Convex Hull
<2010-04-06 Tue 14:30-16:00>

1. Sorting Algorithms:

- Review/Summary of what we saw.

2. The Sorting Problem

- Lower Bounds for Problems
- Optimality of Algorithms
- Comparison of Difficulty Measures
- Fixed Problem
- Variable Problem (Pb,m)

3. Sorting in higher dimensions
   - not well defined: many interpretations are possible
   - *Max Vector* in the plane
     - Given a set of points, find the set of dominant points.
     - a point v is *dominated* by a point w if
       v.x < w.x and v.y < w.y
     - a *dominant* point is not dominated by any other point.
     - There exists a simple algorithm (sketched at the end of this post):
       - sort by x: O(n lg n)
       - "wrap around": O(n)
       - total complexity: O(n lg n)
   - *Convex Hull* in the plane
     - Given a set of points, find the smallest convex polygon
       containing those points
     - The same simple algorithm as for Max Vector applies,
       with complexity O(n lg n)

4. HOMEWORK:

- What could be good measures of difficulty for MaxVector and
Convex Hull?
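A minimal sketch of the simple Max Vector algorithm described above (sort by
x, then one linear "wrap around" sweep), assuming for simplicity that the
x-coordinates are distinct.

#+BEGIN_SRC python
def max_vector(points):
    """Sort by decreasing x, sweep once keeping the largest y seen so far:
    a point is dominant iff its y exceeds that running maximum
    (assumes distinct x-coordinates)."""
    dominant, best_y = [], float('-inf')
    for x, y in sorted(points, key=lambda p: -p[0]):
        if y > best_y:        # no point to its right is higher, so nothing dominates it
            dominant.append((x, y))
            best_y = y
    return dominant

print(max_vector([(1, 5), (2, 3), (4, 4), (3, 1), (5, 2)]))
# -> [(5, 2), (4, 4), (1, 5)]
#+END_SRC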