 
OVERALL RATING: 7 (strong accept) 
REVIEWER'S CONFIDENCE: 4 (expert)  
----------------------- REVIEW --------------------

This is an excellent (yet extremely long) paper that should definitely be accepted. I'll summarize the contributions and suggest a way to choose what should be published:

Contributions:
1) Combining the Makinen-Navarro (MN) RLE scheme with Golynski's (G) newer technique to achieve O(1) select and O(log log s) rank instead of the O(log s) rank/select of MN. This part is interesting but rather trivial, as the MN solution can be built over any rank/select method.

2) Dynamizing the previous scheme. This is done by applying the Makinen-Navarro (MN2) dynamization scheme to the structures of 1) (bitmaps L and L', etc.). This is again interesting but relatively trivial.

3) A dynamic scheme for plain (not RLE) text on small alphabets
(s = O(log n)), based on improving MN2's extension to larger alphabets. This is excellent work.

4) A dynamic scheme for plain text on large alphabets, obtained by splitting the long symbols into sequences of log log n bits. This is excellent work.
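For committee members less familiar with the operations discussed above, here is a minimal sketch (my own illustration, not the authors' code; the function names and the chunk width b are my choices) of the rank/select queries and of splitting a wide symbol into b-bit chunks, with b playing the role of log log n in contribution 4:

```python
import math

def rank(seq, c, i):
    """Occurrences of symbol c in seq[0:i]. Naive O(i) scan; the
    paper's structures answer this in O(log log s) or better."""
    return sum(1 for x in seq[:i] if x == c)

def select(seq, c, j):
    """Position of the j-th occurrence (1-based) of c in seq, or -1."""
    count = 0
    for pos, x in enumerate(seq):
        if x == c:
            count += 1
            if count == j:
                return pos
    return -1

def split_symbol(sym, s, b):
    """Split a symbol from an alphabet of size s into ceil(log2(s)/b)
    chunks of b bits each, most significant chunk first."""
    width = max(1, math.ceil(math.log2(s)))
    nchunks = math.ceil(width / b)
    return [(sym >> (k * b)) & ((1 << b) - 1)
            for k in range(nchunks - 1, -1, -1)]

def join_symbol(chunks, b):
    """Inverse of split_symbol."""
    sym = 0
    for ch in chunks:
        sym = (sym << b) | ch
    return sym
```

Splitting reduces a query on an alphabet of size s to a few queries on an alphabet of size 2^b, which is the essence of the large-alphabet reduction as I read it.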


Although the authors present the contributions in the order above, I think that 3 and 4 are by far more relevant and deeper than 1 and 2. I suggest that the authors focus the paper on these results, as 1 and 2 are very easy corollaries. I think the most relevant parts of the paper can fit in the allowed space.

Detailed comments:

- A dictionary manages a set, which is not quite what rank/select structures do on general alphabets (on binary alphabets this is fine because one can view the bitmap as a set). I thus suggest not calling this structure a dictionary.
- Intro: O(n log s) is not the same space as the text, just the same order.
- Table 1: GGV's extra space is o(n log s), not o(n), I think.
- MN shows that n' is nHk + o(n): say that this holds for k <= ...
   Also, I think the nHk bound with constant 1 is obtained in the journal version of [14] (Nordic Journal of Computing) and not in the CPM version.
- Page 2, dynamic RLE: you can summarize this whole paragraph with the complexity O(log n (1 + log s / log log n)). Actually, I think this is the real complexity for large alphabets too; see later.
- Page 2, static plain: no contribution of yours here, right? You should make that clear.
- k-ary wavelet trees were also explored in [5].
- Table 2: all of the "Binary" part is a special case of other parts of the table; you could safely remove it.
- Page 4: "(So does MN)" is unclear. I would rephrase it as "... simply extends MN's RLFM to support ... m-length pattern within the same space complexity nHk..."
- Page 4, about COUNT and PSI, etc.: please see a TechReport follow-up of [15]: ftp://ftp.dcc.uchile.cl/pub/users/gnavarro/dynamic.ps.gz (TR/DCC-2006-10, Veli Makinen and Gonzalo Navarro, Dynamic Entropy-Compressed Sequences and Full-Text Indexes, July 2006). I think nothing there affects your paper, but it is interesting to keep in mind.
- Sec 2: it seems that your "only assumption" is that log s = o(log n). This is not coherent with your claim that your structures work for any alphabet while some other results hold only for "some" sizes, as those "some" sizes cover s = O(n^beta), which exceeds your restriction!
- Sec 2, para 2, line 1: where a run is $l_i$ consecutive...
- Page 5, line 5: nHk -> nHk + s^k.
- Duality Psi - BWT: very interesting. Maybe it deserves more publicity in the paper.
- IMPORTANT: Pages 6-7: you repeat the whole solution of [14], and even admit it! Given the space problems, there is no point in this. Use [14] directly, state the properties instead of proving them, and you will gain more than one page. Actually, Theorem 2 is a trivial combination of MN and Golynski, so you could write it in a few lines. I see that you need some details for the dynamic part, but even so you should not reinvent the wheel: it takes less space to reference it.
- Theorem 3: according to Table 2, this is exactly what RRR or HSS obtain. So what is the point of Section 4? I'm confused. This is essential, as otherwise there is little point in Section 4.
- Theorem 4: where did the O(log n) time go? I'd say your result should be O(log n (1 + log s / log log n)).
- Sec 4.1: it is not clear that insert-over and delete-over just update global statistics and not the data itself.
- Sec 4.1.2, line 5: T[j] -> T[i] ? 
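As a side note for the committee, the run notation referred to in my Sec 2 comment above (a run is $l_i$ consecutive occurrences of the same symbol) can be sketched as follows; this is my own illustration, not code from the paper:

```python
def rle_encode(text):
    """Encode text as a list of runs (symbol, l_i), where a run is
    l_i consecutive occurrences of the same symbol."""
    runs = []
    for c in text:
        if runs and runs[-1][0] == c:
            runs[-1] = (c, runs[-1][1] + 1)
        else:
            runs.append((c, 1))
    return runs

def rle_decode(runs):
    """Inverse of rle_encode: expand each run back to l_i copies."""
    return "".join(c * l for c, l in runs)
```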

----------------- REMARKS FOR PROGRAMME COMMITTEE ------------

There should be a condition imposed on the authors about what they must include in the final version, as clearly they cannot include everything. My suggestion in the public comments could serve as a proposal for this.


