Talk:Suffix tree

::: I thought that seeking a definition for a normally well-defined algorithm was outside the scope of RC, but fair enough, I don't mind talking about it. From what I understand, your example is incomplete. The branch for "n", for instance, should really have been "na", since every suffix that continues past that "n" does so with an "a". Somewhere in the definition there has to be a rule stating that all edge labels have to be the longest possible. I don't know whether that is clear in the Wikipedia article; maybe it should be clarified there.--[[User:Grondilu|Grondilu]] ([[User talk:Grondilu|talk]]) 12:05, 27 May 2013 (UTC)
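A minimal sketch (in Python, not one of the task's existing solutions) of the "longest possible label" rule under discussion: build a plain suffix trie with one character per edge, then merge unary chains so every edge label is extended as far as it can be. For "banana", the branch that starts with "n" comes out labelled "na", as argued above:
<lang python>def suffix_tree(s):
    s += '$'                          # unique terminator
    root = {}
    for i in range(len(s)):           # insert every suffix, one char per edge
        node = root
        for c in s[i:]:
            node = node.setdefault(c, {})
    return compress(root)

def compress(node):
    out = {}
    for label, child in node.items():
        # extend the label while the child has exactly one outgoing edge
        while len(child) == 1:
            (c, grandchild), = child.items()
            label += c
            child = grandchild
        out[label] = compress(child)
    return out

print(suffix_tree('banana'))
# {'banana$': {}, 'a': {'na': {'na$': {}, '$': {}}, '$': {}},
#  'na': {'na$': {}, '$': {}}, '$': {}}</lang>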
:::: The distinction you draw in your opening sentence is perhaps worth thinking about. Personally, I think that if the task page cannot convey the definition, and the definition has its own jargon which must be studied, then the definition is very much in scope for RC. Sometimes an implementation can serve as a definition, but in this case the Racket implementation used an external library (which suggests an unbounded scope), and I do not know enough about perl6 to read its code, nor whether it would have been easier to learn the relevant perl6 or to just learn this algorithm. (Of course, both approaches have additional benefits, but since I imagine I am not the only one who would want to implement this task, I opted to try to get the task definition clarified.)
:::: Anyways, the Wikipedia definition does mention that every parent node other than the root has at least two edges leading out, which (now that the defect in my thinking about what an edge is has been fixed) seems to address the "must be longest possible" issue. But emphasizing the point might not hurt. (As an aside, note that I almost never use trees with low child counts in my own coding, because for my applications the constant multipliers on their costs almost always mean that another approach is better. Roughly speaking, if I need a tree at all, for me "better" is something like a sequence of trees (new content in a small mutable tree, old content in a larger, more constant tree) where a node occupies most of an L1 cache and the "edges" have roughly fixed size - in other words, lots of readers and very few writers - but that kind of reasoning does not seem to apply here. So things which should be just obvious to someone who regularly works with tall, narrow tree implementations can easily escape my notice (similarly, things which seem obvious to me seem to be routinely overlooked by people specifying algorithms which favor skinny trees - and it's not that either approach is universally wrong; it's just a reflection of different kinds of information from different kinds of applications). Meanwhile, reading the Wikipedia page was less than fruitful, because it turned my attention to things like insertion operations (which probably means merging two of these trees, but maybe not) when I needed to focus on more basic issues.) --[[User:Rdm|Rdm]] ([[User talk:Rdm|talk]]) 13:41, 27 May 2013 (UTC)
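The quoted invariant is easy to check mechanically. A small sketch against the nested-dict representation from the sketch above (edge label mapping to subtree): every internal node other than the root must have at least two children, otherwise its incoming label was not made as long as possible.
<lang python>def well_formed(node, is_root=True):
    if not node:                      # leaf: nothing to check
        return True
    if not is_root and len(node) < 2:
        return False                  # unary internal node: label not maximal
    return all(well_formed(child, is_root=False) for child in node.values())

# Suffix tree of "banana$" (edge label -> subtree), as produced above:
banana = {'banana$': {},
          'a': {'na': {'na$': {}, '$': {}}, '$': {}},
          'na': {'na$': {}, '$': {}},
          '$': {}}
assert well_formed(banana)</lang>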
That said, my current impression is that the Wikipedia article is confusing because it claims linear time for mechanisms that look O(n log n) to me. Average time might be linear (I do not know enough to determine that), but time is at least proportional to space, and it looks as though we need an additional copy of the text for every prefix variant that needs to be treated. I may be wrong here (I do not have a proof), but thinking about using this mechanism to encode long, random bit strings seems to support this way of thinking. (And, if I am wrong, I would be very interested in seeing ''proof that the algorithm is O(n)'' that adequately covers the space needed for random bit strings.) --[[User:Rdm|Rdm]] ([[User talk:Rdm|talk]]) 13:57, 27 May 2013 (UTC)
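For what it is worth, the standard space argument in the literature runs as follows: with a unique terminator the tree has exactly one leaf per suffix and, since every internal node branches, fewer internal nodes than leaves, so the node count is linear; linear ''space'' then comes from storing each edge label as a pair of indices into the one shared text rather than as a copied substring. The time claim is separate: Ukkonen's and McCreight's constructions are O(n) for constant-size alphabets, and pick up a log factor for comparison-based branching over large alphabets. A sketch that checks only the node-count bound on random bit strings, reusing the naive (itself O(n^2)) builder from the first sketch:
<lang python>import random

def suffix_tree(s):                   # same naive builder as the first sketch
    s += '$'
    root = {}
    for i in range(len(s)):
        node = root
        for c in s[i:]:
            node = node.setdefault(c, {})
    return compress(root)

def compress(node):
    out = {}
    for label, child in node.items():
        while len(child) == 1:
            (c, grandchild), = child.items()
            label += c
            child = grandchild
        out[label] = compress(child)
    return out

def count_nodes(node):
    return 1 + sum(count_nodes(child) for child in node.values())

for n in (100, 400, 800):
    bits = ''.join(random.choice('01') for _ in range(n))
    tree = suffix_tree(bits)
    m = n + 1                         # length including the '$'
    # at most m leaves plus m-1 internal nodes, whatever the string is
    assert count_nodes(tree) <= 2 * m
    print(n, count_nodes(tree))</lang>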