Talk:Word wheel

== Algorithms ==
: It seems a waste of memory and CPU time to generate all possible conforming strings instead of writing some simple filters that validate the words against the Rosetta Code task and grid (word wheel) constraints. The REXX solution spends almost all of its CPU time just reading in the dictionary. The filters I used for the REXX solution eliminate over half of the 25,105 words in the UNIXDICT file; about <sup>1</sup>/<sub>4</sub> of the filtering time was spent detecting duplicate words (there are none, however). A very small fraction of that time is used to validate that each letter (by count) is represented in the grid. I wonder what the CPU consumption would be if the number of words (entries) in the dictionary were an order of magnitude larger. My "personal" dictionary that I built has over 915,000 words in it. -- [[User:Gerard Schildberger|Gerard Schildberger]] ([[User talk:Gerard Schildberger|talk]]) 21:34, 4 July 2020 (UTC)
::Hi
:: Cheers, --[[User:Paddy3118|Paddy3118]] ([[User talk:Paddy3118|talk]]) 03:32, 5 July 2020 (UTC)
:: Just ran the larger [https://raw.githubusercontent.com/dwyl/english-words/master/words.txt dictionary]; it's 12x the size of the standard dictionary and runs in 15x the time using the Python code. (There is a lot of "cruft" padding out that larger dictionary, from the look of the first 100 words.) --[[User:Paddy3118|Paddy3118]] ([[User talk:Paddy3118|talk]]) 10:18, 5 July 2020 (UTC)
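: The per-word filtering discussed above can be sketched roughly as follows. This is not the REXX solution or Paddy's Python code, just a minimal illustration of the filter-instead-of-generate idea, assuming the task's standard 3x3 grid <code>ndeokgelw</code> with mandatory centre letter <code>k</code>:

```python
from collections import Counter

GRID = "ndeokgelw"          # the 3x3 grid from the Rosetta Code task, row by row
CENTRE = GRID[4]            # the mandatory middle letter ("k")
GRID_COUNT = Counter(GRID)  # available count of each letter in the grid

def conforms(word: str) -> bool:
    """Apply the cheap filters first, then the per-letter count check."""
    if not 3 <= len(word) <= 9:    # task length constraint
        return False
    if CENTRE not in word:         # every word must use the centre letter
        return False
    # each letter (by count) must be available in the grid
    return all(GRID_COUNT[ch] >= n for ch, n in Counter(word).items())

words = ["knew", "kendo", "ken", "dye", "week"]
print([w for w in words if conforms(w)])   # → ['knew', 'kendo', 'ken', 'week']
```

: Checking letter counts this way visits each candidate word once, so the dominant cost is reading the dictionary, as noted above.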
== more words with a different grid ==