Talk:Happy numbers

== On caching (and laziness) ==
::: However, I now think it would be a saving anyway. The digit square sums tend to be much smaller than the original numbers; indeed, with a 32-bit unsigned int, the largest possible value after one step is 738 (attained by 3999999999). Storing the flags in single bits, you'd need just 93 bytes, and lookup would be strictly O(1). Moreover, if the number isn't in the cache yet, you would have had to do this calculation anyway, so you lose hardly any time in that case; the only efficiency loss is for numbers already in the cache, and that's just one iteration (for a 32-bit uint in the worst case about 10 divisions, 10 modulo operations, 10 multiplications and 10 additions, one set per digit). Moreover, the needed memory grows only linearly with the number of digits (the maximum sum for d digits is 81·d), so even with 64 bits you won't need much memory. --[[User:Ce|Ce]] 13:30, 7 May 2009 (UTC)
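For concreteness, here is a minimal C++ sketch of such an "after first step" bit-flag cache (the names step, isHappy and MAX_STEP, and the use of the 4-cycle as the unhappiness test, are illustrative choices, not taken from the task's code):

<pre>
#include <bitset>
#include <iostream>

// One step of the happy iteration: the digit square sum.
unsigned step(unsigned n) {
    unsigned sum = 0;
    for (; n; n /= 10) {
        unsigned d = n % 10;
        sum += d * d;
    }
    return sum;
}

// For 32-bit inputs the value after one step is at most 738,
// so two bit arrays of 739 flags (~93 bytes each) suffice.
const unsigned MAX_STEP = 738;
std::bitset<MAX_STEP + 1> known, happy;

bool isHappy(unsigned n) {         // n >= 1
    unsigned m = step(n);          // now 1 <= m <= MAX_STEP
    if (known[m]) return happy[m]; // cache hit: O(1) lookup
    unsigned v = m;                // cache miss: iterate; every
    while (v != 1 && v != 4)       // unhappy number reaches the
        v = step(v);               // cycle containing 4
    known[m] = true;
    happy[m] = (v == 1);
    return happy[m];
}

int main() {
    for (unsigned n = 1; n <= 50; ++n)
        if (isHappy(n)) std::cout << n << ' ';
    std::cout << '\n'; // 1 7 10 13 19 23 28 31 32 44 49
}
</pre>

Only the value after the first step is memoized, so the tables stay within the ~93-byte bound no matter how many numbers are tested.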
:::: Likely. However, happy numbers don't seem to be memory- or CPU-intensive: even with Smalltalk (which is surely not a "lightweight" language) I was able to find the happy numbers up to 10000 quickly (and not beyond, simply because I haven't changed the constant!). By "quickly" I mean just the time needed to output them; there was no perceptible pause between one output line and the next (far less than 1 s, i.e. less than a human can judge by eye). --[[User:ShinTakezou|ShinTakezou]] 17:19, 7 May 2009 (UTC)
::::: Indeed, I just calculated the first million happy numbers with the non-caching C++ code (the last one being 7105849), and it took 11.670 seconds real (wall clock) time and 7.156 seconds user (CPU) time (I didn't gather proper statistics, and other processes were running on the machine, so those numbers are only approximate). So caching is probably not useful. However, if caching is done, it should IMHO be the "after first step" strategy, because it is both algorithmically simpler and more memory-efficient, and I'm not even convinced that it takes more time than the bag strategy: while calculating the whole iteration step is more costly than calculating the bag, no time is lost (except for the array lookup) in case of a miss, and the hit rate will be larger. The bag strategy, on the other hand, always has an overhead, and the bag cannot be used directly as an array index, so you'd need either a hash map (and calculating a hash for each lookup is likely as expensive as adding the squares) or a search tree (which is O(log N) instead of O(1)). Of course, one would have to measure to know for sure. --[[User:Ce|Ce]] 07:42, 8 May 2009 (UTC)
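For comparison, a rough sketch of the bag strategy under the same assumptions (digitBag, bagCache and isHappyBag are hypothetical names; std::map plays the role of the search tree mentioned above, so each lookup is O(log N)):

<pre>
#include <algorithm>
#include <iostream>
#include <map>
#include <string>

// One step of the happy iteration, as before.
unsigned step(unsigned n) {
    unsigned sum = 0;
    for (; n; n /= 10) {
        unsigned d = n % 10;
        sum += d * d;
    }
    return sum;
}

// The digit multiset ("bag") determines the digit square sum and
// hence happiness; sorting the digits gives a canonical key.
// Unlike the first-step value, it cannot index a flat array.
std::string digitBag(unsigned n) {
    std::string s = std::to_string(n);
    std::sort(s.begin(), s.end());
    return s;
}

std::map<std::string, bool> bagCache; // search tree: O(log N) lookup

bool isHappyBag(unsigned n) {         // n >= 1
    std::string key = digitBag(n);
    auto it = bagCache.find(key);
    if (it != bagCache.end())
        return it->second;            // cache hit
    unsigned v = n;
    while (v != 1 && v != 4)          // 4 marks the unhappy cycle
        v = step(v);
    return bagCache[key] = (v == 1);
}

int main() {
    int count = 0;
    for (unsigned n = 1; count < 8; ++n)
        if (isHappyBag(n)) { std::cout << n << ' '; ++count; }
    std::cout << '\n'; // 1 7 10 13 19 23 28 31
}
</pre>

Note that building the key already costs a to_string and a sort per query, which illustrates the constant overhead mentioned above; a hash map would replace the O(log N) tree walk with a hash computation of comparable cost to the digit square sum itself.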