Talk:Find first and last set bit of a long integer: Difference between revisions

Not all languages know about their host word size. Implementing an "efficient" algorithm without loops or recursion for this special case is a bit arbitrary and dodgy in these cases. Sure, an arbitrary special case could be coded, but would that be efficient? What actually makes sense here? Furthermore, the "next natural long" is also arbitrary. --[[User:Dgamey|Dgamey]] 18:45, 9 December 2011 (UTC)
: There's a worse problem than that. The task wants you to use the same bit methods on arbitrary-precision integers, "efficiently", which is oxymoronic. Suppose you have a multiprecision integer made of an array of ''n'' native integers, each ''m'' bits long. The sane way of finding the leading/trailing bit is to scan those native integers from either end until you find the first nonzero one, then use the native bit operations to determine which bit within it you want. Suppose you are looking for the LSB: on average, unless the bigint's value has some oddball distribution, the very first native integer (the least significant word) is the one you'll need, and the average complexity is O(log m). The task, as worded, seems to want one to use the same "efficient" bit operations in such a situation, because binary search is nominally O(log N). Except if you actually perform the test <code>0 == value & 0xfff..(a million more)..ffff</code>, each bitwise <code>and</code> operation is itself already O(n), while the extended <code>== 0</code> test may be O(n) as well, not to mention the intermediate value is probably going to be stored somewhere, which costs god knows how many clock cycles (likely the code will need to allocate memory), so the binary search is bound to be at least O(n (log n) (log m)), possibly a lot more. It's plainly the wrong way of doing things.
: This kind of thing happens to people who love to overload operators. What many fail to realize is that a seemingly innocuous expression like <code>a & b == 1</code> can do a lot of harm if one insists on using overloaded operators to force a uniform appearance on tasks that are only superficially similar. --[[User:Ledrug|Ledrug]] 19:44, 9 December 2011 (UTC)
:: Ya, I was beginning to think about that. I personally like polymorphism in operators and functions as a first cut. But I also like to have the ability to step out and get deep into the weeds if something demands it. Problem is that a lot of people seem to be in one camp or the other exclusively. :( --[[User:Dgamey|Dgamey]] 23:12, 9 December 2011 (UTC)