Talk:Find first and last set bit of a long integer
===Needs clarification===
The task is unclear about what number should be computed. It mentions "first" and "last" set bits, but are the bits written in most-significant to least-significant order, or the other way around? (i.e. which of "upb" and "lwb" should compute the least significant set bit, and which the most significant?) For a 32-bit integer, should the least significant bit be numbered 0, 1, 31, or 32? Also, what values should be returned if there are no set bits? And for negative integers, should the most significant set bit be the sign bit?
 
 
: The whole thing doesn't make sense for multi-precision integers. If you are willing to use a language/library bigint implementation, you willingly forfeit control over how it's done behind the scenes, so it doesn't make sense to "efficiently find a set bit" -- who says the numbers are internally represented by bits to begin with? --[[User:Ledrug|Ledrug]] 00:30, 8 December 2011 (UTC)
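
As an aside, bigint libraries generally expose bit queries through their public API anyway, so the caller never needs to know how the number is stored internally. A small illustrative C sketch using GMP's documented mpz_scan1 and mpz_sizeinbase (an illustration only, not any particular entry's solution):
<lang c>/* Query set-bit positions of a bignum through GMP's public API,
   without assuming anything about its internal limb layout. */
#include <stdio.h>
#include <gmp.h>

int main(void) {
    mpz_t n;
    mpz_init_set_str(n, "123456789012345678901234567890", 10);

    /* index (counting from 0 at the least significant end) of the lowest set bit */
    mp_bitcnt_t low = mpz_scan1(n, 0);

    /* mpz_sizeinbase(n, 2) is the bit length for n > 0,
       so the highest set bit sits at bit length - 1 */
    size_t high = mpz_sizeinbase(n, 2) - 1;

    printf("lowest set bit:  %lu\n", (unsigned long) low);
    printf("highest set bit: %zu\n", high);

    mpz_clear(n);
    return 0;
}</lang>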
 
'''/* Clarification */''' integers are used in the ''test case'' only to conveniently supply binary specimens for testing.
 
'''re:''' This seems to be the reverse of most people's bit numbering.
 
C also - correct me if I am wrong - prints the most significant bit/octal/hex digit on the left and the least significant at the extreme right, hence the operators << and >>. Python and most other programming languages likewise display, and (pretend to) store, numbers with the least significant digit on the right. It's probably a human thing.
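
To make that concrete, a small illustrative C sketch (not any task entry's code) that counts from 0 at the least significant bit and uses >> to walk towards it:
<lang c>#include <stdio.h>

/* Count from 0 at the least significant bit (the usual C convention).
   Both return -1 for x == 0, one way of answering the "no set bits" case. */
int lowest_set_bit(unsigned int x) {
    if (x == 0) return -1;
    int pos = 0;
    while ((x & 1u) == 0) { x >>= 1; pos++; }
    return pos;
}

int highest_set_bit(unsigned int x) {
    int pos = -1;
    while (x != 0) { x >>= 1; pos++; }
    return pos;
}

int main(void) {
    unsigned int v = 0xF0u;
    printf("lowest: %d, highest: %d\n", lowest_set_bit(v), highest_set_bit(v)); /* 4 and 7 */
    return 0;
}</lang>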
 
'''re:''' And for negative integers
 
The task is focused on binary/bits. Whether it is signed or unsigned is not so important. The '''test case''' requires only +ve values. Beyond that, just do the natural thing for the specific language & CPU.
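
Purely for illustration, one "natural thing" in C is to reinterpret the value as unsigned, so the sign bit is simply the top bit:
<lang c>#include <stdio.h>

int main(void) {
    int n = -1;
    unsigned int u = (unsigned int) n;   /* conversion is modular, so -1 gives all bits set */
    int pos = -1;
    while (u != 0) { u >>= 1; pos++; }   /* highest set bit is the sign bit, e.g. 31 for 32-bit unsigned int */
    printf("highest set bit of -1: %d\n", pos);
    return 0;
}</lang>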
 
'''re:''' wouldn't make sense for arbitrary-precision integers
 
Agreed. It does seem odd having the least significant bit at (algol68g's) 116 for '''long bits'''/'''long int'''s. This could make Arbitrary Precision ''integers'' particularly weird. But this task is about bit manipulation, and the '''test case''' happens to use '''long int'''. Maybe a particular language/CPU puts these in reverse; in such a case, just find the locations of the first and last non-zero bits on that platform. Try not to worry about what is being represented, or about the significance of the actual '''bits'''.
 
Also: the task doesn't ask for Arbitrary Precision ''bits'' per se; rather it asks for an algorithm that works for 32 bits, and then another algorithm that works for <u>any</u> particular precision implementation. Hopefully this would not preclude an ''Arbitrary Width Bits'' algorithm. ¢ A '''not''' of 0 on Arbitrary Precision ''bits'' is going to be very interesting! :-) ¢
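
Purely as an illustration (not a prescribed solution), such a pair of algorithms could look like this in C, assuming the value is held as an array of 32-bit words with word 0 the most significant - one plausible layout, nothing the task dictates:
<lang c>#include <stdint.h>
#include <stdio.h>

/* One possible layout for a "long bits" value: NWORDS 32-bit words,
   word[0] most significant.  Positions count from 0 at the least
   significant bit of the whole value; -1 means "no bit set". */
#define NWORDS 4

static int high_bit32(uint32_t w) {           /* 32-bit building block */
    int pos = -1;
    while (w) { w >>= 1; pos++; }
    return pos;
}

static int low_bit32(uint32_t w) {
    if (w == 0) return -1;
    int pos = 0;
    while ((w & 1u) == 0) { w >>= 1; pos++; }
    return pos;
}

int highest_set_bit(const uint32_t v[NWORDS]) {
    for (int i = 0; i < NWORDS; i++)          /* scan from the most significant word */
        if (v[i] != 0)
            return (NWORDS - 1 - i) * 32 + high_bit32(v[i]);
    return -1;
}

int lowest_set_bit(const uint32_t v[NWORDS]) {
    for (int i = NWORDS - 1; i >= 0; i--)     /* scan from the least significant word */
        if (v[i] != 0)
            return (NWORDS - 1 - i) * 32 + low_bit32(v[i]);
    return -1;
}

int main(void) {
    uint32_t v[NWORDS] = { 0, 0x80000000u, 0, 0x00000010u };
    printf("highest: %d, lowest: %d\n", highest_set_bit(v), lowest_set_bit(v)); /* 95 and 4 */
    return 0;
}</lang>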
 
FYI: Algol68 permits the coercion/casting/widening of '''bits''' to a '''bool''' array, e.g.<lang algol68>[]BOOL bools a = 2r101,
       bools b = (TRUE, FALSE, TRUE);
BITS   bits c = 2r101,
       bits d = bits pack((TRUE, FALSE, TRUE)); # Hence ... #
print((bits width ELEM bits c, []BOOL(bits c)[bits width])) # prints the same last bit twice #</lang> Hence there are strong synergies between the '''bits''' and []'''bool''' types, but they are not the same type. Similarly the types '''int''' and '''bits''' are distinct and require explicit casting with the operators '''bin''' and '''abs'''. C - by contrast - does not have this distinction; in C an '''int''' also happens to be '''bits'''.
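
For comparison, a tiny illustrative C fragment for that last point: bitwise operators apply to an int (better, an unsigned int) directly, with nothing like bin/abs in sight:
<lang c>#include <stdio.h>

int main(void) {
    unsigned int n = 42;               /* the same object serves as "int" and as "bits" */
    unsigned int masked = n & 0xFu;    /* no explicit conversion to a bit type is needed */
    printf("n = %u, n AND 0xF = %u, n >> 1 = %u\n", n, masked, n >> 1);
    return 0;
}</lang>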
 
Good luck, and Merry Xmas!
 
[[User:NevilleDNZ|NevilleDNZ]] 02:02, 8 December 2011 (UTC)