Talk:UTF-8 encode and decode

Is the task about converting between a Unicode codepoint (an integer) and its UTF-8 bytes, or about converting between a displayed character and the bytes? The solutions for some languages basically start with a 1-character string (because that's the natural way to represent a character in that language), convert that string to and from the bytes, and separately extract the codepoint integer from the string to print in the output. Is that acceptable for this task, or is it necessary to actually convert between the integer and the bytes? --Spoon! (talk) 21:57, 8 March 2017 (UTC)
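
For concreteness, here is a minimal Python sketch of the two readings being contrasted (Python is used only as an example language here, and the helper name encode_codepoint is illustrative, not part of the task):

 # Reading 1: convert the codepoint integer itself to UTF-8 bytes,
 # using bit manipulation, with no string round-trip.
 def encode_codepoint(cp: int) -> bytes:
     if cp < 0x80:
         return bytes([cp])
     if cp < 0x800:
         return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
     if cp < 0x10000:
         return bytes([0xE0 | (cp >> 12),
                       0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])
     return bytes([0xF0 | (cp >> 18), 0x80 | ((cp >> 12) & 0x3F),
                   0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])

 # Reading 2: convert a 1-character string to and from bytes, and use
 # the codepoint integer (via ord) only for display in the output.
 ch = "€"
 b = ch.encode("utf-8")        # string -> bytes
 s = b.decode("utf-8")         # bytes -> string
 print(f"U+{ord(s):04X}", b)   # U+20AC b'\xe2\x82\xac'

 assert encode_codepoint(ord(ch)) == b  # both routes give the same bytes

Both routes produce the same bytes; the question is whether the second, string-based route satisfies the task, or whether solutions must take the first route and work on the integer directly.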