String length
=={{header|360 Assembly}}==
Assembler 360 uses EBCDIC coding, so one character is one byte.
The L' attribute can be seen as the length function for Assembler 360.
<syntaxhighlight lang="360asm">
LEN CSECT
USING LEN,15 base register
D DS D double word 8
PG DS CL12 string 12
END LEN</syntaxhighlight>
=={{header|6502 Assembly}}==
{{trans|Z80 Assembly}}
Most 6502-based computers predate Unicode, so only byte length will be demonstrated for now.
<syntaxhighlight lang="6502asm">
;(Of course, any two consecutive zero-page memory locations can fulfill this role.)
LDY #0 ;Y is both the index into the string and the length counter.
exit:
RTS ;string length is now loaded into Y.</syntaxhighlight>
=={{header|68000 Assembly}}==
===Byte Length (ASCII)===
<syntaxhighlight lang="68000devpac">GetStringLength:
; INPUT: A3 = BASE ADDRESS OF STRING
; RETURNS LENGTH IN D1 (MEASURED IN BYTES)
MOVE.L #0,D1
loop_getStringLength:
MOVE.B (A3)+,D0
CMP #0,D0
BEQ done
ADDQ.L #1,D1
BRA loop_getStringLength
done:
RTS</syntaxhighlight>
=={{header|8086 Assembly}}==
{{trans|68000 Assembly}}
===Byte Length===
<syntaxhighlight lang="asm">;INPUT: DS:SI = BASE ADDR. OF STRING
;TYPICALLY, MS-DOS USES $ TO TERMINATE STRINGS.
GetStringLength:
xor cx,cx ;this takes fewer bytes to encode than "mov cx,0"
cld ;makes string functions post-inc rather than post-dec.
loop_GetStringLength:
lodsb ;equivalent of "mov al,[ds:si],inc si" except this doesn't alter the flags.
cmp al,'$' ;check whether the current byte is the terminator.
je done ;if equal, we're finished.
inc cx ;add 1 to length counter. A null string will have a length of zero.
jmp loop_GetStringLength
done:
ret</syntaxhighlight>
=={{header|4D}}==
===Byte Length===
<syntaxhighlight lang="4d"></syntaxhighlight>
=={{header|AArch64 Assembly}}==
{{works with|as|Raspberry Pi 3B version Buster 64 bits}}
<syntaxhighlight lang="aarch64 assembly">
/* ARM assembly AARCH64 Raspberry PI 3B */
/* program stringLength64.s */
/* for this file see task include a file in language AArch64 assembly */
.include "../includeARM64.inc"
</syntaxhighlight>
=={{header|Action!}}==
<syntaxhighlight lang="action!">PROC Test(CHAR ARRAY s)
PrintF("Length of ""%S"" is %B%E",s,s(0))
RETURN
PROC Main()
Test("Hello world!")
Test("")
RETURN</syntaxhighlight>
{{out}}
[https://gitlab.com/amarok8bit/action-rosetta-code/-/raw/master/images/String_length.png Screenshot from Atari 8-bit computer]
<pre>
Length of "Hello world!" is 12
Length of "" is 0
</pre>
=={{header|ActionScript}}==
===Byte length===
This uses UTF-8 encoding. For other encodings, the ByteArray's <code>writeMultiByte()</code> method can be used.
<syntaxhighlight lang="actionscript">
package {
}
</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="actionscript">
var s1:String = "The quick brown fox jumps over the lazy dog";
var s2:String = "𝔘𝔫𝔦𝔠𝔬𝔡𝔢";
var s3:String = "José";
trace(s1.length, s2.length, s3.length); // 43, 14, 4
</syntaxhighlight>
=={{header|Ada}}==
{{works with|GCC|4.1.2}}
===Byte Length===
<syntaxhighlight lang="ada">Str : String := "Hello, world!";
Length : constant Natural := Str'Size / 8;</syntaxhighlight>
The 'Size attribute returns the size of an object in bits. Provided that by "byte" one understands an octet of bits, the length in "bytes" is 'Size divided by 8. Note that this is not necessarily the machine storage unit. To keep the program portable, System.Storage_Unit should be used instead of the "magic number" 8; System.Storage_Unit yields the number of bits in a storage unit on the current machine. Further, the length of a string object is not the length of what the string contains, in whatever measurement units. A string object may carry a "dope" holding the array bounds. In fact the object length can even be 0, if the compiler optimizes the object away. So in most cases "byte length" makes no sense in Ada.
===Character Length===
<syntaxhighlight lang="ada">Latin_1_Str : String := "Hello World";
UCS_16_Str : Wide_String := "Hello World";
Unicode_Str : Wide_Wide_String := "Hello World";
Latin_1_Length : constant Natural := Latin_1_Str'Length;
UCS_16_Length : constant Natural := UCS_16_Str'Length;
Unicode_Length : constant Natural := Unicode_Str'Length;</syntaxhighlight>
The attribute 'Length yields the number of elements of an [[array]]. Since strings in Ada are arrays of characters, 'Length is the string length. Ada supports strings of [[Latin-1]], [[UCS-16]] and full [[Unicode]] characters. In the example above character length of all three strings is 11. The length of the objects in bits will differ.
=={{header|Aime}}==
===Byte Length===
<syntaxhighlight lang="aime"></syntaxhighlight>
=={{header|ALGOL 68}}==
===Bits and Bytes Length===
<syntaxhighlight lang="algol68">
BYTES bytes := bytes pack("Hello, world"); # packed array of CHAR #
print((
"bits width:", bits width, ", max bits: ", max bits, ", bits:", bits, new line,
"bytes width: ",bytes width, ", UPB:",UPB STRING(bytes), ", string:", STRING(bytes),"!", new line
))</syntaxhighlight>
Output:
<pre>
</pre>
===Character Length===
<syntaxhighlight lang="algol68">
INT length := UPB str;
printf(($"Length of """g""" is "g(3)l$,str,length));
printf(($l"STRINGS can start at -1, in which case LWB must be used:"l$));
STRING s := "abcd"[@-1];
print(("s:",s, ", LWB:", LWB s, ", UPB:",UPB s, ", LEN:",UPB s - LWB s + 1))</syntaxhighlight>
Output:
<pre>
</pre>
=={{header|Apex}}==
<syntaxhighlight lang="apex">
String myString = 'abcd';
System.debug('Size of String: ' + myString.length());
</syntaxhighlight>
=={{header|AppleScript}}==
===Byte Length===
Mac OS X 10.5 (Leopard) includes AppleScript 2.0 which uses only Unicode (UTF-16) character strings.
This example has been tested on OSX 10.8.5. Added a combining char for testing.
<syntaxhighlight lang="applescript">
set inString to "Hello é̦世界"
set byteCount to 0
return 1
end if
end doit</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="applescript">count of "Hello, world!"</syntaxhighlight>
Or:
<syntaxhighlight lang="applescript">length of "Hello, world!"</syntaxhighlight>
=={{header|Applesoft BASIC}}==
<syntaxhighlight lang="basic">10 INPUT A$
20 PRINT LEN(A$)</syntaxhighlight>
=={{header|ARM Assembly}}==
{{works with|as|Raspberry Pi}}
<syntaxhighlight lang="arm assembly">
/* ARM assembly Raspberry PI */
/* program stringLength.s */
/***************************************************/
.include "../affichage.inc"
</syntaxhighlight>
<pre>
møøse€
</pre>
=={{header|Arturo}}==
===Character Length===
<syntaxhighlight lang="arturo">str: "Hello, world!"
print ["length =" size str]</syntaxhighlight>
{{out}}
=={{header|AutoHotkey}}==
===Character Length===
<syntaxhighlight lang="autohotkey">MsgBox % StrLen("Hello, world!")</syntaxhighlight>
Or:
<syntaxhighlight lang="autohotkey">
StringLen, Length, String
Msgbox % Length</syntaxhighlight>
=={{header|Avail}}==
===Character Length===
Avail represents strings as a tuple of characters, with each character representing a single code point.
<syntaxhighlight lang="avail">|"𝔘𝔫𝔦𝔠𝔬𝔡𝔢"|</syntaxhighlight>
===Byte Length===
A UTF-8 byte length can be acquired with the standard library's UTF-8 encoder.
<syntaxhighlight lang="avail">nonBMPString ::= "𝔘𝔫𝔦𝔠𝔬𝔡𝔢";
encoder ::= a UTF8 encoder;
bytes ::= encoder process nonBMPString;
// or, as a one-liner
|a UTF8 encoder process "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"|</syntaxhighlight>
=={{header|AWK}}==
===Byte Length===
From within any code block:
<syntaxhighlight lang="awk">
x=length("Hello," s " world!") # dynamic string example
y=length($1) # input field example
z=length(s) # variable name example</syntaxhighlight>
Ad hoc program from command line:
<pre> echo "Hello, wørld!" | awk '{print length($0)}' # 14</pre>
From executable script: (prints for every line arriving on stdin)
<syntaxhighlight lang="awk">
{print"The length of this line is "length($0)}</syntaxhighlight>
=={{header|Axe}}==
===Byte Length===
<syntaxhighlight lang="axe">
Disp length(Str1)▶Dec,i</syntaxhighlight>
=={{header|BaCon}}==
BaCon has full native support for UTF-8 encoding.
<syntaxhighlight lang="bacon">PRINT "Bytelen of 'hello': ", LEN("hello")
PRINT "Charlen of 'hello': ", ULEN("hello")
PRINT "Bytelen of '𝔘𝔫𝔦𝔠𝔬𝔡𝔢': ", LEN("𝔘𝔫𝔦𝔠𝔬𝔡𝔢")
PRINT "Charlen of '𝔘𝔫𝔦𝔠𝔬𝔡𝔢': ", ULEN("𝔘𝔫𝔦𝔠𝔬𝔡𝔢")</syntaxhighlight>
{{out}}
<pre>
</pre>
=={{header|BASIC}}==
===Character Length===
{{works with|QBasic}}
{{works with|Liberty BASIC}}
{{works with|PowerBASIC|PB/CC, PB/DOS}}
BASIC only supports single-byte characters. The character "ø" is converted to "°" for printing to the console and length functions, but will still output to a file as "ø".
<syntaxhighlight lang="qbasic">INPUT a$
PRINT LEN(a$)</syntaxhighlight>
==={{header|ANSI BASIC}}===
The ANSI BASIC needs line numbers.
<syntaxhighlight lang="basic">
10 INPUT A$
20 PRINT LEN(A$)
</syntaxhighlight>
==={{header|Applesoft BASIC}}===
The [[#GW-BASIC|GW-BASIC]] solution works without any changes.
==={{header|BASIC256}}===
The [[#GW-BASIC|GW-BASIC]] solution works without any changes.
==={{header|Chipmunk Basic}}===
The [[#GW-BASIC|GW-BASIC]] solution works without any changes.
==={{header|MSX Basic}}===
{{works with|MSX BASIC|any}}
The [[#GW-BASIC|GW-BASIC]] solution works without any changes.
==={{header|Quite BASIC}}===
The [[#GW-BASIC|GW-BASIC]] solution works without any changes.
==={{header|True BASIC}}===
The [[#GW-BASIC|GW-BASIC]] solution works without any changes.
==={{header|Yabasic}}===
The [[#GW-BASIC|GW-BASIC]] solution works without any changes.
==={{header|ZX Spectrum Basic}}===
The ZX Spectrum needs line numbers:
<syntaxhighlight lang="zxbasic">10 INPUT a$
20 PRINT LEN a$</syntaxhighlight>
However, it's not quite as trivial as this.
Stripping out all entries in the string with codes in the lower 32 will get rid of colour control codes. The character length of a token is not a simple thing to determine, so this version strips them out too by eliminating anything above CHR$ 164 (the last UDG). A 91-entry DATA list of token lengths might be the next step.
<syntaxhighlight lang="zxbasic">10 INPUT a$
20 LET b$=""
30 FOR x=1 TO LEN a$
60 LET b$=b$+a$(k)
70 NEXT x
80 PRINT LEN b$</syntaxhighlight>
====Grapheme length====
Alternatively, the string might include control codes for backspacing and overwriting;
<syntaxhighlight lang="zxbasic"></syntaxhighlight>
will produce an "o" character overprinted with a quotation mark, resulting in a "passable" impression of an umlaut. The above code will reduce this to two characters when the actual printed length is one (byte length is of course five). The other possible workaround is to print the string and calculate the character length based on the resultant change in screen position. (This will only work for a string with a character length that actually fits on the screen, so below about 670.)
<syntaxhighlight lang="zxbasic">10 INPUT a$
20 CLS
30 PRINT a$;
40 LET x=PEEK 23688: LET y=PEEK 23689
50 PRINT CHR$ 13;33-x+32*(24-y)</syntaxhighlight>
==={{header|Commodore BASIC}}===
Commodore BASIC needs line numbers too, and cannot use mixed case: in mixed-case mode everything must be typed in lower-case letters, while the default mode is UPPERCASE plus graphics characters, so everything appears as upper-case characters.
<syntaxhighlight lang="commodorebasic">10 INPUT A$
20 PRINT LEN(A$)</syntaxhighlight>
==={{header|IS-BASIC}}===
<syntaxhighlight lang="is-basic">100 INPUT TX$
110 PRINT LEN(TX$)</syntaxhighlight>
==={{header|QB64}}===
In QB64 a String variable holds a sequence of single-byte characters, so its byte length is the same as its character length. That said, there are methods to map UTF-16 and UTF-32 to the CP437 (ASCII) table (see _MAPUNICODE).
<syntaxhighlight lang="qbasic">PRINT LEN(s$)</syntaxhighlight>
=={{header|Batch File}}==
===Byte Length===
<syntaxhighlight lang="dos">@echo off
setlocal enabledelayedexpansion
call :length %1 res
set str=!str:~1!
set /a cnt = cnt + 1
goto loop</syntaxhighlight>
=={{header|BBC BASIC}}==
===Character Length===
<syntaxhighlight lang="bbcbasic">INPUT text$
PRINT LEN(text$)</syntaxhighlight>
===Byte Length===
{{works with|BBC BASIC for Windows}}
<syntaxhighlight lang="bbcbasic">
CP_UTF8 = &FDE9
PRINT "Length in bytes (ANSI encoding) = " ; LEN(textA$)
PRINT "Length in bytes (UTF-16 encoding) = " ; 2*(nW%-1)
PRINT "Length in bytes (UTF-8 encoding) = " ; LEN($$!^textU$)</syntaxhighlight>
Output:
<pre>Length in bytes (ANSI encoding) = 5
Length in bytes (UTF-16 encoding) = 10
Length in bytes (UTF-8 encoding) = 7</pre>
=={{header|BQN}}==
Strings are arrays of characters in BQN.
===Byte Length===
Each character is converted to its codepoint, and compared with the respective UTF boundary.
<syntaxhighlight lang="bqn">BLen ← {(≠𝕩)+´⥊𝕩≥⌜@+128‿2048‿65536}</syntaxhighlight>
===Character Length===
Character length is just array length.
<syntaxhighlight lang="bqn">Len ← ≠</syntaxhighlight>
'''Output'''
<syntaxhighlight lang="bqn">•Show >(⊢⋈⊸∾Len⋈BLen)¨⟨
"møøse"
"𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
"J̲o̲s̲é̲"
⟩</syntaxhighlight>
<syntaxhighlight lang="text">┌─
╵ "møøse" 5 7
"𝔘𝔫𝔦𝔠𝔬𝔡𝔢" 7 28
"J̲o̲s̲é̲" 8 13
┘</syntaxhighlight>
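The arithmetic behind <code>BLen</code> — one byte per code point plus one extra byte for each boundary at U+0080, U+0800 and U+10000 — can be cross-checked in any language. A minimal JavaScript sketch (the function name is illustrative):

```javascript
// UTF-8 byte length computed from code points alone, mirroring the
// BQN BLen definition: one byte per character, plus one extra byte
// for each boundary (0x80, 0x800, 0x10000) the code point reaches.
function utf8ByteLength(s) {
  let bytes = 0;
  for (const ch of s) {            // for...of iterates code points, not UTF-16 units
    const cp = ch.codePointAt(0);
    bytes += 1 + (cp >= 0x80) + (cp >= 0x800) + (cp >= 0x10000);
  }
  return bytes;
}
// utf8ByteLength("møøse") → 7, utf8ByteLength("𝔘𝔫𝔦𝔠𝔬𝔡𝔢") → 28
```

The booleans coerce to 0 or 1 under `+`, so each comparison contributes exactly one extra byte, matching the table above.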
=={{header|Bracmat}}==
The solutions work with UTF-8 encoded strings.
===Byte Length===
<syntaxhighlight lang="bracmat">(ByteLength=
length
. @(!arg:? [?length)
);
out$ByteLength$𝔘𝔫𝔦𝔠𝔬𝔡𝔢</syntaxhighlight>
Answer:
<pre>28</pre>
===Character Length===
<syntaxhighlight lang="bracmat">(CharacterLength=
length c
. 0:?length
);
out$CharacterLength$𝔘𝔫𝔦𝔠𝔬𝔡𝔢</syntaxhighlight>
Answer:
<pre>7</pre>
An improved version scans the input string character wise, not byte wise. Thus many string positions that are deemed not to be possible starting positions of UTF-8 are not even tried. The patterns <code>[!p</code> and <code>[?p</code> implement a ratchet mechanism. <code>[!p</code> indicates the start of a character and <code>[?p</code> remembers the end of the character, which becomes the start position of the next byte.
<syntaxhighlight lang="bracmat">(CharacterLength=
length c p
. 0:?length:?p
)
| !length
);</syntaxhighlight>
Later versions of Bracmat have the built in function <code>vap</code> that "vaporises" a string into "atoms". If the string is UTF-8 encoded, then each "atom" is one UTF-8 character, so the length of the list of atoms is the character length of the input string. The first argument to the <code>vap</code> function is a function that will be applied to every UTF-8 encoded character in the input string. The outcomes of these function calls are the elements in the resulting list. In the solution below we choose an anonymous function <code>(=.!arg)</code> that just returns the characters themselves.
<syntaxhighlight lang="bracmat">(CharacterLength=
length
. vap$((=.!arg).!arg):? [?length&!length
);</syntaxhighlight>
=={{header|Brainf***}}==
===Byte Length===
There are several limitations Brainf*** has that influence this solution:
*Brainf*** only supports 8-bit numbers in canonical implementations, so it only supports strings of length below 255.
*The rule of thumb in Brainf*** when reading a string is to always store exactly one byte per cell, no matter how many bytes a character occupies. That's why this solution is strictly a byte-length one.
*There is no way to pass anything to a Brainf*** program except through its input. That's why this program reads a string and outputs the number of bytes in it.
[https://esolangs.org/wiki/Brainfuck_algorithms#Print_value_of_cell_x_as_number_for_ANY_sized_cell_.28eg_8bit.2C_100000bit_etc.29 This esolangs.org algorithm] is used to print the number from memory.
<syntaxhighlight lang="bf">
,----- ----- [>,----- -----] ; read a text until a newline
<[+++++ +++++<] ; restore the original text
>[[-]<[>+<-]>+>]< ; add one to the accumulator cell for every byte read
;; from esolang dot org
>[-]>[-]+>[-]+< [>[-<-<<[->+>+<<]>[-<+>]>>]++++++++++>[-]+>[-]>[-]> [-]<<<<<[->-[>+>>]>[[-<+>]+>+>>]<<<<<]>>-[-<<+>>]<[-]++++++++ [-<++++++>]>>[-<<+>>]<<] <[.[-]<]
[-]+++++ +++++. ; print newline
</syntaxhighlight>
=={{header|C}}==
{{works with|GCC|3.3.3}}
<syntaxhighlight lang="c">#include <stdio.h>
#include <string.h>
int main(void)
return 0;
}</syntaxhighlight>
or by hand:
<syntaxhighlight lang="c">int main(void)
{
const char *string = "Hello, world!";
return 0;
}</syntaxhighlight>
or (for arrays of char only)
<syntaxhighlight lang="c">#include <stdio.h>
int main(void)
return 0;
}</syntaxhighlight>
===Character Length===
For wide character strings (usually Unicode uniform-width encodings such as UCS-2 or UCS-4):
<syntaxhighlight lang="c">#include <stdio.h>
#include <wchar.h>
return 0;
}</syntaxhighlight>
===Dealing with raw multibyte string===
The following code is written in UTF-8, and the environment locale is assumed to be UTF-8 as well. Note that "møøse" is written directly in the source code for clarity, which is not a good idea in general. <code>mbstowcs()</code>, when passed NULL as the first argument, effectively counts the number of characters in the given string under the current locale.
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <locale.h>
return 0;
}</syntaxhighlight>
{{out}}
<pre>
chars: 5</pre>
=={{header|C sharp|C#}}==
{{works with|C sharp|C #|1.0+}}
===Character Length===
<syntaxhighlight lang="csharp">string s = "Hello, world!";
int characterLength = s.Length;</syntaxhighlight>
===Byte Length===
Strings in .NET are stored in Unicode.
<syntaxhighlight lang="csharp">using System.Text;
string s = "Hello, world!";
int byteLength = Encoding.Unicode.GetByteCount(s);</syntaxhighlight>
To get the number of bytes that the string would require in a different encoding, e.g., UTF8:
<syntaxhighlight lang="csharp">int utf8ByteLength = Encoding.UTF8.GetByteCount(s);</syntaxhighlight>
=={{header|C++}}==
{{works with|ISO C++}}
{{works with|g++|4.0.2}}
<syntaxhighlight lang="cpp">#include <string>
using std::string;
// In bytes same as above since sizeof(char) == 1
string::size_type bytes = s.length() * sizeof(string::value_type);
}</syntaxhighlight>
For wide character strings:
<syntaxhighlight lang="cpp">#include <string>
using std::wstring;
wstring s = L"\u304A\u306F\u3088\u3046";
wstring::size_type length = s.length() * sizeof(wstring::value_type); // in bytes
}</syntaxhighlight>
===Character Length===
Line 803 ⟶ 924:
For wide character strings:
<syntaxhighlight lang="cpp">#include <string>
using std::wstring;
wstring s = L"\u304A\u306F\u3088\u3046";
wstring::size_type length = s.length();
}</syntaxhighlight>
For narrow character strings:
Line 817 ⟶ 938:
{{works with|clang++|3.0}}
<syntaxhighlight lang="cpp">#include <iostream>
#include <codecvt>
int main()
std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> conv;
std::cout << "Character length: " << conv.from_bytes(utf8).size() << '\n';
}</syntaxhighlight>
{{works with|C++98}}
{{works with|g++|4.1.2 20061115 (prerelease) (SUSE Linux)}}
<syntaxhighlight lang="cpp">#include <string>
#include <locale>
// return the result
return length;
}</syntaxhighlight>
Example usage (note that the locale names are OS specific):
<syntaxhighlight lang="cpp">#include <iostream>
int main()
// Tür in ISO-8859-1
std::cout << char_length("\x54\xfc\x72", "de_DE") << "\n"; // outputs 3
}</syntaxhighlight>
Note that the strings are given as explicit hex sequences, so that the encoding used for the source code won't matter.
=={{header|Clean}}==
Clean Strings are unboxed arrays of characters. Characters are always a single byte. The function size returns the number of elements in an array.
<syntaxhighlight lang="clean">
strlen :: String -> Int
strlen string = size string
Start = strlen "Hello, world!"</syntaxhighlight>
=={{header|Clojure}}==
===Byte Length===
<syntaxhighlight lang="clojure">(defn utf-8-octet-length [s] (count (.getBytes s "UTF-8")))
(map utf-8-octet-length ["møøse" "𝔘𝔫𝔦𝔠𝔬𝔡𝔢" "J\u0332o\u0332s\u0332e\u0301\u0332"]) ; (7 28 14)
(def code-unit-length count)
(map code-unit-length ["møøse" "𝔘𝔫𝔦𝔠𝔬𝔡𝔢" "J\u0332o\u0332s\u0332e\u0301\u0332"]) ; (5 14 9)</syntaxhighlight>
===Character length===
<syntaxhighlight lang="clojure">(defn character-length [s] (.codePointCount s 0 (.length s)))
(map character-length ["møøse" "𝔘𝔫𝔦𝔠𝔬𝔡𝔢" "J\u0332o\u0332s\u0332e\u0301\u0332"]) ; (5 7 9)</syntaxhighlight>
===Grapheme Length===
<syntaxhighlight lang="clojure">(def grapheme-length
#(->> (doto (java.text.BreakIterator/getCharacterInstance)
(.setText %))
(take-while (partial not= java.text.BreakIterator/DONE))
count))
(map grapheme-length ["møøse" "𝔘𝔫𝔦𝔠𝔬𝔡𝔢" "J\u0332o\u0332s\u0332e\u0301\u0332"]) ; (5 7 4)</syntaxhighlight>
=={{header|COBOL}}==
===Byte Length===
<syntaxhighlight lang="cobol">MOVE FUNCTION BYTE-LENGTH(str) TO len</syntaxhighlight>
Alternative, non-standard extensions:
{{works with|GNU Cobol}}
<syntaxhighlight lang="cobol">MOVE LENGTH OF str TO len</syntaxhighlight>
{{works with|GNU Cobol}}
{{works with|Visual COBOL}}
<syntaxhighlight lang="cobol"></syntaxhighlight>
===Character Length===
<syntaxhighlight lang="cobol">MOVE FUNCTION LENGTH(str) TO len</syntaxhighlight>
=={{header|ColdFusion}}==
===Byte Length===
<syntaxhighlight lang="cfm">
<cfoutput>
<cfset str = "Hello World">
<p>#arrayLen(t)#</p>
</cfoutput>
</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="cfm"><cfoutput>#len(str)#</cfoutput></syntaxhighlight>
=={{header|Common Lisp}}==
{{works with|SBCL}}
<syntaxhighlight lang="lisp">(length (sb-ext:string-to-octets "Hello World!"))</syntaxhighlight>
returns 12.
===Character Length===
Common Lisp represents strings as sequences of characters, not bytes, so there is no ambiguity about the encoding. The [http://www.lispworks.com/documentation/HyperSpec/Body/f_length.htm length] function always returns the number of characters in a string.
<pre>(length "Hello World")</pre>
returns 11, and
<pre>(length "Hello Wørld")</pre>
=={{header|Component Pascal}}==
===Character Length===
<syntaxhighlight lang="oberon2">
MODULE TestLen;
END TestLen.
</syntaxhighlight>
The ''$'' symbol in ''LEN(s$)'' in Component Pascal copies the sequence of characters up to the terminating null character, so ''LEN(s$)'' returns the actual number of characters rather than the space allocated for the variable.
===Byte Length===
<syntaxhighlight lang="oberon2">
MODULE TestLen;
END TestLen.
</syntaxhighlight>
Running the command ''TestLen.DoByteLength'' gives the following output:
<pre>
Length of characters in bytes: 10
</pre>
=={{header|Crystal}}==
UTF8 is the default encoding in Crystal.
===Byte Length===
<syntaxhighlight lang="crystal">"J̲o̲s̲é̲".bytesize</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="crystal">"J̲o̲s̲é̲".chars.size</syntaxhighlight>
=={{header|D}}==
===Byte Length===
<syntaxhighlight lang="d">import std.stdio;
void showByteLen(T)(T[] str) {
dstring s3c = "J̲o̲s̲é̲";
showByteLen(s3c);
}</syntaxhighlight>
{{out}}
<pre>Byte length: 7 - 6dc3b8c3b87365
</pre>
===Character Length===
<syntaxhighlight lang="d">import std.stdio;
void showCodePointsLen(T)(T[] str) {
dstring s3c = "J̲o̲s̲é̲";
showCodePointsLen(s3c);
}</syntaxhighlight>
{{out}}
<pre>Character length: 5 - 6d f8 f8 73 65
</pre>
=={{header|DataWeave}}==
===Character Length===
<syntaxhighlight lang="dataweave">sizeOf("Hello World")</syntaxhighlight>
{{out}}
=={{header|Dc}}==
===Byte Length===
Dc's "P" command prints numbers as strings. The number 22405534230753963835153736737 (hint: look at it in hex) represents "Hello world!". Computing its byte length amounts to counting how often it can be divided by 256 with a non-zero result. The snippet defines the macro which calculates the length, prints the string first and then its length.
<syntaxhighlight lang="dc">
22405534230753963835153736737 d P A P
lL x f</syntaxhighlight>
<pre>
Hello world!
Line 1,128 ⟶ 1,257:
===Character Length===
The following code outputs 5, which is the length of the string "abcde":
<syntaxhighlight lang="dc">[abcde] Z p</syntaxhighlight>
=={{header|Déjà Vu}}==
===Byte Length===
Byte length depends on the encoding, which internally is UTF-8, but users of the language can only get at the raw bytes after encoding a string into a blob.
<syntaxhighlight lang="dejavu">
!. len !encode!utf-8 "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"</syntaxhighlight>
{{out}}
<pre>
</pre>
===Character Length===
<syntaxhighlight lang="dejavu">
!. len "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"</syntaxhighlight>
{{out}}
<pre>5
</pre>
=={{header|Delphi}}==
See [https://rosettacode.org/wiki/String_length#Pascal Pascal].
=={{header|Dyalect}}==
<syntaxhighlight lang="dyalect">var n = "Hello, world!".Length()</syntaxhighlight>
=={{header|E}}==
===Character Length===
<syntaxhighlight lang="e">"Hello, world!".size()</syntaxhighlight>
=={{header|EasyLang}}==
===Character Length===
<syntaxhighlight lang="easylang">
# 5
print len "møøse"
# 7
print len "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
# 8
print len "J̲o̲s̲é̲"
# 1
print len "😀"
</syntaxhighlight>
=={{header|Ecstasy}}==
<syntaxhighlight lang="ecstasy">
module StrLen {
@Inject Console console;
void run(String s = "José") {
console.print($|For the string {s.quoted()}:
| Character length: {s.size}
| UTF-8 byte length: {s.calcUtf8Length()}
);
}
}
</syntaxhighlight>
{{out}}
<pre>
For the string "José":
Character length: 4
UTF-8 byte length: 5
</pre>
=={{header|Elena}}==
===Character Length===
ELENA 4.x :
<syntaxhighlight lang="elena">
public program()
Line 1,168 ⟶ 1,331:
var ws_length := ws.Length; // Number of UTF-16 characters
var u_length := ws.toArray().Length; //Number of UTF-32 characters
}</syntaxhighlight>
===Byte Length===
ELENA 4.x :
<syntaxhighlight lang="elena">
public program()
var s_byte_length := s.toByteArray().Length; // Number of bytes
var ws_byte_length := ws.toByteArray().Length; // Number of bytes
}</syntaxhighlight>
=={{header|Elixir}}==
===Byte Length===
<syntaxhighlight lang="elixir">
name = "J\x{332}o\x{332}s\x{332}e\x{301}\x{332}"
byte_size(name)
# => 14
</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="elixir">
name = "J\x{332}o\x{332}s\x{332}e\x{301}\x{332}"
Enum.count(String.codepoints(name))
# => 9
</syntaxhighlight>
===Grapheme Length===
<syntaxhighlight lang="elixir">
name = "J\x{332}o\x{332}s\x{332}e\x{301}\x{332}"
String.length(name)
# => 4
</syntaxhighlight>
=={{header|Emacs Lisp}}==
===Character Length===
<syntaxhighlight lang="lisp">(length "møøse")
;; => 5</syntaxhighlight>
===Byte Length===
<syntaxhighlight lang="lisp">(string-bytes "Hello world!")
;; => 12</syntaxhighlight>
<code>string-bytes</code> is the length of Emacs' internal representation. In Emacs 23 up this is utf-8. In earlier versions it was "emacs-mule".
<code>string-width</code> is the displayed width of a string in the current frame and window. This is not the same as grapheme length since various Asian characters may display in 2 columns, depending on the type of tty or GUI.
<syntaxhighlight lang="lisp">
(mapcar (lambda (c) (decode-char 'ucs c))
'(#x1112 #x1161 #x11ab #x1100 #x1173 #x11af)))))
(string-bytes str)
(string-width str)))
;; => (6 18 4) ;; in emacs 23 up</syntaxhighlight>
=={{header|EMal}}==
<syntaxhighlight lang="emal">
text moose = "møøse"
text unicode = "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
text jose = "J" + 0U0332 + "o" + 0U0332 + "s" + 0U0332 + "e" + 0U0301 + 0U0332
text emoji = "𠇰😈🎶🔥é-"
</syntaxhighlight>
===Byte Length===
<syntaxhighlight lang="emal">
writeLine((blob!moose).length)
writeLine((blob!unicode).length)
writeLine((blob!jose).length)
writeLine((blob!emoji).length)
</syntaxhighlight>
{{out}}
<pre>
7
28
14
19
</pre>
===Character Length===
<syntaxhighlight lang="emal">
writeLine(moose.codePointsLength)
writeLine(unicode.codePointsLength)
writeLine(jose.codePointsLength)
writeLine(emoji.codePointsLength)
</syntaxhighlight>
{{out}}
<pre>
5
7
9
6
</pre>
===Grapheme Length===
<syntaxhighlight lang="emal">
writeLine(moose.graphemesLength)
writeLine(unicode.graphemesLength)
writeLine(jose.graphemesLength)
writeLine(emoji.graphemesLength)
</syntaxhighlight>
{{out}}
<pre>
5
7
4
6
</pre>
=={{header|Erlang}}==
=={{header|Euphoria}}==
===Character Length===
<syntaxhighlight lang="euphoria">? length("Hello, world!")</syntaxhighlight>
=={{header|F_Sharp|F#}}==
This is delegated to the standard .Net framework string and encoding functions.
===Byte Length===
<syntaxhighlight lang="fsharp">open System.Text
let byte_length str = Encoding.UTF8.GetByteCount(str)</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="fsharp">let character_length str = String.length str</syntaxhighlight>
=={{header|Factor}}==
===Byte Length===
Here are two words to compute the byte length of strings. The first one doesn't allocate new memory, the second one can easily be adapted to measure the byte length of encodings other than UTF8.
<syntaxhighlight lang="factor">
: string-byte-length-2 ( string -- n ) utf8 encode length ;</syntaxhighlight>
===Character Length===
<code>length</code> works on any sequence, of which strings are one. Strings are UTF8 encoded.
<syntaxhighlight lang="factor">"møøse" length .</syntaxhighlight>
=={{header|Fantom}}==
Line 1,261 ⟶ 1,474:
A string can be converted into an instance of <code>Buf</code> to treat the string as a sequence of bytes according to a given charset: the default is UTF8, but 16-bit representations can also be used.
<syntaxhighlight lang="fantom">
fansh> c := "møøse"
møøse
Line 1,276 ⟶ 1,489:
fansh> c.toBuf(Charset.utf16BE).toHex // display as UTF16 big-endian
006d00f800f800730065
</syntaxhighlight>
===Character length===
<syntaxhighlight lang="fantom">
fansh> c := "møøse"
møøse
fansh> c.size
5
</syntaxhighlight>
=={{header|Forth}}==
Line 1,296 ⟶ 1,509:
A counted string is a single pointer to a short string in memory. The string's first byte is the count of the number of characters in the string. This is how symbols are stored in a Forth dictionary.
<syntaxhighlight lang="forth">
s C@ ( -- length=11 )
s COUNT ( addr len ) \ convert to a stack string, described below</syntaxhighlight>
'''Stack string'''
Line 1,304 ⟶ 1,517:
A string on the stack is represented by a pair of cells: the address of the string data and the length of the string data (in characters). The word '''COUNT''' converts a counted string into a stack string. The STRING utility wordset of ANS Forth works on these addr-len pairs. This representation has the advantages of not requiring null-termination, easy representation of substrings, and not being limited to 255 characters.
<syntaxhighlight lang="forth">S" Hello!" ( addr len )
DUP . \ 6</syntaxhighlight>
===Character Length===
Line 1,312 ⟶ 1,525:
The following code will count the number of UTF-8 characters in a null-terminated string. It relies on the fact that all bytes of a UTF-8 character except the first have the binary bit pattern "10xxxxxx".
<syntaxhighlight lang="forth">hex
: utf8+ ( str -- str )
begin
Line 1,320 ⟶ 1,533:
10000000 <>
until ;
decimal</syntaxhighlight>
<syntaxhighlight lang="forth">
0
begin
Line 1,329 ⟶ 1,542:
utf8+
swap 1+
repeat drop ;</syntaxhighlight>
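The "10xxxxxx" continuation-byte rule used above is language-independent; as a cross-check, here is a short JavaScript sketch of the same count over raw UTF-8 bytes (the function name is illustrative):

```javascript
// Count UTF-8 code points by counting the bytes that do NOT match the
// continuation-byte pattern 10xxxxxx, i.e. whose top two bits != 0b10.
function codePointCount(utf8Bytes) {
  let count = 0;
  for (const b of utf8Bytes) {
    if ((b & 0xC0) !== 0x80) count++;   // a non-continuation byte starts a character
  }
  return count;
}
// codePointCount(new TextEncoder().encode("møøse")) → 5
```

Masking with 0xC0 isolates the two high bits, exactly as the Forth code's AND with binary 11000000 does.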
=={{header|Fortran}}==
=={{header|FreeBASIC}}==
<syntaxhighlight lang="freebasic">
Dim s As String = "moose" '' variable length ascii string
Line 1,368 ⟶ 1,581:
Print "w : " ; w, "Character Length : "; Len(s), "Byte Length : "; SizeOf(w)
Print
Sleep</syntaxhighlight>
{{out}}
=={{header|Frink}}==
===Byte Length===
A string can be converted to an array of bytes in any supported encoding.
<syntaxhighlight lang="frink">
b = "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
length[stringToBytes[b, "UTF-8"]]
</syntaxhighlight>
===Character Length===
Frink's string operations correctly handle upper-plane Unicode characters as a single codepoint.
<syntaxhighlight lang="frink">
b = "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
length[b]
</syntaxhighlight>
===Grapheme Length===
<syntaxhighlight lang="frink">
b = "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
graphemeLength[b]
</syntaxhighlight>
=={{header|GAP}}==
<syntaxhighlight lang="gap">Length("abc");
# or same result with
Size("abc");</syntaxhighlight>
=={{header|Gnuplot}}==
===Byte Length===
<syntaxhighlight lang="gnuplot">print strlen("hello")
=> 5</syntaxhighlight>
=={{header|Go}}==
====Byte Length====
<syntaxhighlight lang="go">package main
import "fmt"
Line 1,421 ⟶ 1,634:
j := "J̲o̲s̲é̲"
fmt.Printf("%d %s % x\n", len(m), m, m)
fmt.Printf("%d %s % x\n", len(u), u, u)
fmt.Printf("%d %s % x\n", len(j), j, j)
}</syntaxhighlight>
Output:
<pre>
7 møøse
28 𝔘𝔫𝔦𝔠𝔬𝔡𝔢
</pre>
====Character Length====
<syntaxhighlight lang="go">package main
import (
Line 1,445 ⟶ 1,659:
fmt.Printf("%d %s %x\n", utf8.RuneCountInString(u), u, []rune(u))
fmt.Printf("%d %s %x\n", utf8.RuneCountInString(j), j, []rune(j))
}</syntaxhighlight>
Output:
<pre>
</pre>
===Grapheme Length===
Go does not have language or library features to recognize graphemes directly. For example, it does not provide functions implementing [http://www.unicode.org/reports/tr29/ Unicode Standard Annex #29, Unicode Text Segmentation]. It does however have convenient functions for recognizing Unicode character categories, and so an expected subset of grapheme possibilites is easy to recognize. Here is a solution recognizing the category "Mn", which includes the combining characters used in the task example.
<syntaxhighlight lang="go">package main
import (
Line 1,483 ⟶ 1,697:
}
return gr
}</syntaxhighlight>
Output:
<pre>
</pre>
=={{header|Groovy}}==
Calculating "Byte-length" (by which one typically means "in-memory storage size in bytes") is not possible through the facilities of the Groovy language alone. Calculating "Character length" is built into the Groovy extensions to java.lang.String.
===Character Length===
<syntaxhighlight lang="groovy">
println "Hello World!".size()
println "møøse".size()
println "𝔘𝔫𝔦𝔠𝔬𝔡𝔢".size()
println "J̲o̲s̲é̲".size()
</syntaxhighlight>
Output:
<pre>
12
5
14
8
</pre>
Note: The Java "String.length()" method also works in Groovy, but "size()" is consistent with usage in other sequential or composite types.
=={{header|GW-BASIC}}==
GW-BASIC only supports single-byte characters.
<syntaxhighlight lang="gwbasic">10 INPUT A$
20 PRINT LEN(A$)</syntaxhighlight>
=={{header|Haskell}}==
There are several (non-standard, so far) Unicode encoding libraries available on [http://hackage.haskell.org/ Hackage]. As an example, we'll use [http://hackage.haskell.org/packages/archive/encoding/0.2/doc/html/Data-Encoding.html encoding-0.2], as ''Data.Encoding'':
<syntaxhighlight lang="haskell">
import Data.ByteString as B
Line 1,525 ⟶ 1,749:
strlenUTF8 = B.length strUTF8
strlenUTF32 = B.length strUTF32</syntaxhighlight>
===Character Length===
{{works with|GHC|GHCi|6.6}}
Line 1,531 ⟶ 1,755:
The base type ''Char'' defined by the standard is already intended for (plain) Unicode characters.
<syntaxhighlight lang="haskell">length "Hello, world!"</syntaxhighlight>
=={{header|HicEst}}==
<syntaxhighlight lang="hicest">len = LEN("Hello world")</syntaxhighlight>
=={{header|HolyC}}==
===Byte Length===
<syntaxhighlight lang="holyc">U8 *string = "Hello, world!";
Print("%d\n", StrLen(string));
</syntaxhighlight>
=={{header|Icon}} and {{header|Unicon}}==
==== Character Length ====
<syntaxhighlight lang="icon">length := *s</syntaxhighlight>
Note: Neither Icon nor Unicon currently supports double-byte character sets.
=={{header|IDL}}==
'''Compiler:''' any IDL compiler should do
<syntaxhighlight lang="idl">length = strlen(s)</syntaxhighlight>
===Character Length===
{{needs-review|IDL}}
<syntaxhighlight lang="idl">length = strlen(s)</syntaxhighlight>
=={{header|Io}}==
===Byte Length===
<syntaxhighlight lang="io">"Hello, world!" size</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="io"></syntaxhighlight>
=={{header|J}}==
===Byte Length===
<syntaxhighlight lang="j">   #'møøse'
7</syntaxhighlight>
Here we use the default encoding for character literals (8 bit wide literals).
===Character Length===
<syntaxhighlight lang="j">   #7 u:'møøse'
5</syntaxhighlight>
Here we have used 16 bit wide character literals. See also the dictionary page for [http://www.jsoftware.com/help/dictionary/duco.htm u:].
=={{header|Jakt}}==
===Character Length===
<syntaxhighlight lang="jakt">
fn character_length(string: String) -> i64 {
mut length = 0
for _ in string.code_points() {
length++
}
return length
}
fn main() {
for string in [
"Hello world!"
"møøse"
"𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
"J̲o̲s̲é̲"
] {
println("\"{}\" {}", string, character_length(string))
}
}
</syntaxhighlight>
{{out}}
<pre>
"Hello world!" 12
"møøse" 5
"𝔘𝔫𝔦𝔠𝔬𝔡𝔢" 7
"J̲o̲s̲é̲" 8
</pre>
===Byte Length===
<syntaxhighlight lang="jakt">
fn main() {
for string in [
"Hello world!"
"møøse"
"𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
"J̲o̲s̲é̲"
] {
println("\"{}\" {}", string, string.length())
}
}
</syntaxhighlight>
{{out}}
<pre>
"Hello world!" 12
"møøse" 7
"𝔘𝔫𝔦𝔠𝔬𝔡𝔢" 28
"J̲o̲s̲é̲" 13
</pre>
=={{header|Java}}==
Another way to know the byte length of a string is to explicitly specify the charset we desire.
<syntaxhighlight lang="java">String s = "Hello, world!";
int byteCountUTF16 = s.getBytes("UTF-16").length; // Incorrect: it yields 28 (that is with the BOM)
int byteCountUTF16LE = s.getBytes("UTF-16LE").length; // Correct: it yields 26
int byteCountUTF8 = s.getBytes("UTF-8").length; // yields 13</syntaxhighlight>
===Character Length===
The length method of String objects is not the length of that String in characters. Instead, it only gives the number of 16-bit code units used to encode a string. This is not (always) the number of Unicode characters (code points) in the string.
<syntaxhighlight lang="java">
int not_really_the_length = s.length(); // XXX: does not (always) count Unicode characters (code points)!</syntaxhighlight>
Since Java 1.5, the actual number of characters (code points) can be determined by calling the codePointCount method.
<syntaxhighlight lang="java">String str = "\uD834\uDD2A"; // a single code point, U+1D12A, encoded as a surrogate pair
int not_really__the_length = str.length(); // value is 2, which is not the length in characters
int actual_length = str.codePointCount(0, str.length()); // value is 1, which is the length in characters</syntaxhighlight>
===Grapheme Length===
Since JDK 20<ref>https://bugs.openjdk.org/browse/JDK-8291660</ref>.
<syntaxhighlight lang="java">import java.text.BreakIterator;
public class Grapheme {
System.out.println("Grapheme length: " + count+ " " + s);
}
}</syntaxhighlight>
Output:
<pre>
=={{header|JavaScript}}==
===Byte length===
JavaScript encodes strings in UTF-16, which represents each character with one or two 16-bit values. The length property of string objects gives the number of 16-bit values used to encode a string, so the number of bytes can be determined by doubling that number.
<syntaxhighlight lang="javascript">
var s = "Hello, world!";
var byteCount = s.length * 2; // 26
</syntaxhighlight>
It's easier to use Buffer.byteLength (Node.JS specific, not ECMAScript).
<syntaxhighlight lang="javascript">
a = '👩❤️👩'
Buffer.byteLength(a, 'utf16le'); // 16
Buffer.byteLength(a, 'utf8'); // 20
Buffer.byteLength(s, 'utf16le'); // 26
Buffer.byteLength(s, 'utf8'); // 13
</syntaxhighlight>
In pure ECMAScript, TextEncoder() can be used to return the UTF-8 byte size:
<syntaxhighlight lang="javascript">
(new TextEncoder().encode(a)).length; // 20
(new TextEncoder().encode(s)).length; // 13
</syntaxhighlight>
=== Unicode codepoint length ===
JavaScript encodes strings in UTF-16, which represents each character with one or two 16-bit values. The most commonly used characters are represented by one 16-bit value, while rarer ones like some mathematical symbols are represented by two.
<syntaxhighlight lang="javascript">
var str1 = "Hello, world!";
var len1 = str1.length; // 13
var str2 = "\uD834\uDD2A"; // U+1D12A represented by a UTF-16 surrogate pair
var len2 = str2.length; // 2
</syntaxhighlight>
More generally, the spread operator in an array literal can be used to enumerate Unicode code points:
<syntaxhighlight lang="javascript">
[...str2].length // 1
</syntaxhighlight>
=== Unicode grapheme length ===
Counting Unicode codepoints when using combining characters such as joining sequences or diacritics will return the wrong size, so we must count graphemes instead. Intl.Segmenter() default granularity is grapheme.
<syntaxhighlight lang="javascript">
[...new Intl.Segmenter().segment(a)].length; // 1
</syntaxhighlight>
===ES6 destructuring/iterators===
ES6 provides several ways to get a string split into an array of code points instead of UTF-16 code units:
<syntaxhighlight lang="javascript">
str='AöЖ€𝄞'
,countofcodeunits=str.length // 6
countofcodepoints=cparr.length // 5
}
</syntaxhighlight>
=={{header|Joy}}==
;Byte length
<syntaxhighlight lang="joy">"Café" size.</syntaxhighlight>
{{out}}
<pre>5</pre>
=={{header|jq}}==
jq strings are JSON strings and are therefore encoded as UTF-8. When given a JSON string, the <tt>length</tt> filter emits the number of Unicode codepoints that it contains:
<syntaxhighlight lang="jq">
def describe:
"length of \(.) is \(length)";
("J̲o̲s̲é̲", "𝔘𝔫𝔦𝔠𝔬𝔡𝔢") | describe</syntaxhighlight>
{{out}}
<syntaxhighlight lang="sh">$ jq -n -f String_length.jq
"length of J̲o̲s̲é̲ is 8"
"length of 𝔘𝔫𝔦𝔠𝔬𝔡𝔢 is 7"</syntaxhighlight>
=={{header|JudoScript}}==
===Byte Length===
{{needs-review|JudoScript}}
<syntaxhighlight lang="judoscript">. length = "Hello World".length();</syntaxhighlight>
===Character Length===
{{needs-review| JudoScript}}
<syntaxhighlight lang="judoscript">. length = "Hello World".length()</syntaxhighlight>
=={{header|Julia}}==
Julia encodes strings as UTF-8, so the byte length (via <code>sizeof</code>) will be different from the string length (via <code>length</code>) only if the string contains non-ASCII characters.
===Byte Length===
<syntaxhighlight lang="julia">
sizeof("møøse") # 7
sizeof("𝔘𝔫𝔦𝔠𝔬𝔡𝔢") # 28
sizeof("J̲o̲s̲é̲") # 13
</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="julia">
length("møøse") # 5
length("𝔘𝔫𝔦𝔠𝔬𝔡𝔢") # 7
length("J̲o̲s̲é̲") # 8
</syntaxhighlight>
===Grapheme Length===
<syntaxhighlight lang="julia">
import Unicode
length(Unicode.graphemes("møøse")) # 5
length(Unicode.graphemes("𝔘𝔫𝔦𝔠𝔬𝔡𝔢")) # 7
length(Unicode.graphemes("J̲o̲s̲é̲")) # 4
</syntaxhighlight>
=={{header|K}}==
===Character Length===
<syntaxhighlight lang="k">
#"Hello, world!"
13
#"Hëllo, world!"
13
</syntaxhighlight>
=={{header|Kotlin}}==
As each UTF-16 character occupies 2 bytes, it follows that the number of bytes occupied by the string will be twice the length:
<syntaxhighlight lang="kotlin">
fun main() {
val s = "José"
println("The char length is ${s.length}")
println("The byte length is ${s.length * 2}")
}</syntaxhighlight>
{{out}}
The lambdatalk {W.length string} function returns the number of bytes in a string. For Unicode characters made of two bytes things are a little bit more tricky. It's easy to add (inline) a new javascript primitive to the dictionary:
<syntaxhighlight lang="scheme">
{script
LAMBDATALK.DICT["W.unicodeLength"] = function() {
{W.length 𝔘𝔫𝔦𝔠𝔬𝔡𝔢} -> 14
{W.unicodeLength 𝔘𝔫𝔦𝔠𝔬𝔡𝔢} -> 7
</syntaxhighlight>
=={{header|Lasso}}==
===Character Length===
<syntaxhighlight lang="lasso">
'møøse'->size // 5
'𝔘𝔫𝔦𝔠𝔬𝔡𝔢'->size // 7</syntaxhighlight>
===Byte Length===
<syntaxhighlight lang="lasso">
'møøse'->asBytes->size // 7
'𝔘𝔫𝔦𝔠𝔬𝔡𝔢'->asBytes->size // 28</syntaxhighlight>
=={{header|LFE}}==
=== Character Length ===
<syntaxhighlight lang="lisp">
(length "ASCII text")
10
> (length (unicode:characters_to_list encoded 'utf8))
12
</syntaxhighlight>
=== Byte Length ===
<syntaxhighlight lang="lisp">
> (set encoded (binary ("𝔘𝔫𝔦𝔠𝔬𝔡𝔢 𝔗𝔢𝒙𝔱" utf8)))
#B(240 157 148 152 240 157 148 171 240 157 ...)
> (byte_size encoded)
10
</syntaxhighlight>
=={{header|Liberty BASIC}}==
=={{header|Lingo}}==
===Character Length===
<syntaxhighlight lang="lingo">
put utf8Str.length
-- 15</syntaxhighlight>
===Byte Length===
<syntaxhighlight lang="lingo">
put bytearray(utf8Str).length
-- 18</syntaxhighlight>
=={{header|LiveCode}}==
===Character Length===
<syntaxhighlight lang="livecode">put the length of tString</syntaxhighlight>
or
<syntaxhighlight lang="livecode">put len(tString)</syntaxhighlight>
or
<syntaxhighlight lang="livecode">put the number of chars in tString</syntaxhighlight>
for Unicode character count use the code units keyword
<syntaxhighlight lang="livecode">put the number of codeunits in tString</syntaxhighlight>
===Byte Length===
Use the 'byte' keyword in LiveCode for an accurate unicode char byte count
<syntaxhighlight lang="livecode">put the number of bytes in tString</syntaxhighlight>
=={{header|Logo}}==
Logo is so old that only ASCII encoding is supported. Modern versions of Logo may have enhanced character set support.
<syntaxhighlight lang="logo">
print count "møøse ; 5
print char 248 ; ø - implies ISO-Latin character set</syntaxhighlight>
=={{header|LSE64}}==
===Byte Length===
LSE stores strings as arrays of characters in 64-bit cells plus a count.
<
===Character Length===
LSE uses counted strings: arrays of characters, where the first cell contains the number of characters in the string.
<
=={{header|Lua}}==
In Lua, a character is always the size of one byte so there is no difference between byte length and character length.
===Byte Length===
<syntaxhighlight lang="lua">str = "Hello world"
length = #str</syntaxhighlight>
or
<syntaxhighlight lang="lua">str = "Hello world"
length = string.len(str)</syntaxhighlight>
===Character Length===
Only valid for ASCII:
<syntaxhighlight lang="lua">str = "Hello world"
length = #str</syntaxhighlight>
or
<syntaxhighlight lang="lua">str = "Hello world"
length = string.len(str)</syntaxhighlight>
For Unicode string, use utf8 module:
<syntaxhighlight lang="lua">
utf8.len("møøse")
utf8.len("𝔘𝔫𝔦𝔠𝔬𝔡𝔢")
utf8.len("J̲o̲s̲é̲")
</syntaxhighlight>
{{out}}
<pre>
5
7
8
</pre>
=={{header|M2000 Interpreter}}==
<syntaxhighlight lang="m2000 interpreter">
module String_length {
A$=format$("J\u0332o\u0332s\u0332e\u0301\u0332")
Print Len(A$)=9
Print Len.Disp(A$) = 4 \\ display length
Buffer Clear Mem as Byte*100
\\ Write at memory at offset 0 or address Mem(0)
Print Eval$(Mem, 0, 18)
For i=0 to 17 step 2
\\ print hex value and character
Hex Eval(Mem, i as integer), ChrCode$(Eval(Mem, i as integer))
Next i
Document B$=A$
\\ encode to utf-8 with BOM (3 bytes 0xEF,0xBB,0xBF)
Print Filelen("Checklen.doc")=17
\\ So length is 14 bytes + 3 the BOM
Mem=Buffer("Checklen.doc")
Print len(Mem)=17 // len works for buffers too - unit byte
// version 12 can handle strings without suffix $
C=eval$(mem, 3, 14) // from 4th byte get 14 bytes in a string
Print len(C)*2=14 ' bytes // len()) for strings return double type of words (can return 0.5)
C=string$(C as utf8dec) ' decode bytes from utf8 to utf16LE
Print len(C)=9, C=A$, Len.Disp(C)=4
Print C
Report 2, C // proportional print on console - for text center justified rendering (2 - center)
}
String_length
</syntaxhighlight>
=={{header|Maple}}==
=== Character length ===
<
=== Byte count ===
<
=={{header|Mathematica}}/{{header|Wolfram Language}}==
=== Character length ===
<
=== Byte length ===
<
=={{header|MATLAB}}==
===Character Length===
<syntaxhighlight lang="matlab">>> length('møøse')
ans =
     5</syntaxhighlight>
===Byte Length===
MATLAB apparently encodes strings using UTF-16.
<syntaxhighlight lang="matlab">>> length(unicode2native('møøse', 'UTF-16LE'))
ans =
     10</syntaxhighlight>
=={{header|Maxima}}==
<syntaxhighlight lang="maxima">s: "the quick brown fox jumps over the lazy dog"$
slength(s);
/* 43 */</syntaxhighlight>
=={{header|MAXScript}}==
===Character Length===
<syntaxhighlight lang="maxscript">"Hello world".count</syntaxhighlight>
=={{header|Mercury}}==
===Byte Length===
<syntaxhighlight lang="mercury">:- module string_length.
:- interface.
write_length(String, !IO):-
NumBytes = count_utf8_code_units(String),
io.format("%s: %d bytes\n", [s(String), i(NumBytes)], !IO).</syntaxhighlight>
Output:
===Character Length===
The function <tt>string.count_codepoints/1</tt> returns the number of code points in a string.
<syntaxhighlight lang="mercury">:- module string_length.
:- interface.
write_length(String, !IO) :-
NumChars = count_codepoints(String),
io.format("%s: %d characters\n", [s(String), i(NumChars)], !IO).</syntaxhighlight>
Output:
Metafont has no way of handling properly encodings different from ASCII. So it is able to count only the number of bytes in a string.
<syntaxhighlight lang="metafont">
s := "Hello Moose";
show length(s); % 11 (ok)
s := "Hello Møøse";
show length(s); % 13 (number of bytes when the string is UTF-8 encoded,
% since ø takes two bytes)</syntaxhighlight>
'''Note''': in the lang tag, Møøse is Latin1-reencoded, showing up two bytes (as Latin1) instead of one
=={{header|MIPS Assembly}}==
This only supports ASCII encoding, so it'll return both byte length and char length.
<syntaxhighlight lang="mips">
.data
#.asciiz automatically adds the NULL terminator character, \0 for us.
li $v0,10 #set syscall to cleanly exit EXIT_SUCCESS
syscall
</syntaxhighlight>
=={{header|mIRC Scripting Language}}==
===Byte Length===
{{needs-review|mIRC Scripting Language}}
<
===Character Length===
{{needs-review|mIRC Scripting Language}}
''$utfdecode()'' converts a UTF-8 string to the locale encoding, with unrepresentable characters shown as question marks. Since mIRC is not yet fully Unicode aware, entering Unicode text through a dialog box will automatically convert it to ASCII.
<syntaxhighlight lang="mirc">
alias stringlength2 {
var %name = Børje
echo -a %name is: $utf8len(%name) characters long!
}</syntaxhighlight>
=={{header|Modula-3}}==
===Byte Length===
<syntaxhighlight lang="modula3">MODULE ByteLength EXPORTS Main;
IMPORT IO, Fmt, Text;
BEGIN
IO.Put("Byte length of s: " & Fmt.Int((Text.Length(s) * BYTESIZE(s))) & "\n");
END ByteLength.</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="modula3">MODULE StringLength EXPORTS Main;
IMPORT IO, Fmt, Text;
BEGIN
IO.Put("String length of s: " & Fmt.Int(Text.Length(s)) & "\n");
END StringLength.</syntaxhighlight>
=={{header|Nemerle}}==
Both examples rely on .NET facilities, so they're almost identical to C#
===Character Length===
<syntaxhighlight lang="nemerle">def message = "How long am I anyways?";
def charlength = message.Length;</syntaxhighlight>
===Byte Length===
<syntaxhighlight lang="nemerle">using System.Text;
def message = "How long am I anyways?";
def bytelength = Encoding.Unicode.GetByteCount(message);</syntaxhighlight>
=={{header|NewLISP}}==
===Character Length===
<syntaxhighlight lang="newlisp">(set 'Str "møøse")
(println Str " is " (length Str) " characters long")</syntaxhighlight>
=={{header|Nim}}==
===Byte Length===
<syntaxhighlight lang="nim">
echo "møøse".len # 7
echo "𝔘𝔫𝔦𝔠𝔬𝔡𝔢".len # 28
echo "J̲o̲s̲é̲".len # 13
</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="nim">
import unicode
echo "møøse".runeLen # 5
echo "𝔘𝔫𝔦𝔠𝔬𝔡𝔢".runeLen # 7
echo "J̲o̲s̲é̲".runeLen # 8
</syntaxhighlight>
===Grapheme Length===
[https://nim-lang.org/docs/unicode.html#graphemeLen%2Cstring%2CNatural graphemeLen()] does not do what you expect. It doesn't return the number of grapheme in a string but returns the number of bytes at a character/codepoint index for a given string.
=={{header|Oberon-2}}==
===Byte Length===
<syntaxhighlight lang="oberon2">MODULE Size;
IMPORT Out;
Out.LongInt(s,0);
Out.Ln;
END Size.</syntaxhighlight>
Output:
===Character Length===
<syntaxhighlight lang="oberon2">MODULE Length;
IMPORT Out, Strings;
Out.Int(l,0);
Out.Ln;
END Length.</syntaxhighlight>
Output:
===Character Length===
<syntaxhighlight lang="objeck">
"Foo"->Size()->PrintLine();
</syntaxhighlight>
===Byte Length===
<syntaxhighlight lang="objeck">
"Foo"->Size()->PrintLine();
</syntaxhighlight>
=={{header|Objective-C}}==
The length method of NSString objects is not the length of that string in characters. Instead, it only gives the number of 16-bit code units used to encode a string. This is not (always) the number of Unicode characters (code points) in the string.
<syntaxhighlight lang="objc">
// XXX: does not (always) count Unicode characters (code points)!
unsigned int numberOfCharacters = [@"møøse" length]; // 5</syntaxhighlight>
Since Mac OS X 10.6, CFString has methods for converting between supplementary characters and surrogate pair. However, the easiest way to get the number of characters is probably to encode it in UTF-32 (which is a fixed-length encoding) and divide by 4:
<syntaxhighlight lang="objc">unsigned long numberOfCharacters = [@"møøse" lengthOfBytesUsingEncoding: NSUTF32StringEncoding] / 4; // 5</syntaxhighlight>
===Byte Length===
Objective-C encodes strings in UTF-16, which represents each character with one or two 16-bit values. The length method of NSString objects returns the number of 16-bit values used to encode a string, so the number of bytes can be determined by doubling that number.
<syntaxhighlight lang="objc">unsigned long numberOfBytes = [@"møøse" length] * 2; // 10</syntaxhighlight>
Another way to know the byte length of a string is to explicitly specify the charset we desire.
<syntaxhighlight lang="objc">
// here explicitly UTF-8
unsigned numberOfBytes =
[@"møøse" lengthOfBytesUsingEncoding: NSUTF8StringEncoding]; // 7</syntaxhighlight>
=={{header|OCaml}}==
Standard OCaml strings are classic ASCII ISO 8859-1, so the function String.length returns the byte length which is the character length in this encoding:
<syntaxhighlight lang="ocaml">String.length "Hello world"  (* 11 *)</syntaxhighlight>
===Character Length===
While using the '''UTF8''' module of ''Camomile'' the byte length of an utf8 encoded string will be get with <tt>String.length</tt> and the character length will be returned by <tt>UTF8.length</tt>:
<syntaxhighlight lang="ocaml">open CamomileLibrary
let () =
Printf.printf " %d\n" (String.length "møøse");
Printf.printf " %d\n" (UTF8.length "møøse");
;;</syntaxhighlight>
Run this code with the command:
7
5
</pre>
Alternatively, since OCaml 4.14 the standard library can decode UTF-8 itself (via <code>String.get_utf_8_uchar</code> and <code>Uchar.utf_decode_length</code>), with no additional modules.
<syntaxhighlight lang="OCaml">
let utf8_length (s: String.t) =
let byte_length = String.length s in
let rec count acc n =
if n = byte_length
then acc
else
let n' = n + (String.get_utf_8_uchar s n |> Uchar.utf_decode_length) in
count (succ acc) n'
in
count 0 0
;;
</syntaxhighlight>
<pre>
# utf8_length "møøse"
- : int = 5
</pre>
=={{header|Octave}}==
<syntaxhighlight lang="octave">s = "Hello, world!";
stringlen = length(s)</syntaxhighlight>
This gives the number of bytes, not of characters. e.g. length("è") is 2 when "è" is encoded e.g. as UTF-8.
=={{header|Ol}}==
<syntaxhighlight lang="scheme">
; Character length
(print (string-length "Hello, wørld!"))
(print (length (string->bytes "Hello, wørld!")))
; ==> 14
</syntaxhighlight>
=={{header|OpenEdge/Progress}}==
===Character Length===
<syntaxhighlight lang="progress">DEFINE VARIABLE lcc AS LONGCHAR.
FIX-CODEPAGE( lcc ) = "UTF-8".
lcc = "møøse".
MESSAGE LENGTH( lcc ) VIEW-AS ALERT-BOX.</syntaxhighlight>
===Byte Length===
<syntaxhighlight lang="progress">DEFINE VARIABLE lcc AS LONGCHAR.
FIX-CODEPAGE( lcc ) = "UTF-8".
lcc = "møøse".
MESSAGE LENGTH( lcc, "RAW" ) VIEW-AS ALERT-BOX.</syntaxhighlight>
=={{header|Oz}}==
===Byte Length===
<
Oz uses a single-byte encoding by default. So for normal strings, this will also show the correct character length.
===Character Length===
Characters = bytes in Pari; the underlying strings are C strings interpreted as US-ASCII.
<syntaxhighlight lang="parigp">#s</syntaxhighlight>
===Byte Length===
This works on objects of any sort, not just strings, and includes overhead.
<syntaxhighlight lang="parigp">sizebyte(s)</syntaxhighlight>
=={{header|Pascal}}==
===Byte Length===
<syntaxhighlight lang="pascal">program ByteLength;
const
s = 'abcdef';
writeln (length(s))
end.
</syntaxhighlight>
Output:
<pre>
Strings in Perl consist of characters. Measuring the byte length therefore requires conversion to some binary representation (called encoding, both noun and verb).
<syntaxhighlight lang="perl">use utf8;
use Encode qw(encode);
print length encode 'UTF-16', "Hello, world! ☺";
# 32. 2 bytes for the BOM, then 15 byte pairs for each character.</syntaxhighlight>
===Character Length===
{{works with|Perl|5.X}}
<syntaxhighlight lang="perl">print length "Hello, world! ☺";  # 15</syntaxhighlight>
===Grapheme Length===
{{works with|Perl|5.12}}
<syntaxhighlight lang="perl">
my $string = "\x{1112}\x{1161}\x{11ab}\x{1100}\x{1173}\x{11af}"; # 한글
my $len;
$len++ while ($string =~ /\X/g);
printf "Grapheme length: %d\n", $len;</syntaxhighlight>
{{out}}
{{libheader|Phix/basics}}
The standard length function returns the number of bytes, character length is achieved by converting to utf32
<!--<syntaxhighlight lang="phix">(phixonline)-->
<span style="color: #008080;">constant</span> <span style="color: #000000;">s</span> <span style="color: #0000FF;">=</span> <span style="color: #008000;">"𝔘𝔫𝔦𝔠𝔬𝔡𝔢"</span>
<span style="color: #0000FF;">?</span><span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s</span><span style="color: #0000FF;">)</span>
<span style="color: #0000FF;">?</span><span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">utf8_to_utf32</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s</span><span style="color: #0000FF;">))</span>
<!--</syntaxhighlight>-->
{{out}}
<pre>
=={{header|PHP}}==
Program in a UTF8 linux:
<syntaxhighlight lang="php"><?php
foreach (array('møøse', '𝔘𝔫𝔦𝔠𝔬𝔡𝔢', 'J̲o̲s̲é̲') as $s1) {
printf('String "%s" measured with strlen: %d mb_strlen: %s grapheme_strlen %s%s',
$s1, strlen($s1),mb_strlen($s1), grapheme_strlen($s1), PHP_EOL);
}
</syntaxhighlight>
yields the result:
<pre>
=={{header|PicoLisp}}==
<syntaxhighlight lang="picolisp">(let Str "møøse"
(prinl "Character Length of \"" Str "\" is " (length Str))
(prinl "Byte Length of \"" Str "\" is " (size Str)) )</syntaxhighlight>
Output:
<pre>Character Length of "møøse" is 5
=={{header|PL/I}}==
<syntaxhighlight lang="pli">declare WS widechar (13) initial ('Hello world');
put ('Character length=', length (WS));
put skip list ('Byte length=', size(WS));
declare SM graphic (13) initial ('Hello world');
put ('Character length=', length(SM));
put skip list ('Byte length=', size(trim(SM)));</syntaxhighlight>
=={{header|PL/SQL}}==
LENGTH4 uses UCS4 code points.
===Byte Length===
<syntaxhighlight lang="plsql">DECLARE
string VARCHAR2(50) := 'Hello, world!';
stringlength NUMBER;
BEGIN
stringlength := LENGTHB(string);
END;</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="plsql">DECLARE
string VARCHAR2(50) := 'Hello, world!';
stringlength NUMBER;
ucs2length := LENGTH2(string);
ucs4length := LENGTH4(string);
END;</syntaxhighlight>
=={{header|Plain English}}==
===Byte Length===
{{libheader|Plain English-output}}
Plain English does not handle Unicode, so strings return their length in bytes.
<syntaxhighlight lang="text">
To run:
Start up.
Put "møøse" into a string.
Write the string's length to the output.
Wait for the escape key.
Shut down.
</syntaxhighlight>
=={{header|Pop11}}==
Currently Pop11 supports only strings consisting of 1-byte units. Strings can carry arbitrary binary data, so user can for example use UTF-8 (however builtin procedures will treat each byte as a single character). The length function for strings returns length in bytes:
<syntaxhighlight lang="pop11">lvars str = 'Hello, world!';
lvars len = length(str);</syntaxhighlight>
=={{header|PostScript}}==
===Character Length===
<syntaxhighlight lang="text">
(Hello World) length =
11
</syntaxhighlight>
=={{header|Potion}}==
===Character Length===
<syntaxhighlight lang="potion">"møøse" length print
"𝔘𝔫𝔦𝔠𝔬𝔡𝔢" length print
"J̲o̲s̲é̲" length print</syntaxhighlight>
=={{header|PowerShell}}==
===Character Length===
<syntaxhighlight lang="powershell">$s = "Hello, world!"
$s.Length</syntaxhighlight>
===Byte Length===
{{trans|C#}}
For UTF-16, which is the default in .NET and therefore PowerShell:
<syntaxhighlight lang="powershell">$s = "Hello, world!"
[System.Text.Encoding]::Unicode.GetByteCount($s)</syntaxhighlight>
For UTF-8:
<syntaxhighlight lang="powershell">[System.Text.Encoding]::UTF8.GetByteCount($s)</syntaxhighlight>
=={{header|PureBasic}}==
===Character Length===
<syntaxhighlight lang="purebasic">a = Len("ä") ;a will be 1</syntaxhighlight>
===Byte Length===
Note: The number of bytes returned does not include the terminating Null-Character of the string. The size of the Null-Character is 1 byte for Ascii and UTF8 mode and 2 bytes for Unicode mode.
<syntaxhighlight lang="purebasic">a = StringByteLength("ä", #PB_UTF8)    ;a will be 2
b = StringByteLength("ä", #PB_Ascii) ;b will be 1
c = StringByteLength("ä", #PB_Unicode) ;c will be 2
</syntaxhighlight>
=={{header|Python}}==
For 8-bit strings, the byte length is the same as the character length:
<syntaxhighlight lang="python">print len('ascii')
# 5</syntaxhighlight>
For Unicode strings, length depends on the internal encoding. Since version 2.2 Python shipped with two build options: it either uses 2 or 4 bytes per character. The internal representation is not interesting for the user.
<syntaxhighlight lang="python">
print len(u'\u05d0'.encode('utf-8'))
# 2
print len(u'\u05d0'.encode('iso-8859-8'))
# 1</syntaxhighlight>
Example from the problem statement:
<syntaxhighlight lang="python">
# -*- coding: UTF-8 -*-
s = u"møøse"
assert len(s) == 5
assert len(s.encode('UTF-8')) == 7
assert len(s.encode('UTF-16-BE')) == 10 # There are 3 different UTF-16 encodings: LE and BE are little endian and big endian respectively, the third one (without suffix) adds 2 extra leading bytes: the byte-order mark (BOM).</syntaxhighlight>
====Character Length====
{{works with|Python|2.4}}
len() returns the number of code units (not code points!) in a Unicode string or plain ASCII string. On a wide build, this is the same as the number of code points, but on a narrow one it is not. Most linux distributions install the wide build by default, you can check the build at runtime with:
<syntaxhighlight lang="python">import sys
sys.maxunicode # 1114111 on a wide build, 65535 on a narrow build</syntaxhighlight>
To get the length of encoded string, you have to decode it first:
<syntaxhighlight lang="python">print len('møøse'.decode('utf-8'))
# 5
print len(u'\u05d0') # the letter Alef as unicode literal
Line 2,488 ⟶ 2,909:
# 1
print hex(sys.maxunicode), len(unichr(0x1F4A9))
# ('0x10ffff', 1)</syntaxhighlight>
On a narrow build, len() gives the wrong answer for non-BMP chars
<syntaxhighlight lang="python">print hex(sys.maxunicode), len(u'\U0001F4A9')
# ('0xffff', 2)</syntaxhighlight>
===3.x===
You can use len() to get the length of a byte sequence.
<syntaxhighlight lang="python">print(len(b'Hello, world!'))
# 13</syntaxhighlight>
To get a byte sequence from a string, you have to encode it with the desired encoding:
<syntaxhighlight lang="python">
print(len('\u05d0'.encode())) # the default encoding is utf-8 in Python3
# 2
print(len('\u05d0'.encode('iso-8859-8')))
# 1</syntaxhighlight>
Example from the problem statement:
<syntaxhighlight lang="python">
# -*- coding: UTF-8 -*-
s = "møøse"
u="𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
assert len(u.encode()) == 28
assert len(u.encode('UTF-16-BE')) == 28</syntaxhighlight>
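The three UTF-16 codecs differ only in whether a byte-order mark is prepended, which can be checked directly:
<syntaxhighlight lang="python">s = "møøse"
print(len(s.encode('UTF-16')))    # 12: 10 bytes of text plus a 2-byte BOM
print(len(s.encode('UTF-16-LE'))) # 10
print(len(s.encode('UTF-16-BE'))) # 10
</syntaxhighlight>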
====Character Length====
Thus Python is able to avoid memory overhead when dealing with only ASCII strings, while handling correctly all codepoints in Unicode. len() returns the number of characters/codepoints:
<syntaxhighlight lang="python">print(len('𝔘𝔫𝔦𝔠𝔬𝔡𝔢'))
# 7</syntaxhighlight>
Until Python 3.2 instead, length depended on the internal encoding, since it shipped with two build options: it either used 2 or 4 bytes per character.
len() returned the number of code units in a string, which could be different from the number of characters. In a narrow build, this is not a reliable way to get the number of characters. You can only easily count code points in a wide build. Most linux distributions install the wide build by default, you can check the build at runtime with:
<syntaxhighlight lang="python">import sys
sys.maxunicode # 1114111 on a wide build, 65535 on a narrow build</syntaxhighlight>
<syntaxhighlight lang="python">print(len('møøse'))
# 5
print(len('\u05d0')) # the letter Alef as unicode literal
# 1</syntaxhighlight>
To get the length of an encoded byte sequence, you have to decode it first:
<syntaxhighlight lang="python">print(len(b'\xd7\x90'.decode('utf-8')))
# 1</syntaxhighlight>
<syntaxhighlight lang="python">print((hex(sys.maxunicode), len('\U0001F4A9')))
# ('0x10ffff', 1)</syntaxhighlight>
On a narrow build, len() gives the wrong answer for non-BMP chars
<syntaxhighlight lang="python">print((hex(sys.maxunicode), len('\U0001F4A9')))
# ('0xffff', 2)</syntaxhighlight>
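====Grapheme Length====
The standard library offers no full grapheme-cluster segmentation (the third-party <code>regex</code> module's <code>\X</code> pattern provides one). As a rough sketch, combining marks can be excluded with <code>unicodedata</code>; this approximation handles strings such as "J̲o̲s̲é̲" but ignores ZWJ emoji sequences and Hangul jamo, which require the full UAX #29 rules:
<syntaxhighlight lang="python">import unicodedata

def grapheme_count_approx(s):
    # Count code points that are not combining marks (categories Mn, Mc, Me).
    return sum(1 for ch in s if not unicodedata.category(ch).startswith('M'))

print(grapheme_count_approx("J\u0332o\u0332s\u0332e\u0301\u0332")) # 4
print(grapheme_count_approx("møøse"))                              # 5
</syntaxhighlight>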
=={{header|R}}==
===Byte length===
<syntaxhighlight lang="rsplus">a <- "møøse"
print(nchar(a, type="bytes")) # print 7</syntaxhighlight>
===Character length===
<syntaxhighlight lang="rsplus">print(nchar(a, type="chars")) # print 5</syntaxhighlight>
=={{header|Racket}}==
Using this definition:
<
on the REPL, we get the following:
===Character length===
<syntaxhighlight lang="racket">str has 9 characters</syntaxhighlight>
===Byte length===
<syntaxhighlight lang="racket">str has 14 bytes in utf-8</syntaxhighlight>
=={{header|Raku}}==
===Byte Length===
<syntaxhighlight lang="raku" line>say 'møøse'.encode('UTF-8').bytes;   # 7</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="raku" line>say 'møøse'.codes;   # 5</syntaxhighlight>
===Grapheme Length===
<syntaxhighlight lang="raku" line>say 'møøse'.chars;   # 5</syntaxhighlight>
=={{header|REBOL}}==
===Byte Length===
<syntaxhighlight lang="rebol">;; r2
length? "møøse"
;; r3
length? to-binary "møøse"</syntaxhighlight>
===Character length===
<syntaxhighlight lang="rebol">
length? "møøse"</syntaxhighlight>
=={{header|ReScript}}==
===Byte Length===
<syntaxhighlight lang="rescript">Js.String2.length("abcd") == 4</syntaxhighlight>
=={{header|Retro}}==
===Byte Length===
<syntaxhighlight lang="retro">"møøse" getLength putn</syntaxhighlight>
===Character Length===
Retro does not have built-in support for Unicode, but counting of characters can be done with a small amount of effort.
<syntaxhighlight lang="retro">
{{
: utf+ ( $-$ )
;chain
"møøse" ^UTF8'getLength putn</syntaxhighlight>
=={{header|REXX}}==
<br>is stored as character strings.
===Byte Length===
<syntaxhighlight lang="rexx">
/* 1 */ /*a handy-dandy over/under scale.*/
/* 123456789012345 */
sum = 5+1 ; say 'the length of SUM is ' length(sum)
/* [↑] is, of course, 6. */
/*stick a fork in it, we're done.*/</syntaxhighlight>
'''output'''
<pre>
=={{header|Ring}}==
===Character Length===
<syntaxhighlight lang="ring">
aString = "Welcome to the Ring Programming Language"
aStringSize = len(aString)
see "Character length : " + aStringSize
</syntaxhighlight>
=={{header|Robotic}}==
===Character Length===
<syntaxhighlight lang="robotic">
set "$local1" to "Hello world!"
* "String length: &$local1.length&"
end
</syntaxhighlight>
Unfortunately, only character length can be retrieved in this language.
=={{header|RPL}}==
RPL strings are all made of 8-bit characters.
"RPL" SIZE
=={{header|Ruby}}==
UTF8 is the default encoding in Ruby.
===Byte Length===
<syntaxhighlight lang="ruby">"møøse".bytesize  # => 7</syntaxhighlight>
===Character Length===
<syntaxhighlight lang="ruby">"møøse".length  # => 5</syntaxhighlight>
===Grapheme Length===
<syntaxhighlight lang="ruby">"møøse".grapheme_clusters.size  # => 5</syntaxhighlight>
===Code Set Independence===
The next examples show the '''byte length''' and '''character length''' of "møøse" in different encodings.
! Output
|-
| <
s = "møøse"
puts "Byte length: %d" % s.bytesize
puts "Character length: %d" % s.length</syntaxhighlight>
| <pre>Byte length: 5
Character length: 5</pre>
|-
| <
s = "møøse"
puts "Byte length: %d" % s.bytesize
puts "Character length: %d" % s.length</syntaxhighlight>
| <pre>Byte length: 7
Character length: 5</pre>
|-
| <
s = "møøse"
puts "Byte length: %d" % s.bytesize
puts "Character length: %d" % s.length</syntaxhighlight>
| <pre>Byte length: 11
Character length: 5</pre>
Then either <code>string.scan(/./u).size</code> or <code>string.gsub(/./u, ' ').size</code> counts the UTF-8 characters in string.
<syntaxhighlight lang="ruby">
class String
s = "文字化け"
puts "Byte length: %d" % s.bytesize
puts "Character length: %d" % s.gsub(/./u, ' ').size</syntaxhighlight>
=={{header|Run BASIC}}==
<syntaxhighlight lang="runbasic">a$ = "Hello, world!"
print len(a$)</syntaxhighlight>
=={{header|Rust}}==
===Byte Length===
<syntaxhighlight lang="rust">
fn main() {
let s = "文字化け"; // UTF-8
println!("Byte Length: {}", s.len());
}
</syntaxhighlight>
===Character Length===
<
fn main() {
let s = "文字化け"; // UTF-8
println!("Character length: {}", s.chars().count());
}
</syntaxhighlight>
=={{header|SAS}}==
<syntaxhighlight lang="sas">data _null_;
a="Hello, World!";
b=length(a);
put _all_;
run;</syntaxhighlight>
=={{header|Scala}}==
{{libheader|Scala}}
<syntaxhighlight lang="scala">
object StringLength extends App {
val s1 = "møøse"
} UTF16bytes= ${s.getBytes("UTF-16LE").size}"))
}
</syntaxhighlight>
{{out}}
<pre>The string: møøse, characterlength= 5 UTF8bytes= 7 UTF16bytes= 10
{{works_with|Gauche|0.8.7 [utf-8,pthreads]}}
'''string-size''' function is only Gauche function.
<syntaxhighlight lang="scheme">(string-size "møøse")</syntaxhighlight>
{{works with|PLT Scheme|4.2.4}}
<syntaxhighlight lang="scheme">(bytes-length (string->bytes/utf-8 "møøse"))</syntaxhighlight>
===Character Length===
{{works_with|Gauche|0.8.7 [utf-8,pthreads]}}
'''string-length''' function is in [[R5RS]], [[R6RS]].
<syntaxhighlight lang="scheme">(string-length "møøse")</syntaxhighlight>
=={{header|sed}}==
Text is read from standard input e.g. <code>echo "string" | sed -f script.sed</code> or <code>sed -f script.sed file.txt</code> (The solution given would be the contents of a text file <code>script.sed</code> in these cases).
For files with more than one line, sed will give a count for each line.
<syntaxhighlight lang="sed"># create unary numeral (i = 1)
s/./i/g
:loop
# divide by 10 (x = 10)
s/i\{10\}/x/g
# convert remainder to decimal digit
/i/!s/[0-9]*$/0&/
s/i\{9\}/9/
s/i\{8\}/8/
s/i\{7\}/7/
s/i\{6\}/6/
s/i\{5\}/5/
s/iiii/4/
s/iii/3/
s/ii/2/
s/i/1/
# convert quotient (10s) to 1s
y/x/i/
# start over for the next magnitude (if any)
/i/b loop</syntaxhighlight>
=={{header|Seed7}}==
===Character Length===
<syntaxhighlight lang="seed7">writeln(length("Hello, world!"));</syntaxhighlight>
=={{header|SETL}}==
===Character Length===
<syntaxhighlight lang="setl">print(#"Hello, world!");</syntaxhighlight>
=={{header|Sidef}}==
<
===Byte Length===
UTF-8 byte length (default):
<
UTF-16 byte length:
<
===Character Length===
<
===Grapheme Length===
<
=={{header|Simula}}==
</pre>
===Byte Length===
<syntaxhighlight lang="simula">BEGIN
TEXT LINE;
WHILE NOT LASTITEM DO
END;
END.
</syntaxhighlight>
{{out}}
<pre>
Line 2,894 ⟶ 3,329:
===Character Length===
To calculate the character length, one can do it manually:
<syntaxhighlight lang="simula">BEGIN
! NUMBER OF UFT8 CHARACTERS IN STRING ;
END;
END.</syntaxhighlight>
{{out}}
<pre>"møøse" CHARACTER LENGTH = 5
Line 2,948 ⟶ 3,383:
=={{header|Slate}}==
<syntaxhighlight lang="slate">'hello' length.</syntaxhighlight>
=={{header|Slope}}==
=== Character Length ===
<syntaxhighlight lang="slope">(length "møøse")</syntaxhighlight>
=== Byte Length ===
<syntaxhighlight lang="slope">(length (string->bytes "møøse"))</syntaxhighlight>
=={{header|Smalltalk}}==
{{works with|Smalltalk/X}}
<syntaxhighlight lang="smalltalk">
'hello' utf8Encoded size -> 5
'hello' utf8Encoded asByteArray -> #[104 101 108 108 111]
'𝔘𝔫𝔦𝔠𝔬𝔡𝔢' utf8Encoded asByteArray -> #[240 157 148 152 240 157 148 171 240 157 148 166 240 157 148 160 240 157 148 172 240 157 148 161 240 157 148 162]
'𝔘𝔫𝔦𝔠𝔬𝔡𝔢' utf16Encoded size -> 14
'𝔘𝔫𝔦𝔠𝔬𝔡𝔢' utf16Encoded asWordArray -> WordArray(55349 56600 55349 56619 55349 56614 55349 56608 55349 56620 55349 56609 55349 56610)</syntaxhighlight>
===Byte Length===
{{works with|GNU Smalltalk}}
<syntaxhighlight lang="smalltalk">string := 'Hello, world!'.
string size.</syntaxhighlight>
===Character Length===
{{works with|GNU Smalltalk}}
<syntaxhighlight lang="smalltalk">string := 'møøse'.
string numberOfCharacters.</syntaxhighlight>
requires loading the Iconv package:
<syntaxhighlight lang="smalltalk">PackageLoader fileInPackage: 'Iconv'.</syntaxhighlight>
=={{header|SNOBOL4}}==
===Byte Length ===
<syntaxhighlight lang="snobol4">
output = "Byte length: " size(trim(input))
end
</syntaxhighlight>
===Character Length ===
The example works AFAIK only with CSnobol4 by Phil Budne
<syntaxhighlight lang="snobol4">
-include "utf.sno"
output = "Char length: " utfsize(trim(input))
end
</syntaxhighlight>
=={{header|Sparkling}}==
===Byte length===
<syntaxhighlight lang="sparkling">sizeof("Hello, wørld!")
= 14</syntaxhighlight>
=={{header|SPL}}==
All strings in SPL are Unicode. See code below.
===Character Length===
<syntaxhighlight lang="spl">
> i, 1..#.size(t,1)
<
#.output(s)
<</syntaxhighlight>
{{out}}
<pre>
{{works with|Db2 LUW}}
With SQL only:
<syntaxhighlight lang="sql">
VALUES LENGTH('møøse', CODEUNITS16);
VALUES LENGTH('møøse', CODEUNITS32);
Line 3,079 ⟶ 3,521:
VALUES LENGTH2('J̲o̲s̲é̲');
VALUES LENGTH4('J̲o̲s̲é̲');
</syntaxhighlight>
Output:
<pre>
{{works with|Db2 LUW}}
With SQL only:
<syntaxhighlight lang="sql">
VALUES LENGTH('møøse');
VALUES LENGTHB('møøse');
Line 3,201 ⟶ 3,643:
VALUES LENGTH('J̲o̲s̲é̲');
VALUES LENGTHB('J̲o̲s̲é̲');
</syntaxhighlight>
Output:
<pre>
{{works with|Moscow ML|2.01}}
{{works with|MLton|20061107}}
<syntaxhighlight lang="sml">size "Hello, world!";  (* 13 *)</syntaxhighlight>
===Character Length===
{{works with|Standard ML of New Jersey|SML/NJ|110.74}}
<syntaxhighlight lang="sml">UTF8.size "møøse";  (* 5 *)</syntaxhighlight>
=={{header|Stata}}==
Use '''[https://www.stata.com/help.cgi?f_strlen strlen]''' for byte length, and '''[https://www.stata.com/help.cgi?f_ustrlen ustrlen]''' for the number of Unicode characters in a string.
<syntaxhighlight lang="stata">
di strlen(s)
di ustrlen(s)
47</syntaxhighlight>
=={{header|Stringle}}==
The only current implementation of Stringle uses 8-bit character sets, meaning character and byte length is always the same.
This prints the length of a string from input:
<syntaxhighlight lang="stringle">$ #$</syntaxhighlight>
=={{header|Swift}}==
To count "characters" (Unicode grapheme clusters):
{{works with|Swift|2.x}}
<syntaxhighlight lang="swift">"møøse".characters.count   // 5</syntaxhighlight>
{{works with|Swift|1.2}}
<syntaxhighlight lang="swift">count("møøse")   // 5</syntaxhighlight>
{{works with|Swift|1.0-1.1}}
<syntaxhighlight lang="swift">countElements("møøse")   // 5</syntaxhighlight>
===Character Length===
To count Unicode code points:
{{works with|Swift|2.x}}
<syntaxhighlight lang="swift">"møøse".unicodeScalars.count   // 5</syntaxhighlight>
{{works with|Swift|1.2}}
<syntaxhighlight lang="swift">count("møøse".unicodeScalars)   // 5</syntaxhighlight>
{{works with|Swift|1.0-1.1}}
<syntaxhighlight lang="swift">countElements("møøse".unicodeScalars)   // 5</syntaxhighlight>
===Byte Length===
For length in UTF-8, count the number of UTF-8 code units:
{{works with|Swift|2.x}}
<syntaxhighlight lang="swift">"møøse".utf8.count   // 7</syntaxhighlight>
{{works with|Swift|1.2}}
<syntaxhighlight lang="swift">count("møøse".utf8)   // 7</syntaxhighlight>
{{works with|Swift|1.0-1.1}}
<syntaxhighlight lang="swift">countElements("møøse".utf8)   // 7</syntaxhighlight>
For length in UTF-16, count the number of UTF-16 code units, and multiply by 2:
{{works with|Swift|2.x}}
<syntaxhighlight lang="swift">"møøse".utf16.count * 2   // 10</syntaxhighlight>
{{works with|Swift|1.2}}
<syntaxhighlight lang="swift">count("møøse".utf16) * 2   // 10</syntaxhighlight>
{{works with|Swift|1.0-1.1}}
<syntaxhighlight lang="swift">"møøse".utf16Count * 2   // 10</syntaxhighlight>
=={{header|Symsyn}}==
===Byte Length===
<syntaxhighlight lang="symsyn">
c : 'abcdefgh'
#c []
</syntaxhighlight>
Output:
<pre>
===Byte Length===
Formally, Tcl does not guarantee to use any particular representation for its strings internally (the underlying implementation objects can hold strings in at least three different formats, mutating between them as necessary) so the way to calculate the "byte length" of a string can only be done with respect to some user-selected encoding. This is done this way (for UTF-8):
<syntaxhighlight lang="tcl">string length [encoding convertto utf-8 $theString]</syntaxhighlight>
<!-- Yes, there's <tt>string bytelength</tt>; don't use it. It's deeply wrong-headed and will probably go away in future releases. [[DKF]] -->
Thus, we have these examples:
<syntaxhighlight lang="tcl">set s1 "hello, world"
set s2 "\u304A\u306F\u3088\u3046"
set enc utf-8
Line 3,336 ⟶ 3,785:
$s1 [string length [encoding convertto $enc $s1]]]
puts [format "length of \"%s\" in bytes is %d" \
$s2 [string length [encoding convertto $enc $s2]]]</syntaxhighlight>
===Character Length===
Basic version:
<syntaxhighlight lang="tcl">string length $s</syntaxhighlight>
or more elaborately, needs '''Interpreter''' any 8.X. Tested on 8.4.12.
<syntaxhighlight lang="tcl">
set s1 "hello, world"
set s2 "\u304A\u306F\u3088\u3046"
puts [format "length of \"%s\" in characters is %d" $s1 [string length $s1]]
puts [format "length of \"%s\" in characters is %d" $s2 [string length $s2]]</syntaxhighlight>
=={{header|TI-89 BASIC}}==
The TI-89 uses a fixed 8-bit encoding, so there is no difference between character length and byte length.
<syntaxhighlight lang="ti89b">dim("møøse")</syntaxhighlight>
=={{header|Toka}}==
===Byte Length===
<syntaxhighlight lang="toka">" hello, world" string.getLength .</syntaxhighlight>
=={{header|Trith}}==
===Character Length===
<
===Byte Length===
<
=={{header|TUSCRIPT}}==
===Character Length ===
<syntaxhighlight lang="tuscript">
$$ MODE TUSCRIPT
string="hello, world"
l=LENGTH (string)
PRINT "character length of string '",string,"': ",l
</syntaxhighlight>
Output:
<pre>
character length of string 'hello, world': 12
</pre>
=={{header|UNIX Shell}}==
====Byte Length====
{{works with|Bourne Shell}}
<syntaxhighlight lang="sh">string='hello, world'
length=`expr "x$string" : '.*' - 1`
echo $length # if you want it printed to the terminal</syntaxhighlight>
====With [[Unix|SUSv3]] parameter expansion modifier:====
This returns the byte count in ash/dash, but the character count in bash, ksh, and zsh:
{{works with|Almquist SHell}}
{{works with|Bourne Again SHell}}
{{works with|Korn Shell}}
{{works with|Z SHell}}
<syntaxhighlight lang="sh">string='hello, world'
length=${#string}
echo $length # if you want it printed to the terminal</syntaxhighlight>
=={{header|Vala}}==
===Character Length===
<syntaxhighlight lang="vala">
string s = "Hello, world!";
int characterLength = s.length;
</syntaxhighlight>
=={{header|VBA}}==
=={{header|VBScript}}==
===Byte Length===
<syntaxhighlight lang="vb">LenB(string|varname)</syntaxhighlight>
Returns the number of bytes required to store a string in memory. Returns null if string|varname is null.
===Character Length===
<syntaxhighlight lang="vb">Len(string|varname)</syntaxhighlight>
Returns the length of string|varname. Returns null if string|varname is null.
=={{header|Visual Basic .NET}}==
====Byte Length====
The Encoding.GetByteCount method returns the number of bytes required to encode a .NET string in that encoding (encoding objects can be obtained through read-only static [Shared in VB.NET] properties of the Encoding class).
<syntaxhighlight lang="vbnet">Module Program
Function GetByteLength(s As String, encoding As Text.Encoding) As Integer
Return encoding.GetByteCount(s)
End Function
End Module</syntaxhighlight>
====Character Length====
An alternative implementation is to count the number of UTF-16 surrogate pairs in a string and subtract that number from the number of UTF-16 code units in the string.
<syntaxhighlight lang="vbnet">Module Program
Function GetUTF16CodeUnitsLength(s As String) As Integer
Return s.Length
End Function

' Counts code points by encoding to UTF-32, which uses exactly 4 bytes per code point.
Function GetCodePointsLength(s As String) As Integer
Return GetByteLength(s, Text.Encoding.UTF32) \ 4
End Function
End Module</syntaxhighlight>
====Grapheme Length====
<code>System.Globalization.StringInfo</code> provides a means of enumerating the text elements of a string, where each "text element" is a Unicode grapheme.
<syntaxhighlight lang="vbnet">Module Program
' Wraps an IEnumerator, allowing it to be used as an IEnumerable.
Private Iterator Function AsEnumerable(enumerator As IEnumerator) As IEnumerable
While enumerator.MoveNext()
Yield enumerator.Current
End While
End Function

Function GetGraphemeLength(s As String) As Integer
Dim elements = Globalization.StringInfo.GetTextElementEnumerator(s)
Return AsEnumerable(elements).OfType(Of String).Count()
End Function
End Module</syntaxhighlight>
====Test Code====
The compiler constant <code>PRINT_TESTCASE</code> toggles whether to write the contents of each test case to the console; disable for inputs that may mess with the console.
<syntaxhighlight lang="vbnet">
Module Program
Sub Main()
' ...
End Sub
End Module</syntaxhighlight>
{{out}}
<pre>
bytes (UTF-16) 18
bytes (UTF-32) 36
</pre>
=={{header|V (Vlang)}}==
{{trans|go}}
====Byte Length====
<syntaxhighlight lang="v (vlang)">fn main() {
m := "møøse"
u := "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
j := "J̲o̲s̲é̲"
println("$m.len $m ${m.bytes()}")
println("$u.len $u ${u.bytes()}")
println("$j.len $j ${j.bytes()}")
}</syntaxhighlight>
Output:
<pre>
7 møøse [m, 0xc3, 0xb8, 0xc3, 0xb8, s, e]
28 𝔘𝔫𝔦𝔠𝔬𝔡𝔢 [0xf0, 0x9d, 0x94, 0x98, 0xf0, 0x9d, 0x94, 0xab, 0xf0, 0x9d, 0x94, 0xa6, 0xf0, 0x9d, 0x94, 0xa0, 0xf0, 0x9d, 0x94, 0xac, 0xf0, 0x9d, 0x94, 0xa1, 0xf0, 0x9d, 0x94, 0xa2]
13 J̲o̲s̲é̲ [J, 0xcc, 0xb2, o, 0xcc, 0xb2, s, 0xcc, 0xb2, 0xc3, 0xa9, 0xcc, 0xb2]
</pre>
====Character Length====
<syntaxhighlight lang="v (vlang)">fn main() {
m := "møøse"
u := "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"
j := "J̲o̲s̲é̲"
println("$m.runes().len $m ${m.runes()}")
println("$u.runes().len $u ${u.runes()}")
println("$j.runes().len $j ${j.runes()}")
}</syntaxhighlight>
Output:
<pre>
5 møøse [`m`, `ø`, `ø`, `s`, `e`]
7 𝔘𝔫𝔦𝔠𝔬𝔡𝔢 [`𝔘`, `𝔫`, `𝔦`, `𝔠`, `𝔬`, `𝔡`, `𝔢`]
8 J̲o̲s̲é̲ [`J`, `̲`, `o`, `̲`, `s`, `̲`, `é`, `̲`]
</pre>
=={{header|Wren}}==
===Byte Length===
<syntaxhighlight lang="wren">System.print("møøse".bytes.count)
System.print("𝔘𝔫𝔦𝔠𝔬𝔡𝔢".bytes.count)
System.print("J̲o̲s̲é̲".bytes.count)</
{{out}}
<pre>
7
28
13
</pre>
===Character Length===
<syntaxhighlight lang="wren">System.print("møøse".count)
System.print("𝔘𝔫𝔦𝔠𝔬𝔡𝔢".count)
System.print("J̲o̲s̲é̲".count)</
{{out}}
<pre>
5
7
8
</pre>
===Grapheme Length===
{{libheader|Wren-upc}}
<syntaxhighlight lang="wren">import "./upc" for Graphemes
System.print(Graphemes.clusterCount("møøse"))
System.print(Graphemes.clusterCount("𝔘𝔫𝔦𝔠𝔬𝔡𝔢"))
System.print(Graphemes.clusterCount("J̲o̲s̲é̲"))</syntaxhighlight>
{{out}}
<pre>
5
7
4
</pre>
=={{header|x86 Assembly}}==
The following code uses AT&T syntax and was tested using AS (the portable GNU assembler) under Linux.
<syntaxhighlight lang="x86 assembly">
.data
string: .asciz "Test"
Line 3,628 ⟶ 4,126:
leave
ret
</syntaxhighlight>
=={{header|XPL0}}==
<syntaxhighlight lang="xpl0">
IntOut(0, StrLen("Character length = Byte length = String length = "))</syntaxhighlight>
Output:
<pre>
49
</pre>
=={{header|XSLT}}==
===Character Length===
<syntaxhighlight lang="xslt"><xsl:value-of select="string-length($string)"/>
...
</syntaxhighlight>
=={{header|xTalk}}==
Line 3,651 ⟶ 4,149:
LiveCode fully supports multi-byte Unicode characters since version 7. See the LiveCode section for more information.
<syntaxhighlight lang="xtalk">put the number of characters in "hello, world"</syntaxhighlight>
or
<syntaxhighlight lang="xtalk">put the length of "hello, world"</syntaxhighlight>
===Byte Length===
<syntaxhighlight lang="xtalk">put the number of bytes in "hello, world"</syntaxhighlight>
=={{header|Yorick}}==
===Character Length===
<syntaxhighlight lang="yorick">strlen("hello, world")</syntaxhighlight>
=={{header|Z80 Assembly}}==
===Byte Length===
Code is called as a subroutine, i.e. <code>CALL getStringLength</code>.
<syntaxhighlight lang="z80">
; outputs length to B. HL will point to the last character in the string just before the terminator.
; length is one-indexed and does not include the terminator. A null string will return 0 in B.
getStringLength:
ld b,0                  ;B = length counter
loop_getStringLength:
ld a,(hl)               ;read the current character
or a                    ;is it the null terminator?
ret z                   ;if so, return with the length in B
inc hl ;next char
inc b ;increment the byte count
jr loop_getStringLength</syntaxhighlight>
=={{header|zkl}}==
If your terminal/editor handles UTF-8 (mine doesn't), you don't need the escapes; just put the Unicode characters in quotes (i.e. the editor inserts UTF-8 bytes, which are non-zero).
===Character Length===
<syntaxhighlight lang="zkl">
"\ufeff\u00A2 \u20ac".len() //-->9 "BOM¢ €"</
===Byte Length===
<syntaxhighlight lang="zkl">
"\ufeff\u00A2 \u20ac".len() //-->9
Data(0,Int,"\ufeff\u00A2 \u20ac") //-->Data(9) (bytes)
"J\u0332o\u0332s\u0332e\u0301\u0332".len() //-->14
"\U1D518;\U1D52B;\U1D526;\U1D520;\U1D52C;\U1D521;\U1D522;".len() //-->28</
===Character Length===
UTF-8 characters are counted; modifiers (such as the combining underline) are counted as separate characters.
<syntaxhighlight lang="zkl">
"\ufeff\u00A2 \u20ac".len(8) //-->4 "BOM¢ €"
"\U1000;".len(8) //-->Exception thrown: ValueError(Invalid UTF-8 string)
"\uD800" //-->SyntaxError : Line 2: Bad Unicode constant (\uD800-\uDFFF)
"J\u0332o\u0332s\u0332e\u0301\u0332".len(8) //-->9 "J̲o̲s̲é̲"
"\U1D518;\U1D52B;\U1D526;\U1D520;\U1D52C;\U1D521;\U1D522;".len(8) //-->7 "𝔘𝔫𝔦𝔠𝔬𝔡𝔢"</
[https://en.wikipedia.org/wiki/Comparison_of_programming_languages_%28string_functions%29#length Wikipedia: Comparison of programming languages (string functions)]
=={{header|Zig}}==
<syntaxhighlight lang="zig">const std = @import("std");
fn printResults(alloc: std.mem.Allocator, string: []const u8) !void {
const cnt_codepts_utf8 = try std.unicode.utf8CountCodepoints(string);
// There is no sane and portable extended ASCII, so the best
// we can do is count the bytes and assume regular ASCII.
const cnt_bytes_utf8 = string.len;
const stdout_wr = std.io.getStdOut().writer();
try stdout_wr.print("utf8 codepoints = {d}, bytes = {d}\n", .{ cnt_codepts_utf8, cnt_bytes_utf8 });
const utf16str = try std.unicode.utf8ToUtf16LeWithNull(alloc, string);
const cnt_codepts_utf16 = try std.unicode.utf16CountCodepoints(utf16str);
const cnt_2bytes_utf16 = try std.unicode.calcUtf16LeLen(string);
try stdout_wr.print("utf16 codepoints = {d}, bytes = {d}\n", .{ cnt_codepts_utf16, 2 * cnt_2bytes_utf16 });
}
pub fn main() !void {
var arena_instance = std.heap.ArenaAllocator.init(std.heap.page_allocator);
defer arena_instance.deinit();
const arena = arena_instance.allocator();
const string1: []const u8 = "Hello, world!";
try printResults(arena, string1);
const string2: []const u8 = "møøse";
try printResults(arena, string2);
const string3: []const u8 = "𝔘𝔫𝔦𝔠𝔬𝔡𝔢";
try printResults(arena, string3);
// \u{332} is underscore of previous character, which the browser may not
// copy correctly
const string4: []const u8 = "J\u{332}o\u{332}s\u{332}e\u{301}\u{332}";
try printResults(arena, string4);
}</syntaxhighlight>
{{out}}
<pre>
utf8 codepoints = 13, bytes = 13
utf16 codepoints = 13, bytes = 26
utf8 codepoints = 5, bytes = 7
utf16 codepoints = 5, bytes = 10
utf8 codepoints = 7, bytes = 28
utf16 codepoints = 7, bytes = 28
utf8 codepoints = 9, bytes = 14
utf16 codepoints = 9, bytes = 18
</pre>