Implicit type conversion
Some programming languages have implicit type conversion. Type conversion is also known as coercion.
For example:
COMPL z := 1;
Here, in the programming language ALGOL 68, the assignment ":=" implicitly converts the integer 1 to a complex number.
The alternative would be to explicitly convert a value from one type to another, using a function or some other mechanism (e.g. an explicit cast).
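For comparison, a minimal C sketch (illustrative only; the names are invented for this page) showing an implicit conversion alongside an explicit cast:
#include <stdio.h>

int main(void) {
    int i = 1;
    double d = i;        /* implicit conversion: int widened to double */
    int j = (int)3.75;   /* explicit conversion (cast): truncates to 3 */
    printf("%f %d\n", d, j);
    return 0;
}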
- Task
Demonstrate various type conversions and give an example of an implicit type conversion path from the smallest possible variable size to the largest possible variable size. (Where the size of the underlying variable's data strictly increases).
In strongly typed languages some types are actually mutually incompatible. In this case the language may have disjoint type conversion paths, or even branching type conversion paths. Give an example if your language does this.
Languages that don't support any implicit type conversion can be added to the /Omit category.
Indicate if your language supports user defined type conversion definitions and give an example of such a definition. (E.g. define an implicit type conversion from real to complex numbers, or from char to an array of char of length 1.)
6502 Assembly
There is no implicit type conversion in the "modern" sense. However, you are free to interpret any bit pattern to mean whatever you want, when you want. A few language constructs help with this.
The X and Y registers can be used as loop counters, or as an indexed offset into memory. It's very common for both to be true in the same procedure.
memcpy:
LDA ($00),y ;load from (the address stored at $0000) + y
STA ($02),y ;write to (the address stored at $0002) + y
iny
bne memcpy ;loop until y = 0
rts
Any 16-bit value stored at a pair of consecutive zero-page memory addresses can be treated as a pointer to memory. The above example demonstrates this with the use of ($nn),y.
68000 Assembly
If an instruction operand is smaller than the instruction "length", then it gets typecast to that length.
For data registers, the operands are not sign-extended.
MOVE.L #$FFFF,D0 ;MOVE.L #$0000FFFF,D0
MOVE.W #$23,D1 ;MOVE.W #$0023,D1
MOVE.W #$80,D2 ;MOVE.W #$0080,D2
The only exception to this is the MOVEQ instruction, which does sign-extend the value.
MOVEQ #$34,D0 ;MOVE.L #$00000034,D0
MOVEQ #-1,D1 ;MOVE.L #$FFFFFFFF,D1
Address registers mostly work the same way, unless you use MOVEA.W, in which case the address is sign-extended.
MOVEA.L #$A01000,A0 ;assembled as MOVE.L #$00A01000,A0
MOVEA.W #$8000,A1 ;same result as MOVEA.L #$FFFF8000,A1
MOVEA.W #$7FFF,A2 ;same result as MOVEA.L #$00007FFF,A2
ALGOL 68
File: implicit_type_conversion.a68
#!/usr/bin/a68g --script #
# -*- coding: utf-8 -*- #
main:(
# a representative sample of builtin types #
BOOL b; INT bool width = 1;
BITS bits; LONG BITS lbits;
BYTES bytes; LONG BYTES lbytes; # lbytes := bytes is not illegal #
[bits width]BOOL array bool;
[long bits width]BOOL long array bool;
CHAR c; INT char width = 1;
STRING array char;
SHORT SHORT INT ssi; SHORT INT si; INT i; LONG INT li; LONG LONG INT lli;
SHORT SHORT REAL ssr; SHORT REAL sr; REAL r; LONG REAL lr; LONG LONG REAL llr;
SHORT SHORT COMPL ssz; SHORT COMPL sz; COMPL z; LONG COMPL lz; LONG LONG COMPL llz;
INT long long compl width = 2 * long long real width;
STRUCT (BOOL b, CHAR c, INT i, REAL r, COMPL z)cbcirz; # NO implicit casting #
REF []INT rai;
FORMAT long long real fmt = $g(-0,long long real width-2)$;
FORMAT long long compl fmt = $f(long long real fmt)"+"f(long long real fmt)"i"$;
# type conversion starting points #
b := TRUE;
c := "1";
ssi := SHORT SHORT 1234; i := 1234;
# a representative sample of implied casts for subtypes of personality INT/REAL/COMPL #
si:=ssi;
i:=ssi; i:=si;
li:=ssi; li:=si; li:=i;
lli:=ssi;lli:=si; lli:=i; lli:=li;
ssr:=ssi;
sr:=ssr; sr:=ssi; sr:=si;
r:=ssr; r:=sr; r:=ssi; r:=si; r:=i;
lr:=ssr; lr:=sr; lr:=r; lr:=ssi; lr:=si; lr:=i; lr:=li;
llr:=ssr;llr:=sr; llr:=r; llr:=lr; llr:=ssi;llr:=si; llr:=i; llr:=li; llr:=i;
ssz:=ssr;ssz:=ssi;
sz:=ssz; sz:=ssr; sz:=sr; sz:=ssi; sz:=si;
z:=ssz; z:=sz; z:=ssr; z:=sr; z:=r; z:=ssi; z:=si; z:=i;
lz:=ssz; lz:=sz; lz:=z; lz:=ssr; lz:=sr; lz:=r; lz:=lr; lz:=ssi; lz:=si; lz:=i; lz:=li;
llz:=ssz;llz:=sz; llz:=z; llz:=lz; llz:=ssr;llz:=sr; llz:=r; llz:=lr; llz:=r; llz:=ssi;llz:=si; llz:=i; llz:=li; llz:=lli;
# conversion branch SHORT SHORT INT => LONG LONG COMPL #
# a summary result, using the longest sizeof increasing casting path #
printf((long long compl fmt,llz:=(llr:=(lr:=(i:=(si:=ssi)))),$
$l" was increasingly cast"$,
$" from "g(-0)$, long long compl width, long long real width, long real width,
int width, short int width, short short int width, $" digits"$,
$" from "g(-0)l$,ssi ));
# conversion branch BITS => []BOOL #
bits := 16rf0f0ff00;
lbits := bits;
printf(($g$,"[]BOOL := LONG BITS := BITS - implicit widening: ",array bool := bits, $l$));
# conversion branch BYTES => []CHAR #
bytes := bytes pack("0123456789ABCDEF0123456789abcdef");
long array bool := LONG 2r111;
printf(($g$,"[]CHAR := LONG BYTES := BYTES - implicit widening: ",array char := bytes, $l$));
# deproceduring conversion branch PROC PROC PROC INT => INT #
PROC pi = INT: i;
PROC ppi = PROC INT: pi;
PROC pppi = PROC PROC INT: ppi;
printf(($g$,"PROC PROC PROC INT => INT - implicit deprocceduring^3: ",pppi, $l$));
# dereferencing conversion branch REF REF REF INT => INT #
REF INT ri := i;
REF REF INT rri := ri;
REF REF REF INT rrri := rri;
printf(($g$,"REF REF REF INT => INT - implicit dereferencing^3: ",rrri, $l$));
# rowing conversion branch INT => []INT => [,]INT => [,,]INT #
# a representative sample of implied casts, type pointer #
rai := ai; # starts at the first element of ai #
rai := i; # an array of length 1 #
FLEX[0]INT ai := i;
FLEX[0,0]INT aai := ai;
FLEX[0,0,0]INT aaai := aai;
printf(($g$,"INT => []INT => [,]INT => [,,]INT - implicit rowing^3: ",aaai, $l$));
# uniting conversion branch UNION(VOID, INT) => UNION(VOID,INT,REAL) => UNION(VOID,INT,REAL,COMPL) #
UNION(VOID,INT) ui := i;
UNION(VOID,INT,REAL) uui := ui;
UNION(VOID,INT,REAL,COMPL) uuui := uui;
printf(($g$,"INT => UNION(VOID, INT) => UNION(VOID,INT,REAL,COMPL) - implicit uniting^3: ",(uuui|(INT i):i), $l$));
SKIP
)
- Output:
1234.0000000000000000000000000000000000000000000000000000000000000+.0000000000000000000000000000000000000000000000000000000000000i was increasingly cast from 126 from 63 from 28 from 10 from 10 from 10 digits from 1234
[]BOOL := LONG BITS := BITS - implicit widening: TTTTFFFFTTTTFFFFTTTTTTTTFFFFFFFF
[]CHAR := LONG BYTES := BYTES - implicit widening: 0123456789ABCDEF0123456789abcdef
PROC PROC PROC INT => INT - implicit deprocceduring^3: +1234
REF REF REF INT => INT - implicit dereferencing^3: +1234
INT => []INT => [,]INT => [,,]INT - implicit rowing^3: +1234
INT => UNION(VOID, INT) => UNION(VOID,INT,REAL,COMPL) - implicit uniting^3: +1234
Applesoft BASIC
There are 7 types: reals, integers, strings, defined functions, floating point arrays, integer arrays, and arrays of strings. Implicitly, there are only floating point (real) values and strings.
100 LET M$ = CHR$ (13)
110 PRINT M$"SIMPLE VARIABLES"M$
120 LET AB = 6728.0
130 PRINT "FLOATING POINT: ";AB
140 LET AB% = 6728.0
150 PRINT "INTEGER: ";AB%
160 LET AB$ = "HELLO"
170 PRINT "STRING: ";AB$
180 DEF FN AB(AB) = AB + 6728.0 - VAL (".")
190 PRINT "DEFINED FUNCTION: "; FN AB(0)
200 PRINT M$"ARRAY VARIABLES"M$
210 DIM AB(3,12,7)
220 LET AB(3,12,7) = 6728.0
230 PRINT "FLOATING POINT: ";AB(3,12,7)
240 DIM AB%(3,12,7)
250 LET AB%(3,12,7) = 6728.0
260 PRINT "INTEGER: ";AB%(3,12,7)
270 DIM AB$(3,12,7)
280 LET AB$(3,12,7) = "HELLO"
290 PRINT "STRING: ";AB$(3,12,7)
300 PRINT M$"TYPE CONVERSIONS"M$
310 LET AB$ = STR$(6728.0)
320 PRINT "REAL TO STRING CONVERSION: ";AB$
330 LET AB = VAL("6728.0")
340 PRINT "STRING TO REAL CONVERSION: ";AB
AWK
# syntax: GAWK -f IMPLICIT_TYPE_CONVERSION.AWK
BEGIN {
n = 1 # number
s = "1" # string
a = n "" # number coerced to string
b = s + 0 # string coerced to number
print(n,s,a,b)
print(("19" 91) + 4) # string and number concatenation
c = "10e1"
print(c,c+0)
exit(0)
}
- Output:
1 1 1 1
1995
10e1 100
C
#include <stdio.h>
int main() {
/* a representative sample of builtin types */
unsigned char uc; char c;
enum{e1, e2, e3}e123;
short si; int i; long li;
unsigned short su; unsigned u; unsigned long lu;
float sf; float f; double lf; long double llf;
union {char c; unsigned u; int i; float f; }ucuif; /* manual casting only */
struct {char c; unsigned u; int i; float f; }scuif; /* manual casting only */
int ai[99];
int (*rai)[99];
int *ri;
uc = '1';
/* a representative sample of implied casts for subtypes of personality int/float */
c=uc;
si=uc; si=c;
su=uc; su=c; su=si;
i=uc; i=c; i=si; i=su;
e123=i; i=e123;
u=uc; u=c; u=si; u=su; u=i;
li=uc; li=c; li=si; li=su; li=i; li=u;
lu=uc; lu=c; lu=si; lu=su; lu=i; lu=u; lu=li;
sf=uc; sf=c; sf=si; sf=su; sf=i; sf=u; sf=li; sf=lu;
f=uc; f=c; f=si; f=su; f=i; f=u; f=li; f=lu; f=sf;
lf=uc; lf=c; lf=si; lf=su; lf=i; lf=u; lf=li; lf=lu; lf=sf; lf=f;
llf=uc;llf=c;llf=si;llf=su;llf=i;llf=u;llf=li;llf=lu;llf=sf;llf=f;llf=lf;
/* ucuif = i; no implied cast; try: iucuif.i = i */
/* ai = i; no implied cast; try: rai = &i */
/* a representative sample of implied casts, type pointer */
rai = ai; /* starts at the first element of ai */
ri = ai; /* points to the first element of ai */
/* a summary result, using the longest sizeof increasing casting path */
printf("%LF was increasingly cast from %d from %d from %d from %d from %d bytes from '%c'\n",
llf=(lf=(i=(si=c))), sizeof llf, sizeof lf, sizeof i, sizeof si,sizeof c, c);
}
- Output:
49.000000 was increasingly cast from 12 from 8 from 4 from 2 from 1 bytes from '1'
C#
C# has built-in implicit conversions for primitive numerical types. Any value can be implicitly converted to a value of a larger type. Many non-primitive types also have implicit conversion operators defined, for instance the BigInteger and Complex types.
using System.Numerics; // required for BigInteger and Complex

byte aByte = 2;
short aShort = aByte;
int anInt = aShort;
long aLong = anInt;
float aFloat = 1.2f;
double aDouble = aFloat;
BigInteger b = 5;
Complex c = 2.5; // 2.5 + 0i
Users are able to define implicit (and also explicit) conversion operators. To define a conversion from A to B, the operator must be defined inside either type A or type B. Therefore, we cannot define a conversion from char to an array of char.
using System;

public class Person
{
//Define an implicit conversion from string to Person
public static implicit operator Person(string name) => new Person { Name = name };
public string Name { get; set; }
public override string ToString() => $"Name={Name}";
public static void Main() {
Person p = "Mike";
Console.WriteLine(p);
}
}
- Output:
Name=Mike
C++
C++ supports almost all of the same implicit conversions as the C example above. However it does not allow implicit conversions to enums and is more strict with some pointer conversions. C++ allows implicit conversions on user defined types. The example below shows implicit conversions between polar and Cartesian points.
#include <iostream>
#include <math.h>
struct PolarPoint;
// Define a point in Cartesian coordinates
struct CartesianPoint
{
double x;
double y;
// Declare an implicit conversion to a polar point
operator PolarPoint();
};
// Define a point in polar coordinates
struct PolarPoint
{
double rho;
double theta;
// Declare an implicit conversion to a Cartesian point
operator CartesianPoint();
};
// Implement the Cartesian to polar conversion
CartesianPoint::operator PolarPoint()
{
return PolarPoint
{
sqrt(x*x + y*y),
atan2(y, x)
};
}
// Implement the polar to Cartesian conversion
PolarPoint::operator CartesianPoint()
{
return CartesianPoint
{
rho * cos(theta),
rho * sin(theta)
};
}
int main()
{
// Create a Cartesian point
CartesianPoint cp1{2,-2};
// Implicitly convert it to a polar point
PolarPoint pp1 = cp1;
// Implicitily convert it back to a Cartesian point
CartesianPoint cp2 = pp1;
std::cout << "rho=" << pp1.rho << ", theta=" << pp1.theta << "\n";
std::cout << "x=" << cp2.x << ", y=" << cp2.y << "\n";
}
- Output:
rho=2.82843, theta=-0.785398
x=2, y=-2
D
This covers a large sample of built-in types and a few library-defined types.
void main() {
import std.stdio, std.typetuple, std.variant, std.complex;
enum IntEnum : int { A, B }
union IntFloatUnion { int x; float y; }
struct IntStruct { int x; }
class ClassRef {}
class DerivedClassRef : ClassRef {}
alias IntDouble = Algebraic!(int, double);
alias ComplexDouble = Complex!double;
// On a 64 bit system size_t and ptrdiff_t are twice as large,
// so this changes a few of the implicit assignment results.
writeln("On a ", size_t.sizeof * 8, " bit system:\n");
// Represented as strings so size_t prints as "size_t"
// instead of uint/ulong.
alias types = TypeTuple!(
`IntEnum`, `IntFloatUnion`, `IntStruct`,
`bool`,
`char`, `wchar`, `dchar`,
`ubyte`, `ushort`, `uint`, `ulong`, /*`ucent`,*/
`byte`, `short`, `int`, `long`, /*`cent`,*/
`size_t`, `hash_t`, `ptrdiff_t`,
`float`, `double`, `real`,
`int[2]`, `int[]`, `int[int]`,
`int*`, `void*`, `ClassRef`, `DerivedClassRef`,
`void function()`, `void delegate()`,
`IntDouble`, `ComplexDouble`,
);
foreach (T1; types) {
mixin(T1 ~ " x;");
write("A ", T1, " can be assigned with: ");
foreach (T2; types) {
mixin(T2 ~ " y;");
static if (__traits(compiles, x = y))
write(T2, " ");
}
writeln;
}
writeln;
// Represented as strings so 1.0 prints as "1.0" instead of "1."
alias values = TypeTuple!(
`true`, `'x'`, `"hello"`, `"hello"w`, `"hello"d`,
`0`, `255`, `1L`, `2.0f`, `3.0`, `4.0L`, `10_000_000_000L`,
`[1, 2]`, `[3: 4]`,
`void*`, `null`,
);
foreach (T; types) {
mixin(T ~ " x;");
write("A ", T, " can be assigned with value literal(s): ");
foreach (y; values) {
static if (__traits(compiles, x = mixin(y)))
write(y, " ");
}
writeln;
}
// Few extras:
int[] a1;
const int[] a2 = a1; // OK.
// immutable int[] a3 = a1; // Not allowed.
// immutable int[] a4 = a2; // Not allowed.
int[int] aa1;
const int[int] aa2 = aa1; // OK.
//immutable int[int] aa3 = aa1; // Not allowed.
//immutable int[int] aa4 = aa2; // Not allowed.
void foo() {}
void delegate() f1 = &foo; // OK.
void bar() pure nothrow @safe {}
void delegate() f2 = &bar; // OK.
//void delegate() pure f3 = &foo; // Not allowed.
//void delegate() nothrow f4 = &foo; // Not allowed.
//void delegate() @safe f5 = &foo; // Not allowed.
static void spam() {}
void function() f6 = &spam; // OK.
//void function() f7 = &foo; // Not allowed.
}
- Output:
On a 32 bit system: A IntEnum can be assigned with: IntEnum A IntFloatUnion can be assigned with: IntFloatUnion A IntStruct can be assigned with: IntStruct A bool can be assigned with: bool A char can be assigned with: bool char ubyte byte A wchar can be assigned with: bool char wchar ubyte ushort byte short A dchar can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t A ubyte can be assigned with: bool char ubyte byte A ushort can be assigned with: bool char wchar ubyte ushort byte short A uint can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t A ulong can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t A byte can be assigned with: bool char ubyte byte A short can be assigned with: bool char wchar ubyte ushort byte short A int can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t A long can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t A size_t can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t A hash_t can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t A ptrdiff_t can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t A float can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t float double real A double can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t float double real A real can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t float double real A int[2] can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t int[2] int[] A int[] can be assigned with: int[2] int[] A int[int] can be assigned with: int[int] A int* can be assigned with: int* A void* can be assigned with: int* void* void function() A ClassRef can be assigned with: ClassRef DerivedClassRef A DerivedClassRef can be assigned with: DerivedClassRef A void function() can be assigned with: void function() A void delegate() can be assigned with: void delegate() A IntDouble can be assigned with: int ptrdiff_t double IntDouble A ComplexDouble can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t float double real ComplexDouble A IntEnum can be assigned with value literal(s): A IntFloatUnion can be assigned with value literal(s): A IntStruct can be assigned with value literal(s): A bool can be assigned with value literal(s): true 0 1L A char can be assigned with value literal(s): true 'x' 0 255 1L A wchar can be assigned with value literal(s): true 'x' 0 255 1L A dchar can be assigned with value literal(s): true 'x' 0 255 1L A ubyte can be assigned with value literal(s): true 'x' 0 255 1L A ushort can be assigned with value literal(s): true 'x' 0 255 1L A uint can be assigned with value literal(s): true 'x' 0 255 1L A ulong can be assigned with value literal(s): true 'x' 0 255 1L 10_000_000_000L A byte can be assigned with value literal(s): true 'x' 0 1L A short can be assigned with value literal(s): true 'x' 0 255 1L A int can be assigned with value 
literal(s): true 'x' 0 255 1L A long can be assigned with value literal(s): true 'x' 0 255 1L 10_000_000_000L A size_t can be assigned with value literal(s): true 'x' 0 255 1L A hash_t can be assigned with value literal(s): true 'x' 0 255 1L A ptrdiff_t can be assigned with value literal(s): true 'x' 0 255 1L A float can be assigned with value literal(s): true 'x' 0 255 1L 2.0f 3.0 4.0L 10_000_000_000L A double can be assigned with value literal(s): true 'x' 0 255 1L 2.0f 3.0 4.0L 10_000_000_000L A real can be assigned with value literal(s): true 'x' 0 255 1L 2.0f 3.0 4.0L 10_000_000_000L A int[2] can be assigned with value literal(s): true 'x' 0 255 1L [1, 2] null A int[] can be assigned with value literal(s): [1, 2] null A int[int] can be assigned with value literal(s): [3: 4] null A int* can be assigned with value literal(s): null A void* can be assigned with value literal(s): null A ClassRef can be assigned with value literal(s): null A DerivedClassRef can be assigned with value literal(s): null A void function() can be assigned with value literal(s): null A void delegate() can be assigned with value literal(s): null A IntDouble can be assigned with value literal(s): 0 255 3.0 A ComplexDouble can be assigned with value literal(s): true 'x' 0 255 1L 2.0f 3.0 4.0L 10_000_000_000L
Delphi
Delphi has implicit conversions for primitive numerical types. Any value can be implicitly converted to a value of a larger type. Upsize conversions:
ValueByte : Byte := 2; // (8 bits size)
ValueWord : Word := ValueByte; // 2 (16 bits size)
ValueDWord : DWord := ValueWord; // 2 (32 bits size)
ValueUint64 : Uint64 := ValueWord; // 2 (64 bits size)
Unsigned-signed conversions:
ValueDWord := 4294967295; // 4294967295 (Max DWord value (unsigned))
ValueInteger : Integer := ValueDWord; // -1 (two complement conversion) (signed)
ValueDWord := ValueInteger; // 4294967295 (convert back, unsigned)
Unsigned variables will not accept negative literal values:
ValueByte := -1; // this does not work and raises an error
ValueByte := byte(-1); // this works and assigns 255 (max Byte value)
Floating-point variables can accept integer values, though precision may be lost for large integers:
ValueDWord := 4294967295; // 4294967295 (Max DWord value (unsigned))
ValueDouble : Double := ValueDWord; // 4.29496729500000E+0009
Integers cannot accept floating-point values implicitly:
ValueDouble := 1.6;
ValueByte := ValueDouble; // this does not work and raises an error
ValueByte := Trunc(ValueDouble); // this works and assigns 1
ValueByte := Round(ValueDouble); // this works and assigns 2
Strings can accept chars, but not the other way around:
ValueChar: Char := #10;
ValuesString: String := ValueChar;
Booleans cannot be implicitly converted to or from any other type, except themselves:
ValueBoolean: Boolean := True; // this works and assigns True
ValueBoolean: Boolean := 1; // this does not work and raises an error
ValueByte := ValueBoolean; // this does not work and raises an error
ValueByte := Byte(ValueBoolean); // this works and assigns 1
ValueBoolean := Boolean(10); // this works and assigns True
ValueBoolean := Boolean(-1); // this works and assigns True
ValueBoolean := Boolean( 0); // this works and assigns False
Variant types can implicitly convert to and from almost all types, and convert back, if possible:
ValueVariant: Variant := -20;
ValueVariant := 3.1416;
ValueVariant := 'string';
ValueVariant := #10;
ValueVariant := True;
...
Classes and records can implement an implicit operator:
program Implicit_type_conversion;
{$APPTYPE CONSOLE}
{$R *.res}
uses
System.SysUtils;
type
TFloatInt = record
value: Double;
class operator Implicit(a: Double): TFloatInt;
class operator Implicit(a: Integer): TFloatInt;
class operator Implicit(a: TFloatInt): Double;
class operator Implicit(a: TFloatInt): Integer;
class operator Implicit(a: TFloatInt): string;
class operator Implicit(a: string): TFloatInt;
end;
{ TFloatInt }
class operator TFloatInt.Implicit(a: Double): TFloatInt;
begin
Result.value := a;
end;
class operator TFloatInt.Implicit(a: Integer): TFloatInt;
begin
Result.value := a;
end;
class operator TFloatInt.Implicit(a: TFloatInt): Double;
begin
Result := a.value;
end;
class operator TFloatInt.Implicit(a: TFloatInt): Integer;
begin
Result := Round(a.value);
end;
class operator TFloatInt.Implicit(a: string): TFloatInt;
begin
Result.value := StrToFloatDef(a, 0.0);
end;
class operator TFloatInt.Implicit(a: TFloatInt): string;
begin
Result := FloatToStr(a.value);
end;
procedure Print(s: string);
begin
Writeln(s);
end;
var
val:TFloatInt;
valInt:Integer;
begin
// implicit from double
val := 3.1416;
// implicit to string
Print(val); // 3.1416
// implicit to integer
valInt := val;
Writeln(valInt); // 3
// implicit from integer
val := valInt;
// implicit to string
Print(val); // 3
readln;
end.
Déjà Vu
The only implicit conversion currently permitted is boolean to number:
<1:1> #interactive session
<2:1> !. + 3 true #boolean true is equal to 1
4
<3:1> !. * 2 false #boolean false is equal to 0
0
EasyLang
Numbers are automatically converted to strings when used with a string variable.
string$ = 32
print string$
Same goes for arrays of strings, when they are set to an array of numbers.
stringArray$[] = [ 16 32 64 ]
print stringArray$[1]
FreeBASIC
Var declares a variable whose type is implied from the initializer expression. It is illegal to specify an explicit type in a Var declaration. The initializer expression can be either a constant or any variable of any type.
Since the type of the variable is inferred from what you assign into it, it's helpful to know how literals work. Any literal number without a decimal point defaults to Integer. A literal number with a decimal point defaults to Double.
All ZString expressions, including string literals and dereferenced ZString Ptrs, will be given the String variable type.
Var a = Cast(Byte, 1)
Var b = Cast(Short, 1)
Var c = Cast(Integer, 1)
Var d = Cast(Longint, 1)
Var au = Cast(Ubyte, 1)
Var bu = Cast(Ushort, 1)
Var cu = Cast(Uinteger, 1)
Var du = Cast(Ulongint, 1)
Var e = Cast(Single, 1.0)
Var f = Cast(Double, 1.0)
Var g = @c '' integer ptr
Var h = @a '' byte ptr
Var s2 = "hello" '' var-len string
Var ii = 6728 '' implicit integer
Var id = 6728.0 '' implicit double
Print "Byte: "; a
Print "Short: "; b
Print "Integer: "; c
Print "Longint: "; d
Print "UByte: "; au
Print "UShort: "; bu
Print "UInteger: "; cu
Print "ULongint: "; du
Print "Single: "; e
Print "Double: "; f
Print "Integer Pointer: "; g
Print "Byte Pointer: "; h
Print "Variable String: "; s2
Print
Print "Integer: "; ii
Print "Double: "; id
- Output:
Byte: 1
Short: 1
Integer: 1
Longint: 1
UByte: 1
UShort: 1
UInteger: 1
ULongint: 1
Single: 1
Double: 1
Integer Pointer: 1375536
Byte Pointer: 1375547
Variable String: hello

Integer: 6728
Double: 6728
Free Pascal
See Delphi
Furor
Strictly speaking, Furor has no implicit conversion abilities at all. It has a wide variety of explicit conversions, but no implicit ones. However, since all Furor variables are, technically speaking, "unions", in most cases this "strictness" is easy to bypass.
Go
Go is a very strongly typed language and, strictly speaking, doesn't permit any implicit conversions at all - even 'widening' numeric conversions or conversions between types with the same underlying type. Nor does it support user-defined conversions.
Consequently, you always need to use type conversions when dealing with expressions, assignments etc. involving mixed types, otherwise the code won't compile.
However, in practice, this is considerably less onerous than it sounds because of the use of 'untyped' constants which (whilst having a default type) mould themselves to whatever type is needed (consistent with their representation) and so give the same 'feel' as an implicit conversion.
There are untyped boolean, rune, integer, floating-point, complex and string constants. The most notable examples of untyped constants are literals of the aforementioned types, for example (and in the same order) :
true, 'a', 1, 1.0, 1 + 1i, "a"
In a context in which a typed value is needed and in the absence of any other type information, these would assume their default types which in the same order are:
bool, rune, int, float64, complex128, string
Otherwise, they would adopt whatever type is needed to enable a representable variable declaration, assignment or expression to compile.
Some simple examples may help to make this clearer.
package main
import "fmt"
func main() {
// 1 and 2 are untyped integer constants with a default type of int.
// Here a local variable 'i' is created and explicitly given the type int32.
// 1 implicitly assumes this type which is consistent with its representation.
var i int32 = 1
fmt.Printf("i : type %-7T value %d\n", i, i)
// Here a local variable 'j' is created and implicitly given the type int.
// In the absence of any type information, 2 adopts its default type of int
// and the type of 'j' is therefore inferred to be 'int' also.
j := 2
fmt.Printf("j : type %-7T value %d\n", j, j)
// Here 'k' is declared to be an int64 variable and it can only therefore be initialized
// with an expression of the same type. As 'i' and 'j' have respective types of int32
// and int, they need to be explicitly converted to int64 so that, first the addition and
// second the assignment compile.
var k int64 = int64(i) + int64(j)
fmt.Printf("k : type %-7T value %d\n", k, k)
// 4.0 is an untyped floating point constant with a default type of float64. However,
// (unusually in my experience of other strongly typed languages) it can also represent
// an integer type because it happens to be a whole number.
// Here 'l' is declared to be an int8 variable and it can only therefore be initialized
// with an expression of the same type. As noted above 4.0 can represent this type and
// so the following compiles fine.
var l int8 = 4.0
fmt.Printf("l : type %-7T value %d\n", l, l)
// Here 'm' is created and implicitly given the type float64 because the expression on
// the RHS is of that type. Note that 'l' needs to be converted to this type before it
// can be added to the untyped floating point constants 0.3 and 0.7.
m := 0.3 + float64(l) + 0.7
fmt.Printf("m : type %-7T value %g\n", m, m)
}
- Output:
i : type int32 value 1
j : type int value 2
k : type int64 value 3
l : type int8 value 4
m : type float64 value 5
Idris
Idris provides the "implicit" keyword which enables the implicit conversion from one type to another.
implicit
boolToInt : Bool -> Int
boolToInt True = 1
boolToInt False = 0
one : Int
one = 1
two : Int
two = one + True
J
Overview: Types are viewed as a necessary evil - where possible mathematical identities are given precedence over the arbitrariness of machine representation.
J has 4 "static types" (noun, verb, adverb, conjunction). There are almost no implicit conversions between these types (but a noun can be promoted to a constant verb in certain static contexts or a noun representation of a verb can be placed in a context which uses that definition to perform the corresponding operation).
Translating from "traditional english" to "contemporary computer science" nomenclature: Nouns are "data", verbs are "functions", adverbs and conjunctions are "metafunctions".
Nouns break down into four disjoint collections of subtypes: boxed, literal, numeric and symbolic (which is rarely used). Most of J's implicit conversions happen within the first three subtypes. (And J supports some "extra conversions" between these types in some cases where no values are involved. For example a length 0 list which contains no characters (literals) may be used as a list which contains no numbers (numerics)).
There is one type of box, three types of literals (8 bit wide, 16 bit wide and 32 bit wide -- each of which are typically interpreted as unicode by the OS), and a variety of types of numerics.
Sparse arrays are also (partially) supported and treated internally as distinct datatypes, implemented under the covers as a sequence of arrays (one to indicate which indices have values, and another to hold the corresponding values, and also a default value to fill in the rest).
The primary implicit type conversion in J applies to numeric values.
In particular, J tries to present numeric values as "analytic"; that is, numeric values which are "the same" should be presented to the user (J programmer) as "the same" in as many different contexts as is feasible, irrespective of their representation in the computer's model or layout in memory. So, for example, on a 32-bit machine, `2147483647` (or `(2^31)-1`) is the largest value a signed integer, which is stored in 4 bytes, can represent; in a 32 bit J implementation, incrementing this value (adding 1) causes the underlying representation to switch to an IEEE double-precision floating point number. In a 64 bit J implementation, `9223372036854775807` (or `(2^63)-1`) is the largest value which can be represented as a fixed width integer.
In other words `1+(2^31)-1` doesn't overflow: it represents `2^31` exactly (using double the memory on a 32 bit J implementation: 8 bytes). Similar comments apply to the two varieties of character values (ASCII and Unicode), though the implications are more straightforward and less interesting.
Having said all that, because of the potential performance penalties involved, J does not stretch this abstraction too far. For example, numbers will never be automatically promoted to the (available, but expensive) arbitrary precision format, nor will values be automatically "demoted" (automatic demotion, paired with automatic promotion, has the potential to cause cycles of expansion and contraction during calculation of intermediate values; this, combined with J's homogeneous array-oriented nature, which requires an entire array to be promoted/demoted along with any one of its values, means including automatic demotion would probably hurt programs' performance more often than it benefited them.)
[And, though it is unrelated to type conversion, note that hiding the details of representation also requires J to treat comparison in a tolerant fashion: that is, floating point values are considered identical if they are equal up to some epsilon (2^-44 by default) times the value with the larger magnitude, and character values are identical if their code points are equal. Intolerant or exact comparison is available, though, should that be needed.]
The rich j datatypes:
datatype NB. data type identification verb
3 : 0
n=. 1 2 4 8 16 32 64 128 1024 2048 4096 8192 16384 32768 65536 131072
t=. '/boolean/literal/integer/floating/complex/boxed/extended/rational'
t=. t,'/sparse boolean/sparse literal/sparse integer/sparse floating'
t=. t,'/sparse complex/sparse boxed/symbol/unicode'
(n i. 3!:0 y) pick <;._1 t
)
NB. examples of the data types
[A =: 0 1 ; 0 1 2 ; (!24x) ; 1r2 ; 1.2 ; 1j2 ; (<'boxed') ; (s:'`symbol') ; 'literal' ; (u: 16b263a)
┌───┬─────┬────────────────────────┬───┬───┬───┬───────┬───────┬───────┬───┐
│0 1│0 1 2│620448401733239439360000│1r2│1.2│1j2│┌─────┐│`symbol│literal│☺│
│ │ │ │ │ │ ││boxed││ │ │ │
│ │ │ │ │ │ │└─────┘│ │ │ │
└───┴─────┴────────────────────────┴───┴───┴───┴───────┴───────┴───────┴───┘
datatype&.>A
┌───────┬───────┬────────┬────────┬────────┬───────┬─────┬──────┬───────┬───────┐
│boolean│integer│extended│rational│floating│complex│boxed│symbol│literal│unicode│
└───────┴───────┴────────┴────────┴────────┴───────┴─────┴──────┴───────┴───────┘
[I =: =i.4 NB. Boolean matrix
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
datatype I
boolean
$. I NB. sparse matrix
0 0 │ 1
1 1 │ 1
2 2 │ 1
3 3 │ 1
datatype $. I
sparse boolean
(+ $.)I NB. hook adds data to sparse version of data resulting in sparse
0 0 │ 2
1 1 │ 2
2 2 │ 2
3 3 │ 2
J has verbs causing explicit conversion. Some appear in the above examples.
J's lexical notation lets us specify the datatype of a constant, as demonstrated in these samples. The Extended and Rational Arithmetic section of the J dictionary (DOJ) explains the fundamental implicit conversions.
Various primitive verbs produce (exact) rational results if the argument(s) are rational; non-rational verbs produce (inexact) floating point or complex results when applied to rationals, some verbs which produce irrational results may instead produce rational results when it's possible to do so accurately and the verb's arguments are rational (or are values which can be promoted to rational).
(For example, %:y is rational if the atoms of y are perfect squares; ^0r1 is floating point.) The quotient of two extended integers is an extended integer (if evenly divisible) or rational (if not). Comparisons involving two rationals are exact. Dyadic verbs (e.g. + - * % , = <) that require argument type conversions do so according to the following table:
   | B I X Q D Z
---+------------------
 B | B I X Q D Z        B - bit
 I | I I X Q D Z        I - integer
 X | X X X Q D Z        X - extended integer
 Q | Q Q Q Q D Z        Q - rational
 D | D D D D D Z        D - floating point
 Z | Z Z Z Z Z Z        Z - complex
Java
The Java Language Specification includes several varieties of implicit type conversion. This code illustrates:
- widening conversions of primitives
- boxing and unboxing conversions
- string conversions (with the + operator)
public class ImplicitTypeConversion{
public static void main(String...args){
System.out.println( "Primitive conversions" );
byte by = -1;
short sh = by;
int in = sh;
long lo = in;
System.out.println( "byte value -1 to 3 integral types: " + lo );
float fl = 0.1f;
double db = fl;
System.out.println( "float value 0.1 to double: " + db );
int in2 = -1;
float fl2 = in2;
double db2 = fl2;
System.out.println( "int value -1 to float and double: " + db2 );
int in3 = Integer.MAX_VALUE;
float fl3 = in3;
double db3 = fl3;
System.out.println( "int value " + Integer.MAX_VALUE + " to float and double: " + db3 );
char ch = 'a';
int in4 = ch;
double db4 = in4;
System.out.println( "char value '" + ch + "' to int and double: " + db4 );
System.out.println();
System.out.println( "Boxing and unboxing" );
Integer in5 = -1;
int in6 = in5;
System.out.println( "int value -1 to Integer and int: " + in6 );
Double db5 = 0.1;
double db6 = db5;
System.out.println( "double value 0.1 to Double and double: " + db6 );
}
}
- Output:
Primitive conversions
byte value -1 to 3 integral types: -1
float value 0.1 to double: 0.10000000149011612
int value -1 to float and double: -1.0
int value 2147483647 to float and double: 2.147483648E9
char value 'a' to int and double: 97.0

Boxing and unboxing
int value -1 to Integer and int: -1
double value 0.1 to Double and double: 0.1
Notice that in the second and fourth conversions the result is a slightly different value.
Not included are:
- reference conversions (for example, you can treat a reference as if it were a reference to its superclass; a small sketch follows below)
- unchecked conversions (the sort of thing that the compiler warns about if you use non-generic collections)
- capture conversions (not sure I understand, but it seems to allow wildcard generics where explicit generics would otherwise be required)
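As an illustrative sketch (not part of the program above), a widening reference conversion simply lets an object be viewed through one of its supertypes:
import java.util.ArrayList;
import java.util.List;

public class ReferenceConversion{
   public static void main(String...args){
      String s = "hello";
      Object o = s;                            // String reference widened to Object
      CharSequence cs = s;                     // String widened to an interface it implements
      List<String> list = new ArrayList<>();   // ArrayList<String> widened to List<String>
      list.add(s);
      System.out.println( o + " " + cs + " " + list );
   }
}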
jq
jq variables are simply untyped references to JSON values, and so there are no implicit conversions on assignment.
Some builtin operators are polymorphic, e.g. 1 (of type "number") and null (of type "null") can be added (for any finite number, x, "x + null" yields x); in effect, null is in such cases implicitly converted to 0.
Currently jq uses IEEE 754 64-bit numbers, which means that some conversions between integers and floats take place implicitly, but jq's only builtin numeric type is "number", so these are behind-the-scenes type conversions.
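For example, a couple of illustrative one-liners:
1 + null      # => 1    (null is effectively treated as 0 here)
2 + 2.5       # => 4.5  (integers and floats are both simply of type "number")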
However, if one were to define "is_integer" as follows:
def is_integer: type == "number" and . == floor;
then one would find:
(1/3) | is_integer # yields false
(1/3 + 1/3 + 1/3) | is_integer # yields true
For reference, jq's builtin types are "number", "boolean", "null", "object" and "array".
Julia
In general, Julia will promote a smaller sized datatype to a larger one when needed for a calculation involving mixed data types. Julia also accepts type annotations on variable declarations as shown below, though such type declarations are usually only allowed for variables that are declared within a function.
julia> function testme()
ui8::UInt8 = 1
ui16::UInt16 = ui8
ui32::UInt32 = ui8
ui64::UInt64 = ui8
flo::Float64 = ui8
return ui8, sizeof(ui8), ui16, sizeof(ui16), ui32, sizeof(ui32), ui64, sizeof(ui64), flo, sizeof(flo)
end
testme (generic function with 1 method)
julia> testme()
(0x01, 1, 0x0001, 2, 0x00000001, 4, 0x0000000000000001, 8, 1.0, 8)
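As a further illustration (a separate sketch, not part of the session above), promotion also happens automatically in mixed arithmetic:
julia> 1 + 2.5                # Int64 promoted to Float64
3.5

julia> typeof(UInt8(1) + 1)   # UInt8 promoted to the wider Int type (on a 64-bit system)
Int64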
Kotlin
In general, the only implicit type conversions allowed in Kotlin are between objects of sub-types and variables of their ancestor types. Even then, unless 'boxing' of primitive types is required, the object is not actually changed - it is simply viewed in a different way.
It follows from this that there are no implicit conversions (even 'widening' conversions) between the numeric types because there is no inheritance relationship between them. However, there is one exception to this - integer literals, which would normally be regarded as of type Int (4 bytes), can be assigned to variables of the other integral types provided they are within the range of that type. This is allowed because it can be checked statically by the compiler.
// version 1.1.2
open class C(val x: Int)
class D(x: Int) : C(x)
fun main(args: Array<String>) {
val c: C = D(42) // OK because D is a sub-class of C
println(c.x)
val b: Byte = 100 // OK because 100 is within the range of the Byte type
println(b)
val s: Short = 32000 // OK because 32000 is within the range of the Short Type
println(s)
val l: Long = 1_000_000 // OK because any Int literal is within the range of the Long type
println(l)
val n : Int? = c.x // OK because Int is a sub-class of its nullable type Int? (c.x is boxed on heap)
println(n)
}
- Output:
42
100
32000
1000000
42
Lua
For the most part, Lua is strongly typed, but there are a few cases where it will coerce if the result would be of a predictable type. Coercions are never performed during comparisons or while indexing an object.
-- During concatenation, numbers are always converted to strings. Arithmetic operations will attempt to coerce strings to numbers, or throw an error if they can't
type(123 .. "123") --> string
type(123 + "123") --> number
type(123 + "foo") --> error thrown
-- Because Lua supports multiple returns, there is a concept of "no" value when a function does not return anything, or does not return enough. If Lua is expecting a value, it will coerce these "nothing" values into nil. The same applies for lists of values in general.
function noop () end
local a = noop()
print(a) --> nil
local x, y, z = noop()
print(x, y, z) --> nil nil nil
-- As in many languages, all types can be automatically coerced into their boolean value if required. Only nil and false will coerce to false
print(not not nil, not not false, not not 1, not not "foo", not not { }) --> false false true true true
The only two explicit conversion functions offered by Lua are tonumber and tostring. Only the latter has a corresponding metamethod, so the former is usually only ever useful for converting strings, although in LuaJIT tonumber is used for converting numerical cdata into Lua numbers.
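A quick sketch of those two functions (illustrative only):
print(type(tostring(123)))       --> string
print(tonumber("0x10"))          --> 16
print(tonumber("not a number"))  --> nil
print(tostring(nil))             --> nil (but as a string)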
M2000 Interpreter
Module Checkit {
Long a=12.5
\\ 12 is a double literal
Print a=12 ' compare a long with a double
Print a=12& ' compare two long values
def decimal b
b=12.e12
Print b=12000000000000 ' compare a decimal and a double
Print b=12000000000000@ ' now we compare two decimals
z=10#+20&
Print type$(z)="Currency"
\\ now z can't change type (except using a Swap)
z=100@ ' 100 decimal
Print type$(z)="Currency"
z=12.1234567 ' round to fit to currency
Print z=12.1235
\\ swap just swap values
swap z, b
Print type$(z)="Decimal", type$(b)="Currency"
\\ if a variable is a number then can be change to an object
z=(1,2,3,4,5)
Print type$(z)="mArray"
Try {
z=100
}
Print Error$ ' Missing Object
Function AnyType$(x) {
=type$(x)
}
Print AnyType$(z)="mArray"
Print AnyType$(100@)="Decimal", AnyType$(100#)="Currency"
\\ integer 16bit, long 32 bit
Print AnyType$(100%)="Integer", AnyType$(100&)="Long"
Print AnyType$(100~)="Single", AnyType$(100)="Double"
\\ double used as unsigned 32 bit long
Print AnyType$(0x100)="Double", AnyType$(0x100&)="Long", AnyType$(0x100%)="Integer"
Print 0xFFFFFFFF=4294967295, 0xFFFFFFFF&=-1, 0xFFFF%=-1
k=100@
Print AnyType$(val(k->Long))="Long"
\\ define type for argument
Function OnlyLong$(x as Long) {
=type$(x)
}
Print OnlyLong$(k)="Long"
Try {
s=stack:=1,2,3,4
Print OnlyLong$(s)
}
Print Error$ 'Wrong object type
Print AnyType$(s)="mStiva" ' name of object for stack
Try {
Print OnlyLong$(0xFFFFFFFF)
}
Print Error$ ' overflow long
}
Checkit
Nim
As a strongly typed language, Nim is very strict regarding implicit conversion. However, it provides some mechanisms which help the user avoid writing a lot of explicit conversions.
By default, when doing operations or when assigning, Nim allows conversions between signed integer types (int8, int16, int32, int64), between unsigned integer types (uint8, uint16, uint32, uint64) and between floats (float32, float64). There are no implicit conversions between these three categories, except for literals.
Indeed, literals such as 1 are considered to be of type "int" (32 or 64 bits according to the platform), but they can be used in operations involving unsigned ints or floats. The compiler doesn’t generate code to convert; it directly provides the converted value.
There are also implicit conversions between synonym types. If we declare type MyInt = int, "int" values are fully compatible with "MyInt". If we want the values to be incompatible, we have to declare type MyInt = distinct int. To assign a value such as 1 to a variable "x" of this type, we must use x = MyInt(1).
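A minimal sketch of the difference (the type and variable names here are invented for illustration):
type
  MyInt = int                 # synonym: fully compatible with int
  StrictInt = distinct int    # distinct: explicit conversion required

var a: MyInt = 1                  # OK, implicit
var b: StrictInt = StrictInt(1)   # explicit conversion needed
# var c: StrictInt = 1            # error: type mismatch
echo a + 1                        # a MyInt behaves exactly like an int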
The third case of implicit conversion concerns tuples. It is possible to assign named tuples to unnamed tuples and unnamed tuples to named tuples without explicit conversion. But conversion between named tuples with different field names must be expressed explicitly.
The mechanisms allowing one to avoid explicit conversions are only shortcuts: the conversion is still explicit, but done elsewhere. The first one is the overloading of procedures/operators. For instance, to avoid specifying explicit conversions when doing operations mixing integers and floats, we can import the module lenientops, which defines a set of operators accepting mixed parameter types. These operators simply convert the integer parameter to a float, then apply the usual operator.
The second mechanism consists of defining a converter. If, for instance, we define a converter from bool to int, then we can add ints and bools as in weakly typed languages. This mechanism is powerful, but must be used with care as it may weaken the type system logic.
Here are some examples:
const A = 1 # "1" is considered as as int, so this is the type of "x".
const B: float = 1 # Implicit conversion from int to float done by the compiler.
var x: uint = 1 # Implicit conversion from int to uint done by the compiler.
var y = 1f32 # "y" is a float32.
var z: float64 = y # The compiler generates code to convert from float32 to float64.
# Tuple conversions.
# Note that these conversions doesn’t change the memory representation of the data.
var t1: tuple[a, b: int] # Named tuple.
var t2: tuple[c, d: int] # Named tuple, incompatible with the previous one.
var t3: (int, int) # Unnamed tuple.
t3 = (1, 2) # No conversion here.
t1 = t3 # Implicit conversion from unnamed to named.
t2 = (int, int)(t1) # Explicit conversion followed by an implicit conversion.
t3 = t2 # Implicit conversion from named to unnamed.
# Simplifying operations with "lenientops".
var f1, f2, f3: float
var i1, i2, i3: int
f1 = f2 + i1.toFloat * (f3 * i2.toFloat) + i3.toFloat
import lenientops
f1 = f2 + i1 * (f3 * i2) + i3 # Looks like implicit conversions for the user.
# Another example: / operator.
# This operator is defined for float32 and float64 operands. But is also defined
# for integer operands in "system" module. Actually, it simply converts the operands
# to float and apply the float division. This is another example of overloading
# hiding the conversions.
echo 1 / 2 # Displays 0.5.
# Converter examples.
# The following ones are absolutely not recommended as Nim is not C or Python.
converter toInt(b: bool): int = ord(b)
converter toBool(i: int): bool = i != 0
echo 1 + true # Displays 2.
if 2: echo "ok" # Displays "ok".
Oforth
Oforth allows implicit conversions only for: ==, <=, +, -, *, /, rem, pow
Classes have a priority. Most classes have 0 as priority. Among the basic classes:
Integer priority is 2
Float priority is 40
String priority is 1024
List priority is 2048
A new class is created with 0 priority unless explicitly provided.
When, for instance, + is called, it checks priorities and converts the object with the smaller priority.
Conversion uses a convertor: a method named "asClass", where Class is the name of the class of the object with the higher priority. Conversion is not really the right word here, as all these objects are immutable: new objects are created.
For instance, adding an Integer and a Float will convert the integer into a float using the asFloat method.
Let's create a Complex class with 100 as priority (please note the asComplex methods that will be used for conversions):
100 Number Class newPriority: Complex(re, im)
Complex method: re @re ;
Complex method: im @im ;
Complex method: initialize := im := re ;
Complex method: << '(' <<c @re << ',' <<c @im << ')' <<c ;
Integer method: asComplex Complex new(self, 0) ;
Float method: asComplex Complex new(self, 0) ;
Complex new(0, 1) Constant new: I
Complex method: ==(c) c re @re == c im @im == and ;
Complex method: norm @re sq @im sq + sqrt ;
Complex method: conj Complex new(@re, @im neg) ;
Complex method: +(c) Complex new(c re @re +, c im @im +) ;
Complex method: -(c) Complex new(c re @re -, c im @im -) ;
Complex method: *(c) Complex new(c re @re * c im @im * -, c re @im * @re c im * + ) ;
Complex method: inv
| n |
@re sq @im sq + asFloat ->n
Complex new(@re n /, @im neg n / ) ;
Complex method: /(c) c self inv * ;
Usage :
2 3.2 I * + println
Complex new(2, 3) 1.2 + println
Complex new(2, 3) 1.2 * println
2 Complex new(2, 3) / println
- Output:
(2,3.2)
(3.2,3)
(2.4,3.6)
(0.307692307692308,-0.461538461538462)
PARI/GP
PARI has access to all the implicit type conversions of C. In addition, certain objects are automatically simplified when stored in history objects (in addition to explicit conversions of various types). So a complex number with imaginary part an exact 0 is simplified to a t_REAL, t_INT, etc.
There are no user-defined types and hence no implicit conversion on them.
Pascal
program implicitTypeConversion;
var
i: integer;
r: real;
begin
i := 42;
r := i { integer → real }
end.
program additionalImplicitTypeConversionInExtendedPascal;
var
c: char;
s: string(20);
i: integer;
r: real;
x: complex;
begin
c := 'X';
s := c; { char → string(…) }
i := 42;
x := i; { integer → complex }
r := 123.456;
x := r { real → complex }
end.
All available implicit conversions are applicable when providing parameters to a routine call. For instance, the Extended Pascal function im, intended to return the imaginary part of a complex number, can be called with an integer value. Obviously, im(42) will always return 0 (zero), but the implicit conversion (integer to complex) is performed. This is important to remember when the conversion could cause a loss in precision. For instance, Extended Pascal’s exponentiation operator ** will promote any integer operand to a real value first.
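A minimal sketch of the above (Extended Pascal, e.g. as accepted by GNU Pascal; the program name is invented here):
program implicitCallConversion(output);
var
  r: real;
begin
  { the integer literal 42 is implicitly converted to a complex value }
  writeln(im(42));
  { both integer operands of ** are promoted to real before exponentiation }
  r := 2 ** 10;
  writeln(r)
end.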
PascalABC.NET
type MyInt = class
i: integer;
constructor (ii: integer) := i := ii;
static function operator implicit(i: integer): MyInt := new MyInt(i);
end;
begin
var ssi: shortint := 1;
var si: smallint := ssi;
var i: integer := si;
var i64: int64 := i;
var my: MyInt := i;
end.
Perl
Perl is the original DWIM language; implicit type conversion is the default mode of operation. Perl does not have static types; the concept of distinct integers/strings/floats is not present. Instead, operators define how the operands will behave. An operator that requires a string coerces its operand into a string, an operator that requires an integer coerces its operand into an integer, and so forth.
print 1 + '2'; # 3
print '1' + '2'; # 3
print 1 . 1; # 11
$a = 1;
$b = 2;
say "$a+$b"; # 1+2
# Even if you intentionally jumble the expected roles of numbers and strings, things just work out
say hex int( (2 . 0 x '2') ** substr 98.5, '2', '2' ) . 'beef'; # 1359599
On the other hand, since Perl gives you a lot of rope, you have to be careful what you do with it. The expression 'x' + 1 will return the answer 1, silently glossing over the meaningless use of an alphabetic character in addition.
This is the reason that use warnings should be present in nearly all of your Perl code. Enabling warnings will alert you that Argument "x" isn't numeric in addition.
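A short sketch of the difference that warnings make:
use strict;
use warnings;
my $n = 'x' + 1;   # warns: Argument "x" isn't numeric in addition (+)
print "$n\n";      # 1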
Phix
If a numerical operation on floating point values results in an exact integer, that is how it is stored, eg 1.5+1.5.
Likewise a numerical operation on integers may need to be stored as a float, eg 1/3 or #123456*#123456.
If a string character (or slice) is replaced with any value that will not fit in a byte, the string is automatically converted to a dword_sequence, eg
sequence s = "this"
s[3] = PI -- s is now {'t','h',3.1415926,'s'}
Phix does not, or at least tries very hard not to, "drop bits" or "clock round": 1/3 is 0.333333 not 0, and 0-1 is -1 not +#FFFFFFFF, in all cases, with a plain English fatal run-time error should a type check occur, as would happen above were s defined as a string instead of a sequence.
Python
Python does do some automatic conversions between different types but is still considered a strongly typed language. Allowed automatic conversions include those between numeric types (where it makes sense), and the general rule that empty container types, as well as zero, are considered False in a boolean context.
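A few illustrative lines (separate from the program below):
print(1 + 2.5)     # int is implicitly converted to float: 3.5
print(True + 1)    # bool is a subtype of int: 2
print(1 == 1.0)    # numeric comparison works across types: True
if not [] and not 0:
    print("empty containers and zero are considered False")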
from fractions import Fraction
from decimal import Decimal, getcontext
getcontext().prec = 60
from itertools import product
casting_functions = [int, float, complex, # Numbers
Fraction, Decimal, # Numbers
hex, oct, bin, # Int representations - not strictly types
bool, # Boolean/integer Number
iter, # Iterator type
list, tuple, range, # Sequence types
str, bytes, # Strings, byte strings
bytearray, # Mutable bytes
set, frozenset, # Set, hashable set
dict, # hash mapping dictionary
]
examples_of_types = [0, 42,
0.0 -0.0, 12.34, 56.0,
(0+0j), (1+2j), (1+0j), (78.9+0j), (0+1.2j),
Fraction(0, 1), Fraction(22, 7), Fraction(4, 2),
Decimal('0'),
Decimal('3.14159265358979323846264338327950288419716939937510'),
Decimal('1'), Decimal('1.5'),
True, False,
iter(()), iter([1, 2, 3]), iter({'A', 'B', 'C'}),
iter([[1, 2], [3, 4]]), iter((('a', 1), (2, 'b'))),
[], [1, 2], [[1, 2], [3, 4]],
(), (1, 'two', (3+0j)), (('a', 1), (2, 'b')),
range(0), range(3),
"", "A", "ABBA", "Milü",
b"", b"A", b"ABBA",
bytearray(b""), bytearray(b"A"), bytearray(b"ABBA"),
set(), {1, 'two', (3+0j), (4, 5, 6)},
frozenset(), frozenset({1, 'two', (3+0j), (4, 5, 6)}),
{}, {1: 'one', 'two': (2+3j), ('RC', 3): None}
]
if __name__ == '__main__':
print('Common Python types/type casting functions:')
print(' ' + '\n '.join(f.__name__ for f in casting_functions))
print('\nExamples of those types:')
print(' ' + '\n '.join('%-26s %r' % (type(e), e) for e in examples_of_types))
print('\nCasts of the examples:')
for f, e in product(casting_functions, examples_of_types):
try:
ans = f(e)
except BaseException:
ans = 'EXCEPTION RAISED!'
print('%-60s -> %r' % ('%s(%r)' % (f.__name__, e), ans))
- Output:
(Elided due to size)
Common Python types/type casting functions: int float complex Fraction Decimal hex oct bin bool iter list tuple range str bytes bytearray set frozenset dict Examples of those types: <class 'int'> 0 <class 'int'> 42 <class 'float'> 0.0 <class 'float'> 12.34 <class 'float'> 56.0 <class 'complex'> 0j <class 'complex'> (1+2j) <class 'complex'> (1+0j) <class 'complex'> (78.9+0j) <class 'complex'> 1.2j <class 'fractions.Fraction'> Fraction(0, 1) <class 'fractions.Fraction'> Fraction(22, 7) <class 'fractions.Fraction'> Fraction(2, 1) <class 'decimal.Decimal'> Decimal('0') <class 'decimal.Decimal'> Decimal('3.14159265358979323846264338327950288419716939937510') <class 'decimal.Decimal'> Decimal('1') <class 'decimal.Decimal'> Decimal('1.5') <class 'bool'> True <class 'bool'> False <class 'tuple_iterator'> <tuple_iterator object at 0x00000085D128E438> <class 'list_iterator'> <list_iterator object at 0x00000085D128E550> <class 'set_iterator'> <set_iterator object at 0x00000085D127EAF8> <class 'list_iterator'> <list_iterator object at 0x00000085D128E668> <class 'tuple_iterator'> <tuple_iterator object at 0x00000085D128E5C0> <class 'list'> [] <class 'list'> [1, 2] <class 'list'> [[1, 2], [3, 4]] <class 'tuple'> () <class 'tuple'> (1, 'two', (3+0j)) <class 'tuple'> (('a', 1), (2, 'b')) <class 'range'> range(0, 0) <class 'range'> range(0, 3) <class 'str'> '' <class 'str'> 'A' <class 'str'> 'ABBA' <class 'str'> 'Milü' <class 'bytes'> b'' <class 'bytes'> b'A' <class 'bytes'> b'ABBA' <class 'bytearray'> bytearray(b'') <class 'bytearray'> bytearray(b'A') <class 'bytearray'> bytearray(b'ABBA') <class 'set'> set() <class 'set'> {1, 'two', (3+0j), (4, 5, 6)} <class 'frozenset'> frozenset() <class 'frozenset'> frozenset({1, 'two', (3+0j), (4, 5, 6)}) <class 'dict'> {} <class 'dict'> {1: 'one', 'two': (2+3j), ('RC', 3): None} Casts of the examples: int(0) -> 0 int(42) -> 42 int(0.0) -> 0 int(12.34) -> 12 int(56.0) -> 56 int(0j) -> 'EXCEPTION RAISED!' int((1+2j)) -> 'EXCEPTION RAISED!' int((1+0j)) -> 'EXCEPTION RAISED!' int((78.9+0j)) -> 'EXCEPTION RAISED!' int(1.2j) -> 'EXCEPTION RAISED!' int(Fraction(0, 1)) -> 0 int(Fraction(22, 7)) -> 3 int(Fraction(2, 1)) -> 2 int(Decimal('0')) -> 0 int(Decimal('3.14159265358979323846264338327950288419716939937510')) -> 3 int(Decimal('1')) -> 1 int(Decimal('1.5')) -> 1 int(True) -> 1 int(False) -> 0 int(<tuple_iterator object at 0x00000085D128E438>) -> 'EXCEPTION RAISED!' int(<list_iterator object at 0x00000085D128E550>) -> 'EXCEPTION RAISED!' int(<set_iterator object at 0x00000085D127EAF8>) -> 'EXCEPTION RAISED!' int(<list_iterator object at 0x00000085D128E668>) -> 'EXCEPTION RAISED!' int(<tuple_iterator object at 0x00000085D128E5C0>) -> 'EXCEPTION RAISED!' int([]) -> 'EXCEPTION RAISED!' ''' dict((('a', 1), (2, 'b'))) -> {'a': 1, 2: 'b'} dict(range(0, 0)) -> {} dict(range(0, 3)) -> 'EXCEPTION RAISED!' dict('') -> {} dict('A') -> 'EXCEPTION RAISED!' dict('ABBA') -> 'EXCEPTION RAISED!' dict('Milü') -> 'EXCEPTION RAISED!' dict(b'') -> {} dict(b'A') -> 'EXCEPTION RAISED!' dict(b'ABBA') -> 'EXCEPTION RAISED!' dict(bytearray(b'')) -> {} dict(bytearray(b'A')) -> 'EXCEPTION RAISED!' dict(bytearray(b'ABBA')) -> 'EXCEPTION RAISED!' dict(set()) -> {} dict({1, 'two', (3+0j), (4, 5, 6)}) -> 'EXCEPTION RAISED!' dict(frozenset()) -> {} dict(frozenset({1, 'two', (3+0j), (4, 5, 6)})) -> 'EXCEPTION RAISED!' dict({}) -> {} dict({1: 'one', 'two': (2+3j), ('RC', 3): None}) -> {1: 'one', 'two': (2+3j), ('RC', 3): None}
alternative
One case of implicit casting and one quasi-case:
>>> 12/3 # Implicit cast from int to float, by / operator.
4.0
>>> (2+4j)/2 # The int divisor is implicitly converted to complex; the parts of a complex are always floats.
(1+2j)
>>> (11.5+12j)/0.5 # Quasi-case: the parts are still floats (23.0, 24.0); the complex repr just omits a trailing .0.
(23+24j)
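User-defined classes can also take part in these conversions through special methods; a hypothetical Meters class is used below purely for illustration (it is not part of the original entry):
>>> class Meters:
...     def __init__(self, value): self.value = value
...     def __float__(self): return float(self.value)    # used where a float is required
...     def __index__(self): return int(self.value)      # used where an int is required
...
>>> import math
>>> math.sqrt(Meters(9))   # math.sqrt() implicitly calls __float__
3.0
>>> range(Meters(3))       # range() implicitly calls __index__
range(0, 3)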
Racket
The only automatic conversions are in the numeric tower. The common case arises in operations such as +, -, * and /, when one argument is of a different numeric type from the other. For example, in all the following cases the fixnum 1 is added to more general kinds of numbers.
#lang racket
(+ 1 .1) ; ==> 1.1
(+ 1 0+1i) ; ==> 1+1i
(+ 1 1/2) ; ==> 3/2
(+ 1 (expt 10 30)) ; ==> 1000000000000000000000000000001
Raku
(formerly Perl 6)
Raku was designed with a specific goal of maximizing the principle of DWIM (Do What I Mean) while simultaneously minimizing the principle of DDWIDM (Don't Do What I Don't Mean). Implicit type conversion is a natural and basic feature.
Variable names in Raku are prefixed with a sigil. The most basic variable container type is a scalar, with the sigil dollar sign: $x. A single scalar variable in list context will be converted to a list of one element regardless of the variable's structure. (A scalar variable may be bound to any object, including a collective object. A scalar variable is always treated as a singular item, regardless of whether the object is essentially composite or unitary. There is no implicit conversion from singular to plural; a plural object within a singular container must be explicitly decontainerized somehow. Use of a subscript is considered sufficiently explicit though.)
The type of object contained in a scalar depends on how you assign it and how you use it.
my $x;
$x = 1234; say $x.WHAT; # (Int) Integer
$x = 12.34; say $x.WHAT; # (Rat) Rational
$x = 1234e-2; say $x.WHAT; # (Num) Floating point Number
$x = 1234+i; say $x.WHAT; # (Complex)
$x = '1234'; say $x.WHAT; # (Str) String
$x = (1,2,3,4); say $x.WHAT; # (List)
$x = [1,2,3,4]; say $x.WHAT; # (Array)
$x = 1 .. 4; say $x.WHAT; # (Range)
$x = (1 => 2); say $x.WHAT; # (Pair)
$x = {1 => 2}; say $x.WHAT; # (Hash)
$x = {1, 2}; say $x.WHAT; # (Block)
$x = sub {1}; say $x.WHAT; # (Sub) Code Reference
$x = True; say $x.WHAT; # (Bool) Boolean
Objects may be converted between various types many times during an operation. Consider the following line of code.
say :16(([+] 1234.ords).sqrt.floor ~ "beef");
In English: Take the floor of the square root of the sum of the ordinals of the digits of the integer 1234, concatenate that number with the string 'beef', interpret the result as a hexadecimal number and print it.
Broken down step by step:
my $x = 1234; say $x, ' ', $x.WHAT; # 1234 (Int)
$x = 1234.ords; say $x, ' ', $x.WHAT; # 49 50 51 52 (List)
$x = [+] 1234.ords; say $x, ' ', $x.WHAT; # 202 (Int)
$x = ([+] 1234.ords).sqrt; say $x, ' ', $x.WHAT; # 14.2126704035519 (Num)
$x = ([+] 1234.ords).sqrt.floor; say $x, ' ', $x.WHAT; # 14 (Int)
$x = ([+] 1234.ords).sqrt.floor ~ "beef"; say $x, ' ', $x.WHAT; # 14beef (Str)
$x = :16(([+] 1234.ords).sqrt.floor ~ "beef"); say $x, ' ', $x.WHAT; # 1359599 (Int)
Some types are not implicitly converted.
For instance, you must explicitly cast to Complex numbers and FatRat numbers.
(A normal Rat number has a denominator that is limited to 64 bits, with underflow to floating point to prevent performance degradation; a FatRat, in contrast, has an unlimited denominator size, and can chew up all your memory if you're not careful.)
my $x;
$x = (-1).sqrt; say $x, ' ', $x.WHAT; # NaN (Num)
$x = (-1).Complex.sqrt; say $x, ' ', $x.WHAT; # 6.12323399573677e-17+1i (Complex)
$x = (22/7) * 2; say $x, ' ', $x.WHAT; # 6.285714 (Rat)
$x /= 10**10; say $x, ' ', $x.WHAT; # 0.000000000629 (Rat)
$x /= 10**10; say $x, ' ', $x.WHAT; # 6.28571428571429e-20 (Num)
$x = (22/7).FatRat * 2; say $x, ' ', $x.WHAT; # 6.285714 (FatRat)
$x /= 10**10; say $x, ' ', $x.WHAT; # 0.000000000629 (FatRat)
$x /= 10**10; say $x, ' ', $x.WHAT; # 0.0000000000000000000629 (FatRat)
User-defined types will support implicit casting if the object has a Bridge method that tells Raku how to do so, or if the operators in question supply multiple-dispatch variants that allow for coercions (a sketch appears after the operator list below). Note that Raku does not support implicit assignment coercion to typed variables. However, different-sized storage types (int16, int32, int64, for example) are not considered different types; such an assignment merely enforces a constraint that will throw an exception if the size is exceeded. (The calculations on the right side of the assignment are done in an arbitrarily large type such as Int.)
Types may be explicitly cast by using a bridge method (.Int, .Rat, .Str, whatever) or by using a coercion operator:
- + or - : numify
- ~ : stringify
- ? or ! : boolify
- i (postfix) : complexify
- $() : singularize
- @() : pluralize
- %() : hashify
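As an illustration of the Bridge mechanism, here is a minimal sketch; the Celsius class and its degrees attribute are invented for this example (they are not part of the original entry), and it assumes Rakudo's (Real, Real) operator candidates that fall back to .Bridge:
# A user-defined type that participates in numeric operations implicitly
# by doing the Real role and supplying a Bridge method.
class Celsius does Real {
    has $.degrees;
    method Bridge { $.degrees.Num }   # how to turn a Celsius into a core number
}

my $c = Celsius.new(degrees => 21.5);
say $c + 0.5;         # 22    -- the Celsius value is numified via Bridge
say ($c + 0.5).WHAT;  # (Num)
Because the core arithmetic operators provide (Real, Real) candidates that fall back to .Bridge, no explicit conversion is needed at the call site.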
RPL
To make basic calculations easier, RPL allows mixing floating-point real numbers with either complex numbers or binary integers in the + - * / operations.
(2,3) 2 * #37h 16 +
- Output:
2: (4,6)
1: #47h
REXX
The REXX language has conversion, if normalization can be regarded as a type of conversion. Normalization can remove all blanks from (numeric) literals, leading plus (+) signs, the decimal point (if it's not significant), and leading and/or trailing zeroes (except for zero itself); remove insignificant leading zeroes in the exponent; add a plus sign (+) for any positive exponent; and capitalize the "e" in an exponentiated number.

Almost all numerical expressions can be normalized after computation; a few examples are shown below. Other expressions with non-numeric values are treated as simple literals.

Note that REXX can store a number with leading signs and with leading, trailing, and sometimes imbedded blanks (which can only occur after a leading sign).

Also note how numbers can be assigned using apostrophes ['] or quotes ["].
/*REXX program demonstrates various ways REXX can convert and/or normalize some numbers.*/
digs=digits() ; say digs /* 9, the default.*/
a=.1.2...$ ; say a /* .1.2...$ */
a=+7 ; say a /* 7 */
a='+66' ; say a /* +66 */
a='- 66.' ; say a /* - 66. */
a=- 66 ; say a /* -66 */
a=- 66. ; say a /* -66 */
a=+ 66 ; say a /* 66 */
a=1 ; b=2.000 ; x=a+b ; say x /* 3.000 */
a=1 ; b=2.000 ; x=(a+b)/1 ; say x /* 3 */
a=+2 ; b=+3 ; x=a+b ; say x /* 5 */
a=+5 ; b=+3e1 ; x=a+b ; say x /* 35 */
a=1e3 ; say a /* 1E3 */
a="1e+003" ; say a /* 1e+003 */
a=1e+003 ; say a /* 1E+003 */
a=1e+003 ; b=0 ; x=a+b ; say x /* 1000 */
a=12345678912 ; say a /* 12345678912 */
a=12345678912 ; b=0 ; x=a+b ; say x /* 1.23456789E+10 */
- Output:
9
.1.2...$
7
+66
- 66.
-66
-66
66
3.000
3
5
35
1E3
1e+003
1E+003
1000
12345678912
1.23456789E+10
Scala
See the Tour of Scala section about implicit conversions.
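For illustration only, here is a minimal sketch of a user-defined implicit conversion in Scala 2 syntax; the Complex case class and the conversion name are invented for this example and are not taken from the linked tour:
import scala.language.implicitConversions

case class Complex(re: Double, im: Double) {
  def +(other: Complex): Complex = Complex(re + other.re, im + other.im)
}

object ImplicitDemo extends App {
  // User-defined implicit conversion: an Int used where a Complex is expected
  // is converted automatically by the compiler.
  implicit def intToComplex(i: Int): Complex = Complex(i, 0)

  println(Complex(1, 2) + 3)   // Complex(4.0,2.0) -- the Int 3 was converted implicitly
}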
Sidef
Since version 3.00, all the number types (int, rat, float and complex) are unified in the Number class and all the needed conversions are done implicitly. Methods from other classes also make implicit conversions where possible.
> 1+"2" #=> 3
> "1"+2 #=> 12
> sqrt(-4) #=> 2i
> ("a" + [1,2]) #=> a[1,2]
> ('ha' * '3') #=> hahaha
> ('ha' * true) #=> ha
Tcl
Virtually all type conversions in Tcl are implicit.
A value is an integer (or a string, or a list, or …) because that is how you are using it.
The only true explicit type conversion operations are some of the functions in the expression sub-language (int(), double(), etc.).
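The explicit forms look like this (a small illustrative snippet, not from the original entry); the implicit conversions are demonstrated below:
set x 3.7
expr { int($x) }     ;# 3    -- explicit truncation to an integer
expr { double(42) }  ;# 42.0 -- explicit conversion to a floating point value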
- Integer conversion
set value "123" incr someVar $value # $value will now hold an integer (strictly, one of many integer-related types) with value 123
- Float conversion
set value "1.23" expr {$value + 3.5} # $value will now hold a double-precision IEEE floating point number that is (approx.) 1.23
- String conversion
set value [expr {123 + 456}]
string length $value
# $value will now hold a string (of length 3)
- List conversion
set value {a b c d}
llength $value
# $value will now hold a list (of length 4)
- Dictionary conversion
set value {a b c d}
dict size $value
# $value will now hold a dictionary (of size 2)
There are many other value types (command names, variable names, subcommand index names, etc.) but user code would not normally seek to explicitly convert to those.
Defining a new type requires writing an extension to Tcl in C (or whatever the host programming language is, so Java for JTcl); the interfaces for doing this are not directly exposed to the Tcl script level because they require direct memory access, which Tcl normally does not permit in order to promote overall process stability.
V (Vlang)
A small primitive type can be automatically promoted if it fits completely into the data range of the type on the other side.
These are the allowed possibilities:
   i8 → i16 → int → i64
                  ↘     ↘
                    f32 → f64
                  ↗     ↗
   u8 → u16 → u32 → u64 ⬎
      ↘     ↘     ↘      ptr
   i8 → i16 → int → i64 ⬏
For example, an int value can be automatically promoted to f64 or i64, but not to u32.
Note: u32 would mean loss of the sign for negative values.
Promotion from int to f32, however, is currently done automatically (but can lead to precision loss).
u := u16(12)
x := f32(45.6)
a := 75
b := 14.7
c := u + a // c is of type `int` - automatic promotion of `u`'s value
println(c) // 87
d := b + x // d is of type `f64` - automatic promotion of `x`'s value
println(d) // 60.2999984741211
Wren
As far as the built-in types are concerned, Wren does not really have any implicit conversions.
There is only one built-in numeric type, Num, which represents a double-precision floating-point number. When the 'bit' operators are used, Num values are converted internally to 32-bit unsigned integers, though this type is not actually exposed to the user, who only knows about it from its effects.
In conditional expressions, 'null' acts as if it were false and any other value acts as if it were true, but these aren't really implicit conversions to Bool; it's just the way the language works.
Also, adding implicit conversions to the built-in types is not an option, as inheriting from these types is not allowed.
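For instance (a short illustrative snippet, not part of the original entry):
// The bit operators internally treat their Num operands as 32-bit unsigned integers.
System.print(0xFF & 0x0F)                 // 15
System.print(1 << 3)                      // 8

// Only false and null are 'falsy'; no Bool is actually created from null here.
System.print(null ? "truthy" : "falsy")   // falsy
System.print(0 ? "truthy" : "falsy")      // truthy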
However, you can define implicit conversions for user defined types. For example, BigInts can be generated from integral Nums or Strings and, when doing operations on BigInts, values of these types are automatically converted to BigInts so the operations can succeed.
import "./big" for BigInt
var b1 = BigInt.new(32)
var b2 = BigInt.new ("64")
var b3 = b1 + b2 + 2 + "2"
System.print(b3) // 100
XPL0
XPL0 doesn't have implicit type conversions. It only has two basic data types: integer and real. Conversions must be done explicitly. For example:
int I;
real R;
I:= fix(R);
R:= float(I);
There is a third declaration type called 'character' (usually abbreviated 'char'). Its variables are the same as integers except when they are indexed, as for arrays. An indexed character variable accesses bytes, whereas an indexed integer variable accesses integers.
Strings are one-dimensional arrays consisting of bytes containing ASCII characters.
Booleans are represented by integers. Zero is false, and non-zero is true.
Real literals are distinguished from integers by including a decimal point or exponent. For example: 12. or 6e12
Z80 Assembly
As with other assembly languages, types don't really exist in the sense that we're used to thinking about them. All the functionality is there, but there are no "rules" forbidding you from doing anything in particular with data. Type "conversion" doesn't happen by itself, but there is a sort of "type casting", in the sense that the programmer can decide whether a particular bit pattern represents an integer, a pointer, etc. whenever the programmer wants. Type conversion in the "modern" sense (such as an integer becoming a float or a string) has to be done manually.
When a constant or 16-bit register is used as an operand and is in parentheses, it is treated as a pointer to memory for that instruction only. The type of the data pointed to depends on the instruction.
LD HL,$4000 ;load HL with the number 0x4000
LD A,(HL) ;load the byte stored at memory address 0x4000 into A.
LD DE,($C000) ;load DE with the 16-bit value stored at $C000.
Note: The Game Boy cannot load or store a 16-bit register pair through a constant pointer the way LD DE,($C000) does above. Its constant-pointer forms are limited to:
- Loading or storing the accumulator A at a constant 16-bit address.
- Reading/writing in the $FF00-$FFFF memory range (the LDH shortcuts).
- Storing the stack pointer at a constant 16-bit address. (The stack pointer can only be saved this way; it cannot be loaded this way.)
The individual 8-bit registers that form a register pair can be operated on separately, or as a single 16-bit value.
LD BC,$1234
INC B ;BC = $1334
INC C ;BC = $1335
INC BC ;BC = $1336
zkl
Type conversions usually just happen (i.e. the object knows what it wants and attempts to convert), but sometimes the conversion needs to be explicit (i.e. the conversion is ambiguous, the object doesn't know about the other type, or it is too lazy to convert).
zkl: 1+"2"
3
zkl: "1"+2
12
zkl: 1/2
0
zkl: (1).toFloat()/2
0.5
zkl: T("one",1,"two",2).toDictionary()
D(two:2,one:1)
zkl: T("one",1,"two",2).toDictionary().toList()
L(L("two",2),L("one",1))
zkl: T("one",1,"two",2).toDictionary().toList().toDictionary()
D(two:2,one:1)
etc