Implicit type conversion

From Rosetta Code
Implicit type conversion is a draft programming task. It is not yet considered ready to be promoted as a complete task, for reasons that should be found in its talk page.

Some programming languages have implicit type conversion. Type conversion is also known as coercion.

For example: <lang algol68>COMPL z := 1;</lang>Here, in the programming language ALGOL 68, the assignment ":=" implicitly converts the integer 1 to a complex number.

The alternative would be to explicitly convert a value from one type to another, using a function or some other mechanism (e.g. an explicit cast).
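As a sketch of the distinction (Python is used here purely for illustration; it is not one of the task's required languages):

```python
# Implicit conversion: mixed-type arithmetic promotes the "smaller" operand.
z = 1 + 2.5       # the int 1 is implicitly converted to float
w = 1 + 2j        # the int 1 is implicitly converted to complex

# Explicit conversion: a cast is requested by name, via a constructor function.
x = complex(1)    # the explicit counterpart of the implicit promotion above

print(type(z).__name__, type(w).__name__, x == (1+0j))  # float complex True
```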

The following code samples demonstrate the various type conversions in each language, and give an example of an implicit type conversion path from the smallest possible variable size to the largest possible variable size (where the size of the underlying variable's data strictly increases).

In strongly typed languages some types are actually mutually incompatible. In such cases the language may have disjoint, or even branching, type conversion paths. (Where this occurs in a specific language, it is demonstrated in the code samples below.)

Languages that don't support any implicit type conversion are detailed in the /Omit categories at the bottom of this page.

Indicate whether the language supports user-defined type conversion definitions, and give an example of such a definition (e.g. define an implicit type conversion from real to complex numbers, or from char to an array of char of length 1).
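As a sketch of what such a user-defined definition can look like (Python shown for illustration; the `Celsius` type and its fields are hypothetical), a class can supply conversion hooks that the language invokes when a value of that type appears where another type is expected:

```python
class Celsius:
    """Hypothetical wrapper type carrying a user-defined conversion."""
    def __init__(self, degrees):
        self.degrees = degrees

    def __float__(self):
        # Invoked by float(...) wherever a float is required.
        return float(self.degrees)

    def __add__(self, other):
        # Mixed-type arithmetic: coerce both operands to float first.
        return float(self) + float(other)

t = Celsius(21)
print(float(t) + 0.5)  # the __float__ hook supplies the conversion -> 21.5
print(t + 4)           # mixed arithmetic via the coercion in __add__ -> 25.0
```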

ALGOL 68

Works with: ALGOL 68 version Revision 1
Works with: ALGOL 68G version Any - tested with release algol68g-2.6.

File: implicit_type_conversion.a68<lang algol68>#!/usr/bin/a68g --script #

  # -*- coding: utf-8 -*- #

main:(

  # a representative sample of builtin types #
   BOOL b; INT bool width = 1;
   BITS bits; LONG BITS lbits;
   BYTES bytes; LONG BYTES lbytes; # lbytes := bytes is not illegal #
   [bits width]BOOL array bool;
   [long bits width]BOOL long array bool;
   CHAR c; INT char width = 1;
   STRING array char;
   SHORT SHORT INT ssi; SHORT INT si; INT i; LONG INT li; LONG LONG INT lli;
   SHORT SHORT REAL ssr; SHORT REAL sr; REAL r; LONG REAL lr; LONG LONG REAL llr;
   SHORT SHORT COMPL ssz; SHORT COMPL sz; COMPL z; LONG COMPL lz; LONG LONG COMPL llz;
   INT long long compl width = 2 * long long real width;
   STRUCT (BOOL b, CHAR c, INT i, REAL r, COMPL z)cbcirz;  # NO implicit casting #
   REF []INT rai;
   FORMAT long long real fmt = $g(-0,long long real width-2)$;
   FORMAT long long compl fmt = $f(long long real fmt)"+"f(long long real fmt)"i"$;
  # type conversion starting points #
   b := TRUE;
   c := "1";
   ssi := SHORT SHORT 1234; i := 1234;
  # a representative sample of implied casts for subtypes of personality INT/REAL/COMPL #
    si:=ssi;
     i:=ssi;  i:=si;
    li:=ssi; li:=si;  li:=i;
   lli:=ssi;lli:=si; lli:=i;  lli:=li;
   ssr:=ssi;
    sr:=ssr; sr:=ssi; sr:=si;
     r:=ssr;  r:=sr;   r:=ssi;  r:=si;   r:=i;
    lr:=ssr; lr:=sr;  lr:=r;   lr:=ssi; lr:=si;  lr:=i;   lr:=li;
   llr:=ssr;llr:=sr; llr:=r;  llr:=lr; llr:=ssi;llr:=si; llr:=i;  llr:=li; llr:=lli;
   ssz:=ssr;ssz:=ssi;
    sz:=ssz; sz:=ssr; sz:=sr;  sz:=ssi; sz:=si;
     z:=ssz;  z:=sz;   z:=ssr;  z:=sr;   z:=r;    z:=ssi;  z:=si;   z:=i;
    lz:=ssz; lz:=sz;  lz:=z;   lz:=ssr; lz:=sr;  lz:=r;   lz:=lr;  lz:=ssi; lz:=si;  lz:=i;   lz:=li;
   llz:=ssz;llz:=sz; llz:=z;  llz:=lz; llz:=ssr;llz:=sr; llz:=r;  llz:=lr; llz:=llr; llz:=ssi;llz:=si; llz:=i;  llz:=li; llz:=lli;
  # conversion branch SHORT SHORT INT => LONG LONG COMPL #
  # a summary result, using the longest sizeof increasing casting path #
   printf((long long compl fmt,llz:=(llr:=(lr:=(i:=(si:=ssi)))),$
           $l"  was increasingly cast"$,
           $" from "g(-0)$, long long compl width, long long real width, long real width,
                           int width, short int width, short short int width, $" digits"$,
           $" from "g(-0)l$,ssi ));
  # conversion branch BITS => []BOOL #
   bits := 16rf0f0ff00;
   lbits := bits;
   printf(($g$,"[]BOOL := LONG BITS := BITS - implicit widening: ",array bool := bits, $l$));
  # conversion branch BYTES => []CHAR #
   bytes := bytes pack("0123456789ABCDEF0123456789abcdef");
   long array bool := LONG 2r111;
   printf(($g$,"[]CHAR := LONG BYTES := BYTES - implicit widening: ",array char := bytes, $l$));
  # deproceduring conversion branch PROC PROC PROC INT => INT #
   PROC pi = INT: i;
   PROC ppi = PROC INT: pi;
   PROC pppi = PROC PROC INT: ppi;
   printf(($g$,"PROC PROC PROC INT => INT - implicit deproceduring^3: ",pppi, $l$));
  # dereferencing conversion branch REF REF REF INT => INT #
   REF INT ri := i;
   REF REF INT rri := ri;
   REF REF REF INT rrri := rri;
   printf(($g$,"REF REF REF INT => INT - implicit dereferencing^3: ",rrri, $l$));
  # rowing conversion branch INT => []INT => [,]INT => [,,]INT #
  # a representative sample of implied casts, type pointer #
   rai := ai; # starts at the first element of ai #
   rai := i;  # an array of length 1 #
   FLEX[0]INT ai := i;
   FLEX[0,0]INT aai := ai;
   FLEX[0,0,0]INT aaai := aai;
   printf(($g$,"INT => []INT => [,]INT => [,,]INT - implicit rowing^3: ",aaai, $l$));
  # uniting conversion branch UNION(VOID, INT) => UNION(VOID,INT,REAL) => UNION(VOID,INT,REAL,COMPL) #
   UNION(VOID,INT) ui := i;
   UNION(VOID,INT,REAL) uui := ui;
   UNION(VOID,INT,REAL,COMPL) uuui := uui;
   printf(($g$,"INT => UNION(VOID, INT) => UNION(VOID,INT,REAL,COMPL) - implicit uniting^3: ",(uuui|(INT i):i), $l$));
 SKIP

)</lang>

Output:
1234.0000000000000000000000000000000000000000000000000000000000000+.0000000000000000000000000000000000000000000000000000000000000i
  was increasingly cast from 126 from 63 from 28 from 10 from 10 from 10 digits from 1234
[]BOOL := LONG BITS := BITS - implicit widening: TTTTFFFFTTTTFFFFTTTTTTTTFFFFFFFF
[]CHAR := LONG BYTES := BYTES - implicit widening: 0123456789ABCDEF0123456789abcdef
PROC PROC PROC INT => INT - implicit deproceduring^3:       +1234
REF REF REF INT => INT - implicit dereferencing^3:       +1234
INT => []INT => [,]INT => [,,]INT - implicit rowing^3:       +1234
INT => UNION(VOID, INT) => UNION(VOID,INT,REAL,COMPL) - implicit uniting^3:       +1234

AWK

<lang AWK>

  # syntax: GAWK -f IMPLICIT_TYPE_CONVERSION.AWK

BEGIN {

   n = 1     # number
   s = "1"   # string
   a = n ""  # number coerced to string
   b = s + 0 # string coerced to number
   print(n,s,a,b)
   print(("19" 91) + 4) # string and number concatenation
   c = "10e1"
   print(c,c+0)
   exit(0)

} </lang>

Output:
1 1 1 1
1995
10e1 100

C

<lang c>#include <stdio.h>

int main() { /* a representative sample of builtin types */

   unsigned char uc; char c;
   enum{e1, e2, e3}e123;
   short si; int i; long li;
   unsigned short su; unsigned u; unsigned long lu;
   float sf; float f; double lf; long double llf;
   union {char c; unsigned u; int i; float f; }ucuif;  /* manual casting only */
   struct {char c; unsigned u; int i; float f; }scuif; /* manual casting only */
   int ai[99];
   int (*rai)[99];
   int *ri;
   uc = '1';

/* a representative sample of implied casts for subtypes of personality int/float */

   c=uc;
   si=uc; si=c;
   su=uc; su=c; su=si;
   i=uc;  i=c;  i=si;  i=su;
   e123=i; i=e123;
   u=uc;  u=c;  u=si;  u=su;  u=i;
   li=uc; li=c; li=si; li=su; li=i; li=u;
   lu=uc; lu=c; lu=si; lu=su; lu=i; lu=u; lu=li;
   sf=uc; sf=c; sf=si; sf=su; sf=i; sf=u; sf=li; sf=lu;
   f=uc;  f=c;  f=si;  f=su;  f=i;  f=u;  f=li;  f=lu;  f=sf;
   lf=uc; lf=c; lf=si; lf=su; lf=i; lf=u; lf=li; lf=lu; lf=sf; lf=f;
   llf=uc;llf=c;llf=si;llf=su;llf=i;llf=u;llf=li;llf=lu;llf=sf;llf=f;llf=lf;

/* ucuif = i; no implied cast; try: ucuif.i = i */
/* ai = i; no implied cast; try: rai = &i */

/* a representative sample of implied casts, type pointer */

   rai = ai; /* starts at the first element of ai */
   ri = ai;  /* points to the first element of ai */

/* a summary result, using the longest sizeof increasing casting path */

   printf("%Lf was increasingly cast from %zu from %zu from %zu from %zu from %zu bytes from '%c'\n",
          llf=(lf=(i=(si=c))), sizeof llf, sizeof lf, sizeof i, sizeof si, sizeof c, c);

}</lang>

Output:
49.000000 was increasingly cast from 12 from 8 from 4 from 2 from 1 bytes from '1'

D

This covers a large sample of built-in types and a few library-defined types. <lang d>void main() {

   import std.stdio, std.typetuple, std.variant, std.complex;
   enum IntEnum : int { A, B }
   union IntFloatUnion { int x; float y; }
   struct IntStruct { int x; }
   class ClassRef {}
   class DerivedClassRef : ClassRef {}
   alias IntDouble = Algebraic!(int, double);
   alias ComplexDouble = Complex!double;
    // On a 64 bit system size_t and ptrdiff_t are twice as large,
    // so this changes a few of the implicit assignment results.
   writeln("On a ", size_t.sizeof * 8, " bit system:\n");
   // Represented as strings so size_t prints as "size_t"
   // instead of uint/ulong.
   alias types = TypeTuple!(
       `IntEnum`, `IntFloatUnion`, `IntStruct`,
       `bool`,
       `char`, `wchar`, `dchar`,
       `ubyte`, `ushort`, `uint`, `ulong`, /*`ucent`,*/
       `byte`, `short`, `int`, `long`, /*`cent`,*/
       `size_t`, `hash_t`, `ptrdiff_t`,
       `float`, `double`, `real`,
       `int[2]`, `int[]`, `int[int]`,
       `int*`, `void*`, `ClassRef`, `DerivedClassRef`,
       `void function()`, `void delegate()`,
       `IntDouble`, `ComplexDouble`,
   );
   foreach (T1; types) {
       mixin(T1 ~ " x;");
       write("A ", T1, " can be assigned with: ");
       foreach (T2; types) {
           mixin(T2 ~ " y;");
           static if (__traits(compiles, x = y))
               write(T2, " ");
       }
       writeln;
   }
   writeln;
   // Represented as strings so 1.0 prints as "1.0" instead of "1."
   alias values = TypeTuple!(
       `true`, `'x'`, `"hello"`, `"hello"w`, `"hello"d`,
       `0`, `255`, `1L`, `2.0f`, `3.0`, `4.0L`, `10_000_000_000L`,
       `[1, 2]`, `[3: 4]`,
       `void*`, `null`,
   );
   foreach (T; types) {
       mixin(T ~ " x;");
       write("A ", T, " can be assigned with value literal(s): ");
       foreach (y; values) {
           static if (__traits(compiles, x = mixin(y)))
               write(y, " ");
       }
       writeln;
   }
   // Few extras:
   int[] a1;
   const int[] a2 = a1;                 // OK.
   // immutable int[] a3 = a1;          // Not allowed.
   // immutable int[] a4 = a2;          // Not allowed.
   int[int] aa1;
   const int[int] aa2 = aa1;            // OK.
   //immutable int[int] aa3 = aa1;      // Not allowed.
   //immutable int[int] aa4 = aa2;      // Not allowed.
   void foo() {}
   void delegate() f1 = &foo;           // OK.
   void bar() pure nothrow @safe {}
   void delegate() f2 = &bar;           // OK.
   //void delegate() pure f3 = &foo;    // Not allowed.
   //void delegate() nothrow f4 = &foo; // Not allowed.
   //void delegate() @safe f5 = &foo;   // Not allowed.
   static void spam() {}
   void function() f6 = &spam;          // OK.
   //void function() f7 = &foo;         // Not allowed.

}</lang>

Output:
On a 32 bit system:

A IntEnum can be assigned with: IntEnum 
A IntFloatUnion can be assigned with: IntFloatUnion 
A IntStruct can be assigned with: IntStruct 
A bool can be assigned with: bool 
A char can be assigned with: bool char ubyte byte 
A wchar can be assigned with: bool char wchar ubyte ushort byte short 
A dchar can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t 
A ubyte can be assigned with: bool char ubyte byte 
A ushort can be assigned with: bool char wchar ubyte ushort byte short 
A uint can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t 
A ulong can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t 
A byte can be assigned with: bool char ubyte byte 
A short can be assigned with: bool char wchar ubyte ushort byte short 
A int can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t 
A long can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t 
A size_t can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t 
A hash_t can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t 
A ptrdiff_t can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t 
A float can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t float double real 
A double can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t float double real 
A real can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t float double real 
A int[2] can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint byte short int size_t hash_t ptrdiff_t int[2] int[] 
A int[] can be assigned with: int[2] int[] 
A int[int] can be assigned with: int[int] 
A int* can be assigned with: int* 
A void* can be assigned with: int* void* void function() 
A ClassRef can be assigned with: ClassRef DerivedClassRef 
A DerivedClassRef can be assigned with: DerivedClassRef 
A void function() can be assigned with: void function() 
A void delegate() can be assigned with: void delegate() 
A IntDouble can be assigned with: int ptrdiff_t double IntDouble 
A ComplexDouble can be assigned with: IntEnum bool char wchar dchar ubyte ushort uint ulong byte short int long size_t hash_t ptrdiff_t float double real ComplexDouble 

A IntEnum can be assigned with value literal(s): 
A IntFloatUnion can be assigned with value literal(s): 
A IntStruct can be assigned with value literal(s): 
A bool can be assigned with value literal(s): true 0 1L 
A char can be assigned with value literal(s): true 'x' 0 255 1L 
A wchar can be assigned with value literal(s): true 'x' 0 255 1L 
A dchar can be assigned with value literal(s): true 'x' 0 255 1L 
A ubyte can be assigned with value literal(s): true 'x' 0 255 1L 
A ushort can be assigned with value literal(s): true 'x' 0 255 1L 
A uint can be assigned with value literal(s): true 'x' 0 255 1L 
A ulong can be assigned with value literal(s): true 'x' 0 255 1L 10_000_000_000L 
A byte can be assigned with value literal(s): true 'x' 0 1L 
A short can be assigned with value literal(s): true 'x' 0 255 1L 
A int can be assigned with value literal(s): true 'x' 0 255 1L 
A long can be assigned with value literal(s): true 'x' 0 255 1L 10_000_000_000L 
A size_t can be assigned with value literal(s): true 'x' 0 255 1L 
A hash_t can be assigned with value literal(s): true 'x' 0 255 1L 
A ptrdiff_t can be assigned with value literal(s): true 'x' 0 255 1L 
A float can be assigned with value literal(s): true 'x' 0 255 1L 2.0f 3.0 4.0L 10_000_000_000L 
A double can be assigned with value literal(s): true 'x' 0 255 1L 2.0f 3.0 4.0L 10_000_000_000L 
A real can be assigned with value literal(s): true 'x' 0 255 1L 2.0f 3.0 4.0L 10_000_000_000L 
A int[2] can be assigned with value literal(s): true 'x' 0 255 1L [1, 2] null 
A int[] can be assigned with value literal(s): [1, 2] null 
A int[int] can be assigned with value literal(s): [3: 4] null 
A int* can be assigned with value literal(s): null 
A void* can be assigned with value literal(s): null 
A ClassRef can be assigned with value literal(s): null 
A DerivedClassRef can be assigned with value literal(s): null 
A void function() can be assigned with value literal(s): null 
A void delegate() can be assigned with value literal(s): null 
A IntDouble can be assigned with value literal(s): 0 255 3.0 
A ComplexDouble can be assigned with value literal(s): true 'x' 0 255 1L 2.0f 3.0 4.0L 10_000_000_000L 

Déjà Vu

The only implicit conversion currently permitted is boolean to number:

<lang dejavu><1:1> #interactive session
<2:1> !. + 3 true #boolean true is equal to 1
4
<3:1> !. * 2 false #boolean false is equal to 0
0</lang>

J

Overview: Types are viewed as a necessary evil - where possible mathematical identities are given precedence over the arbitrariness of machine representation.

J has 4 "static types" (noun, verb, adverb, conjunction). There are almost no implicit conversions between these types (but a noun can be promoted to a constant verb in certain static contexts or a noun representation of a verb can be placed in a context which uses that definition to perform the corresponding operation).

Translating from "traditional english" to "contemporary computer science" nomenclature: Nouns are "data", verbs are "functions", adverbs and conjunctions are "metafunctions".

Nouns break down into four disjoint collections of subtypes: boxed, literal, numeric and symbolic (which is rarely used). Most of J's implicit conversions happen within the first three subtypes. (And J supports some "extra conversions" between these types in some cases where no values are involved. For example a list which contains no characters (literals) may be used as a list which contains no numbers (numerics)).

There is one type of box, two types of literals (8 bit wide and 16 bit wide), and a variety of types of numerics. Sparse arrays are also (partially) supported and treated internally as distinct datatypes, implemented under the covers as a sequence of arrays (one to indicate which indices have values, and another to hold the corresponding values, and also a default value to fill in the rest).

The primary implicit type conversion in J applies to numeric values. In particular, J tries to present numeric values as "analytic"; that is, numeric values which are "the same" should be presented to the user (J programmer) as "the same" in as many different contexts as is feasible, irrespective of their representation in the computer's model or layout in memory. So, for example, on a 32-bit machine, `(2^31)-1` is the largest value a signed integer, which is stored in 4 bytes, can represent; in J, incrementing this value (adding 1) causes the underlying representation to switch to an IEEE double-precision floating point number. In other words `1+(2^31)-1` doesn't overflow: it represents `2^31` exactly (using double the memory: 8 bytes). Similar comments apply to the two varieties of character values (ASCII and Unicode), though the implications are more straightforward and less interesting.
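The promotion described above can be sketched in Python (a simulation of the behaviour, not J's actual implementation; the function name is made up): keep a result in a 32-bit integer while it fits, and promote it to an IEEE double when it no longer does.

```python
INT32_MAX = 2**31 - 1
INT32_MIN = -2**31

def add_promoting(a, b):
    """Simulate J-style promotion: the sum stays an integer while it is
    representable in a signed 32-bit word; otherwise it is promoted to
    a float (an IEEE double) instead of overflowing or wrapping."""
    r = a + b
    if INT32_MIN <= r <= INT32_MAX:
        return r          # still fits in 32 bits: remains an integer
    return float(r)       # promoted: now a double, so no overflow

print(add_promoting(INT32_MAX - 1, 1))   # 2147483647, still an int
print(add_promoting(INT32_MAX, 1))       # 2147483648.0, promoted to float
```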

Having said all that, because of the potential performance penalties involved, J does not stretch this abstraction too far. For example, numbers will never be automatically promoted to the (available, but expensive) arbitrary precision format, nor will values be automatically "demoted" (automatic demotion, paired with automatic promotion, has the potential to cause cycles of expansion and contraction during calculation of intermediate values; this, combined with J's homogeneous array-oriented nature, which requires an entire array to be promoted or demoted along with any one of its values, means that including automatic demotion would probably hurt programs' performance more often than it benefited them).

[And, though it is unrelated to type conversion, note that hiding the details of representation also requires J to treat comparison in a tolerant fashion: that is, floating point values are considered identical if they are equal up to some epsilon (2^-44 by default) times the value with the larger magnitude, and character values are identical if their code points are equal. Intolerant or exact comparison is available, though, should that be needed.]

The rich J datatypes: <lang J>
   datatype=: 3 : 0   NB. data type identification verb
 n=. 1 2 4 8 16 32 64 128 1024 2048 4096 8192 16384 32768 65536 131072
 t=. '/boolean/literal/integer/floating/complex/boxed/extended/rational'
 t=. t,'/sparse boolean/sparse literal/sparse integer/sparse floating'
 t=. t,'/sparse complex/sparse boxed/symbol/unicode'
 (n i. 3!:0 y) pick <;._1 t
)


  NB. examples of the data types
  [A =: 0 1 ; 0 1 2 ; (!24x) ; 1r2 ; 1.2 ; 1j2 ; (<'boxed') ; (s:'`symbol')  ; 'literal' ; (u: 16b263a)

┌───┬─────┬────────────────────────┬───┬───┬───┬───────┬───────┬───────┬───┐
│0 1│0 1 2│620448401733239439360000│1r2│1.2│1j2│┌─────┐│`symbol│literal│☺  │
│   │     │                        │   │   │   ││boxed││       │       │   │
│   │     │                        │   │   │   │└─────┘│       │       │   │
└───┴─────┴────────────────────────┴───┴───┴───┴───────┴───────┴───────┴───┘


  datatype&.>A

┌───────┬───────┬────────┬────────┬────────┬───────┬─────┬──────┬───────┬───────┐
│boolean│integer│extended│rational│floating│complex│boxed│symbol│literal│unicode│
└───────┴───────┴────────┴────────┴────────┴───────┴─────┴──────┴───────┴───────┘


  [I =: =i.4  NB. Boolean matrix

1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

  datatype I

boolean


  $. I  NB. sparse matrix

0 0 │ 1
1 1 │ 1
2 2 │ 1
3 3 │ 1


  datatype $. I

sparse boolean

  (+ $.)I  NB. hook adds data to sparse version of data resulting in sparse

0 0 │ 2
1 1 │ 2
2 2 │ 2
3 3 │ 2</lang>

J has verbs causing explicit conversion; some appear in the above examples. J's lexical notation also provides for us to directly specify the datatype, as demonstrated in the other samples. The Extended and Rational Arithmetic section of the J dictionary (DOJ) explains the fundamental implicit conversions. Before quoting that section here, please note that an array takes the homogeneous data type of its highest atomic data type, as shown by the 0 1 2 integer vector above: implicit conversion without using the primitive verbs.

Various primitive verbs produce (exact) rational results if the argument(s) are rational; non-rational verbs produce (inexact) floating point or complex results when applied to rationals, if the verb only has a limited number of rational arguments that produce rational results. (For example, %:y is rational if the atoms of y are perfect squares; ^0r1 is floating point.) The quotient of two extended integers is an extended integer (if evenly divisible) or rational (if not). Comparisons involving two rationals are exact. Dyadic verbs (e.g. + - * % , = <) that require argument type conversions do so according to the following table:

     |  B  I  X  Q  D  Z
  ---+------------------
  B  |  B  I  X  Q  D  Z     B - bit
  I  |  I  I  X  Q  D  Z     I - integer
  X  |  X  X  X  Q  D  Z     X - extended integer
  Q  |  Q  Q  Q  Q  D  Z     Q - rational
  D  |  D  D  D  D  D  Z     D - floating point
  Z  |  Z  Z  Z  Z  Z  Z     Z - complex
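The table reads as a least upper bound over a linear widening order on the six types. As an illustrative sketch (Python, not J; the function name is made up), the result type of a dyadic verb can be computed as:

```python
# The J numeric types from the table, ordered from narrowest to widest:
# bit < integer < extended integer < rational < floating point < complex.
ORDER = ['B', 'I', 'X', 'Q', 'D', 'Z']

def result_type(left, right):
    """Result type of a dyadic verb per the table: the wider of the two."""
    return max(left, right, key=ORDER.index)

print(result_type('B', 'Q'))  # row B, column Q -> Q
print(result_type('D', 'X'))  # row D, column X -> D
```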

Java

See Language Specification Chapter 5. Conversions and Promotions

jq

jq variables are simply untyped references to JSON values, and so there are no implicit conversions on assignment.

Some builtin operators are polymorphic, e.g. 1 (of type "number") and null (of type "null") can be added (for any finite number, x, "x + null" yields x); in effect, null is in such cases implicitly converted to 0.

Currently jq uses IEEE 754 64-bit numbers, which means that some conversions between integers and floats take place implicitly, but jq's only builtin numeric type is "number", so these are behind-the-scenes type conversions.

However, if one were to define "is_integer" as follows: <lang jq>def is_integer: type == "number" and . == floor;</lang>

then one would find: <lang jq>(1/3) | is_integer # yields false

(1/3 + 1/3 + 1/3) | is_integer # yields true</lang>

For reference, jq's builtin types are "number", "boolean", "null", "object" and "array".

Lua

For the most part, Lua is strongly typed, but there are a few cases where it will coerce if the result would be of a predictable type. Coercions are never performed during comparisons or while indexing an object. <lang lua>-- During concatenation, numbers are always converted to strings. Arithmetic
-- operations will attempt to coerce strings to numbers, or throw an error if
-- they can't.
type(123 .. "123") --> string
type(123 + "123") --> number
type(123 + "foo") --> error thrown

-- Because Lua supports multiple returns, there is a concept of "no" value when
-- a function does not return anything, or does not return enough. If Lua is
-- expecting a value, it will coerce these "nothing" values into nil. The same
-- applies for lists of values in general.
function noop () end
local a = noop()
print(a) --> nil
local x, y, z = noop()
print(x, y, z) --> nil nil nil

-- As in many languages, all types can be automatically coerced into their
-- boolean value if required. Only nil and false will coerce to false.
print(not not nil, not not false, not not 1, not not "foo", not not { }) --> false false true true true</lang>

The only two explicit conversion functions offered by Lua are tonumber and tostring. Only the latter has a corresponding metamethod, so the former is usually only ever useful for converting strings, although in LuaJIT tonumber is used for converting numerical cdata into Lua numbers.

Oforth

Oforth allows implicit conversions only on: ==, <=, +, -, *, /, rem, pow

Classes have a priority; most classes have 0 as their priority. Among the basic classes:

  Integer priority is 2
  Float   priority is 40
  String  priority is 1024
  List    priority is 2048

A new class is created with 0 priority unless explicitly provided.

When, for instance, + is called, it checks priorities and converts the object with the smaller priority.

Conversion uses a convertor: a method named "asClass", where Class is the class of the object with the higher priority. Conversion is not quite the right word here, as all these objects are immutable: new objects are created.

For instance, adding an Integer and a Float will convert the integer into a float using asFloat method.
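As an illustrative sketch of this mechanism (in Python, with hypothetical names; only the priorities listed above come from the description): before applying +, convert whichever operand has the smaller priority, using a convertor named after the other operand's class.

```python
PRIORITY = {'Integer': 2, 'Float': 40, 'String': 1024, 'List': 2048}

# Hypothetical convertors, keyed by (from-class, to-class),
# standing in for Oforth's asFloat, asString, ... methods.
CONVERTORS = {
    ('Integer', 'Float'): float,
    ('Integer', 'String'): str,
    ('Float', 'String'): str,
}

def classify(x):
    return {int: 'Integer', float: 'Float', str: 'String'}[type(x)]

def coerced_add(a, b):
    """Convert the operand with the smaller priority, then apply +."""
    ca, cb = classify(a), classify(b)
    if PRIORITY[ca] < PRIORITY[cb]:
        a = CONVERTORS[(ca, cb)](a)
    elif PRIORITY[cb] < PRIORITY[ca]:
        b = CONVERTORS[(cb, ca)](b)
    return a + b

print(coerced_add(2, 3.5))   # Integer converted to Float  -> 5.5
print(coerced_add(2, "3"))   # Integer converted to String -> 23
```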

Let's create a Complex class with 100 as priority (please note the asComplex methods that will be used for conversions):

<lang Oforth>100 Number Class newPriority: Complex(re, im)

Complex method: re  { @re }
Complex method: im  { @im }

Complex method: initialize { := re := im }
Complex method: <<  { '(' <<c @re << ',' <<c @im << ')' <<c }

Integer method: asComplex { Complex new(self, 0) }
Float method: asComplex   { Complex new(self, 0) }

Complex new(0, 1) Constant new: I

Complex method: ==(c) { c re @re == c im @im == and }
Complex method: norm  { @re sq @im sq + sqrt }
Complex method: conj  { Complex new(@re, @im neg) }
Complex method: +(c)  { Complex new(c re @re +, c im @im +) }
Complex method: -(c)  { Complex new(c re @re -, c im @im -) }
Complex method: *(c)  { Complex new(c re @re * c im @im * -, c re @im * @re c im * + ) }
Complex method: inv   { | n |
   @re sq @im sq + asFloat ->n
   Complex new(@re n /, @im neg n / )
}
Complex method: /(c)  { c self inv * }</lang>

Usage :

<lang Oforth>2 3.2 I * + println
Complex new(2, 3) 1.2 + println
Complex new(2, 3) 1.2 * println
2 Complex new(2, 3) / println</lang>

Output:
(2,3.2)
(3.2,3)
(2.4,3.6)
(0.307692307692308,-0.461538461538462)

PARI/GP

PARI has access to all the implicit type conversions of C. In addition, certain objects are automatically simplified when stored in history objects (in addition to explicit conversions of various types). So a complex number with imaginary part an exact 0 is simplified to a t_REAL, t_INT, etc.

There are no user-defined types and hence no implicit conversion on them.

Perl 6

Perl 6 was designed with a specific goal of maximizing the principle of DWIM (Do What I Mean) while simultaneously minimizing the principle of DDWIDM (Don't Do What I Don't Mean). Implicit type conversion is a natural and basic feature.

Variable names in Perl 6 are prepended with a sigil. The most basic variable container type is a scalar, with the sigil dollar sign: $x. A single scalar variable in list context will be converted to a list of one element regardless of the variable's structure. (A scalar variable may be bound to any object, including a collective object. A scalar variable is always treated as a singular item, regardless of whether the object is essentially composite or unitary. There is no implicit conversion from singular to plural; a plural object within a singular container must be explicitly decontainerized somehow. Use of a subscript is considered sufficiently explicit though.)

The type of object contained in a scalar depends on how you assign it and how you use it.

<lang perl6>my $x;
$x = 1234;      say $x.WHAT; # (Int) Integer
$x = 12.34;     say $x.WHAT; # (Rat) Rational
$x = 1234e-2;   say $x.WHAT; # (Num) Floating point Number
$x = 1234+i;    say $x.WHAT; # (Complex)
$x = '1234';    say $x.WHAT; # (Str) String
$x = (1,2,3,4); say $x.WHAT; # (List)
$x = [1,2,3,4]; say $x.WHAT; # (Array)
$x = 1 .. 4;    say $x.WHAT; # (Range)
$x = (1 => 2);  say $x.WHAT; # (Pair)
$x = {1 => 2};  say $x.WHAT; # (Hash)
$x = {1, 2};    say $x.WHAT; # (Block)
$x = sub {1};   say $x.WHAT; # (Sub) Code Reference
$x = True;      say $x.WHAT; # (Bool) Boolean</lang>


Objects may be converted between various types many times during an operation. Consider the following line of code.

<lang perl6>say :16(([+] 1234.ords).sqrt.floor ~ "beef");</lang>

In English: Take the floor of the square root of the sum of the ordinals of the digits of the integer 1234, concatenate that number with the string 'beef', interpret the result as a hexadecimal number and print it.

Broken down step by step:

<lang perl6>my $x = 1234;                                  say $x, ' ', $x.WHAT; # 1234 (Int)
$x = 1234.ords;                                say $x, ' ', $x.WHAT; # 49 50 51 52 (List)
$x = [+] 1234.ords;                            say $x, ' ', $x.WHAT; # 202 (Int)
$x = ([+] 1234.ords).sqrt;                     say $x, ' ', $x.WHAT; # 14.2126704035519 (Num)
$x = ([+] 1234.ords).sqrt.floor;               say $x, ' ', $x.WHAT; # 14 (Int)
$x = ([+] 1234.ords).sqrt.floor ~ "beef";      say $x, ' ', $x.WHAT; # 14beef (Str)
$x = :16(([+] 1234.ords).sqrt.floor ~ "beef"); say $x, ' ', $x.WHAT; # 1359599 (Int)</lang>


Some types are not implicitly converted. For instance, you must explicitly request and cast to Complex numbers and FatRat numbers. (A normal Rat number has a denominator that is limited to 64 bits, with underflow to floating point to prevent performance degradation; a FatRat, in contrast, has an unlimited denominator size, and can chew up all your memory if you're not careful.)

<lang perl6>$x = (-1).sqrt;         say $x, ' ', $x.WHAT; # NaN (Num)
$x = (-1).Complex.sqrt; say $x, ' ', $x.WHAT; # 6.12323399573677e-17+1i (Complex)

$x = (22/7) * 2;        say $x, ' ', $x.WHAT; # 6.285714 (Rat)
$x /= 10**10;           say $x, ' ', $x.WHAT; # 0.000000000629 (Rat)
$x /= 10**10;           say $x, ' ', $x.WHAT; # 6.28571428571429e-20 (Num)

$x = (22/7).FatRat * 2; say $x, ' ', $x.WHAT; # 6.285714 (FatRat)
$x /= 10**10;           say $x, ' ', $x.WHAT; # 0.000000000629 (FatRat)
$x /= 10**10;           say $x, ' ', $x.WHAT; # 0.0000000000000000000629 (FatRat)</lang>

User-defined types will support implicit casting if the object has a Bridge method that tells it how to do so, or if the operators in question supply multiple dispatch variants that allow for coercions. Note that Perl 6 does not support implicit assignment coercion to typed variables. However, different-sized storage types (int16, int32, int64, for example) are not considered different types, and such assignment merely enforces a constraint that will throw an exception if the size is exceeded. (The calculations on the right side of the assignment are done in an arbitrarily large type such as Int.)

Types may be explicitly cast by using a bridge method (.Int, .Rat, .Str, whatever) or by using a coercion operator:

    + or -      numify
    ~           stringify
    ? or !      boolify
    i (postfix) complexify
    $()         singularize
    @()         pluralize
    %()         hashify

Python

Python does do some automatic conversions between different types but is still considered a strongly typed language. Allowed automatic conversions include those between numeric types (where it makes sense), and the general rule that empty container types as well as zero are considered False in a boolean context. <lang python>from fractions import Fraction
from decimal import Decimal, getcontext
from itertools import product

getcontext().prec = 60

casting_functions = [int, float, complex, # Numbers

                    Fraction, Decimal,     # Numbers
                    hex, oct, bin,         # Int representations - not strictly types
                    bool,                  # Boolean/integer Number
                    iter,                  # Iterator type
                    list, tuple, range,    # Sequence types
                    str, bytes,            # Strings, byte strings
                    bytearray,             # Mutable bytes
                    set, frozenset,        # Set, hashable set
                    dict,                  # hash mapping dictionary
                   ]

examples_of_types = [0, 42,
                     0.0, -0.0, 12.34, 56.0, 
                    (0+0j), (1+2j), (1+0j), (78.9+0j), (0+1.2j),
                    Fraction(0, 1), Fraction(22, 7), Fraction(4, 2), 
                    Decimal('0'),
                    Decimal('3.14159265358979323846264338327950288419716939937510'),
                    Decimal('1'), Decimal('1.5'),
                    True, False,
                    iter(()), iter([1, 2, 3]), iter({'A', 'B', 'C'}), 
                    iter([[1, 2], [3, 4]]), iter((('a', 1), (2, 'b'))),
                    [], [1, 2], [[1, 2], [3, 4]],
                    (), (1, 'two', (3+0j)), (('a', 1), (2, 'b')),
                    range(0), range(3),
                    "", "A", "ABBA", "Milü",
                    b"", b"A", b"ABBA",
                    bytearray(b""), bytearray(b"A"), bytearray(b"ABBA"),
                    set(), {1, 'two', (3+0j), (4, 5, 6)},
                    frozenset(), frozenset({1, 'two', (3+0j), (4, 5, 6)}),
                    {}, {1: 'one', 'two': (2+3j), ('RC', 3): None} 
                   ]

if __name__ == '__main__':
    print('Common Python types/type casting functions:')
    print('  ' + '\n  '.join(f.__name__ for f in casting_functions))
    print('\nExamples of those types:')
    print('  ' + '\n  '.join('%-26s %r' % (type(e), e) for e in examples_of_types))
    print('\nCasts of the examples:')
    for f, e in product(casting_functions, examples_of_types):
        try:
            ans = f(e)
        except BaseException:
            ans = 'EXCEPTION RAISED!'
        print('%-60s -> %r' % ('%s(%r)' % (f.__name__, e), ans))</lang>
Output:

(Elided due to size)

Common Python types/type casting functions:
  int
  float
  complex
  Fraction
  Decimal
  hex
  oct
  bin
  bool
  iter
  list
  tuple
  range
  str
  bytes
  bytearray
  set
  frozenset
  dict

Examples of those types:
  <class 'int'>              0
  <class 'int'>              42
  <class 'float'>            0.0
  <class 'float'>            12.34
  <class 'float'>            56.0
  <class 'complex'>          0j
  <class 'complex'>          (1+2j)
  <class 'complex'>          (1+0j)
  <class 'complex'>          (78.9+0j)
  <class 'complex'>          1.2j
  <class 'fractions.Fraction'> Fraction(0, 1)
  <class 'fractions.Fraction'> Fraction(22, 7)
  <class 'fractions.Fraction'> Fraction(2, 1)
  <class 'decimal.Decimal'>  Decimal('0')
  <class 'decimal.Decimal'>  Decimal('3.14159265358979323846264338327950288419716939937510')
  <class 'decimal.Decimal'>  Decimal('1')
  <class 'decimal.Decimal'>  Decimal('1.5')
  <class 'bool'>             True
  <class 'bool'>             False
  <class 'tuple_iterator'>   <tuple_iterator object at 0x00000085D128E438>
  <class 'list_iterator'>    <list_iterator object at 0x00000085D128E550>
  <class 'set_iterator'>     <set_iterator object at 0x00000085D127EAF8>
  <class 'list_iterator'>    <list_iterator object at 0x00000085D128E668>
  <class 'tuple_iterator'>   <tuple_iterator object at 0x00000085D128E5C0>
  <class 'list'>             []
  <class 'list'>             [1, 2]
  <class 'list'>             [[1, 2], [3, 4]]
  <class 'tuple'>            ()
  <class 'tuple'>            (1, 'two', (3+0j))
  <class 'tuple'>            (('a', 1), (2, 'b'))
  <class 'range'>            range(0, 0)
  <class 'range'>            range(0, 3)
  <class 'str'>              ''
  <class 'str'>              'A'
  <class 'str'>              'ABBA'
  <class 'str'>              'Milü'
  <class 'bytes'>            b''
  <class 'bytes'>            b'A'
  <class 'bytes'>            b'ABBA'
  <class 'bytearray'>        bytearray(b'')
  <class 'bytearray'>        bytearray(b'A')
  <class 'bytearray'>        bytearray(b'ABBA')
  <class 'set'>              set()
  <class 'set'>              {1, 'two', (3+0j), (4, 5, 6)}
  <class 'frozenset'>        frozenset()
  <class 'frozenset'>        frozenset({1, 'two', (3+0j), (4, 5, 6)})
  <class 'dict'>             {}
  <class 'dict'>             {1: 'one', 'two': (2+3j), ('RC', 3): None}

Casts of the examples:
int(0)                                                       -> 0
int(42)                                                      -> 42
int(0.0)                                                     -> 0
int(12.34)                                                   -> 12
int(56.0)                                                    -> 56
int(0j)                                                      -> 'EXCEPTION RAISED!'
int((1+2j))                                                  -> 'EXCEPTION RAISED!'
int((1+0j))                                                  -> 'EXCEPTION RAISED!'
int((78.9+0j))                                               -> 'EXCEPTION RAISED!'
int(1.2j)                                                    -> 'EXCEPTION RAISED!'
int(Fraction(0, 1))                                          -> 0
int(Fraction(22, 7))                                         -> 3
int(Fraction(2, 1))                                          -> 2
int(Decimal('0'))                                            -> 0
int(Decimal('3.14159265358979323846264338327950288419716939937510')) -> 3
int(Decimal('1'))                                            -> 1
int(Decimal('1.5'))                                          -> 1
int(True)                                                    -> 1
int(False)                                                   -> 0
int(<tuple_iterator object at 0x00000085D128E438>)           -> 'EXCEPTION RAISED!'
int(<list_iterator object at 0x00000085D128E550>)            -> 'EXCEPTION RAISED!'
int(<set_iterator object at 0x00000085D127EAF8>)             -> 'EXCEPTION RAISED!'
int(<list_iterator object at 0x00000085D128E668>)            -> 'EXCEPTION RAISED!'
int(<tuple_iterator object at 0x00000085D128E5C0>)           -> 'EXCEPTION RAISED!'
int([])                                                      -> 'EXCEPTION RAISED!'

...

dict((('a', 1), (2, 'b')))                                   -> {'a': 1, 2: 'b'}
dict(range(0, 0))                                            -> {}
dict(range(0, 3))                                            -> 'EXCEPTION RAISED!'
dict('')                                                     -> {}
dict('A')                                                    -> 'EXCEPTION RAISED!'
dict('ABBA')                                                 -> 'EXCEPTION RAISED!'
dict('Milü')                                                 -> 'EXCEPTION RAISED!'
dict(b'')                                                    -> {}
dict(b'A')                                                   -> 'EXCEPTION RAISED!'
dict(b'ABBA')                                                -> 'EXCEPTION RAISED!'
dict(bytearray(b''))                                         -> {}
dict(bytearray(b'A'))                                        -> 'EXCEPTION RAISED!'
dict(bytearray(b'ABBA'))                                     -> 'EXCEPTION RAISED!'
dict(set())                                                  -> {}
dict({1, 'two', (3+0j), (4, 5, 6)})                          -> 'EXCEPTION RAISED!'
dict(frozenset())                                            -> {}
dict(frozenset({1, 'two', (3+0j), (4, 5, 6)}))               -> 'EXCEPTION RAISED!'
dict({})                                                     -> {}
dict({1: 'one', 'two': (2+3j), ('RC', 3): None})             -> {1: 'one', 'two': (2+3j), ('RC', 3): None}
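The program above exercises ''explicit'' casts; the few ''implicit'' conversions Python performs (numeric promotion and boolean-context coercion, as described in the introduction) can also be demonstrated directly. This is a minimal sketch using only the standard library:

```python
from fractions import Fraction
from decimal import Decimal

# Mixed-type arithmetic promotes "upward" in the numeric tower:
# bool -> int -> Fraction -> float -> complex (Decimal only mixes with int/bool).
assert True + 2 == 3                             # bool is a subclass of int
assert isinstance(1 + 0.5, float)                # int + float   -> float
assert isinstance(1 + 2j, complex)               # int + complex -> complex
assert Fraction(1, 3) + 1 == Fraction(4, 3)      # int + Fraction -> Fraction
assert isinstance(Fraction(1, 2) + 0.5, float)   # Fraction + float -> float
assert Decimal('1.5') + 1 == Decimal('2.5')      # Decimal mixes with int

# In a boolean context, zero and empty containers are implicitly False:
falsy = [0, 0.0, 0j, Fraction(0), Decimal('0'), '', [], (), {}, set(), range(0)]
assert not any(falsy)
assert all([1, 'x', [None], {0: 0}])             # non-zero/non-empty are True
```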

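On the task's question about user defined conversion definitions: Python lets a class opt in to the standard coercion protocols by defining special methods such as <code>__bool__</code>, <code>__index__</code> and <code>__float__</code>; the interpreter then invokes these implicitly in boolean contexts, slicing, <code>range()</code>, and so on. The <code>Meters</code> class below is purely illustrative:

```python
class Meters:
    """Illustrative class opting in to Python's coercion protocols."""
    def __init__(self, value):
        self.value = value
    def __bool__(self):      # invoked implicitly by if/while/and/or/not
        return self.value != 0
    def __index__(self):     # invoked implicitly by slicing, range(), hex(), ...
        return int(self.value)
    def __float__(self):     # invoked by float()
        return float(self.value)

m = Meters(3)
assert bool(Meters(0)) is False     # implicit in boolean contexts
assert list(range(m)) == [0, 1, 2]  # range() coerces via __index__
assert 'abcde'[:m] == 'abc'         # slicing coerces via __index__
assert float(m) == 3.0
```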
Racket

The only automatic conversions are within the numeric tower. The common case arises in operations like +, -, *, and /, when one argument is of a different numeric type than the other. For example, in each of the following cases the fixnum 1 is added to a more general kind of number. <lang Racket>#lang racket

(+ 1 .1)           ; ==> 1.1
(+ 1 0+1i)         ; ==> 1+1i
(+ 1 1/2)          ; ==> 3/2
(+ 1 (expt 10 30)) ; ==> 1000000000000000000000000000001</lang>

REXX

╔═══════════════════════════════════════════════════════════════════════════════════╗
║ The REXX language has conversion, if  normalization  can be regarded as a type of ║
║ conversion.  Normalization can remove all blanks from (numeric) literals, leading ║
║ plus (+) signs,  the decimal point  (if it's not significant), and leading and/or ║
║ trailing zeroes  (except for zero itself), remove insignificant leading zeroes in ║
║ the exponent,  add a plus sign (+) for any positive exponent, and will capitalize ║
║ the    "e"    in an exponentiated number.                                         ║
║                                                                                   ║
║ Almost all numerical expressions can be normalized after computation, shown below ║
║ are a few examples.   Other expressions with non-numeric values are treated as    ║
║ simple literals.                                                                  ║
║                                                                                   ║
║ Note that REXX can store the number with leading signs,  leading,  trailing,  and ║
║ sometimes imbedded blanks  (which can only occur after a leading sign).           ║
║                                                                                   ║
║ Also noted is how numbers can be assigned using quotes ["]  or  apostrophes ['].  ║
╚═══════════════════════════════════════════════════════════════════════════════════╝

<lang rexx>/*REXX program demonstrates various ways REXX can convert and/or normalize some numbers.*/
digs=digits()                    ;  say digs     /*        9,  the default.  */
a=.1.2...$                       ;  say a        /* .1.2...$                 */
a=+7                             ;  say a        /*        7                 */
a='+66'                          ;  say a        /*      +66                 */
a='- 66.'                        ;  say a        /*    - 66.                 */
a=- 66                           ;  say a        /*      -66                 */
a=- 66.                          ;  say a        /*      -66                 */
a=+ 66                           ;  say a        /*       66                 */
a=1      ; b=2.000 ; x=a+b       ;  say x        /*    3.000                 */
a=1      ; b=2.000 ; x=(a+b)/1   ;  say x        /*        3                 */
a=+2     ; b=+3    ; x=a+b       ;  say x        /*        5                 */
a=+5     ; b=+3e1  ; x=a+b       ;  say x        /*       35                 */
a=1e3                            ;  say a        /*      1E3                 */
a="1e+003"                       ;  say a        /*   1e+003                 */
a=1e+003                         ;  say a        /*   1E+003                 */
a=1e+003 ; b=0     ; x=a+b       ;  say x        /*     1000                 */
a=12345678912                    ;  say a        /* 12345678912              */
a=12345678912 ; b=0 ; x=a+b      ;  say x        /* 1.23456789E+10           */</lang>
output

9
.1.2...$
7
+66
- 66.
-66
-66
66
3.000
3
5
35
1E3
1e+003
1E+003
1000
12345678912
1.23456789E+10

Tcl

Virtually all type conversions in Tcl are implicit. A value is an integer (or a string, or a list, or …) because that is how you are using it. The only true explicit type conversion operations are some of the functions in the expression sub-language (int(), double(), etc.).

Integer conversion
<lang tcl>set value "123"
incr someVar $value
# $value will now hold an integer (strictly, one of many integer-related types) with value 123</lang>
Float conversion
<lang tcl>set value "1.23"
expr {$value + 3.5}
# $value will now hold a double-precision IEEE floating point number that is (approx.) 1.23</lang>
String conversion
<lang tcl>set value [expr {123 + 456}]
string length $value
# $value will now hold a string (of length 3)</lang>
List conversion
<lang tcl>set value {a b c d}
llength $value
# $value will now hold a list (of length 4)</lang>
Dictionary conversion
<lang tcl>set value {a b c d}
dict size $value
# $value will now hold a dictionary (of size 2)</lang>

There are many other value types (command names, variable names, subcommand index names, etc.) but user code would not normally seek to explicitly convert to those.

Defining a new type requires writing an extension to Tcl in C (or whatever the host programming language is, so Java for JTcl); the interfaces for doing this are not directly exposed to the Tcl script level because they require direct memory access, which Tcl normally does not permit in order to promote overall process stability.

zkl

Type conversions usually just happen (i.e. the object knows what it wants and attempts to convert), but sometimes the conversion needs to be explicit (i.e. the conversion is ambiguous, or the object doesn't know about the other type or is too lazy to convert). <lang zkl>zkl: 1+"2"
3
zkl: "1"+2
12
zkl: 1/2
0
zkl: (1).toFloat()/2
0.5
zkl: T("one",1,"two",2).toDictionary()
D(two:2,one:1)
zkl: T("one",1,"two",2).toDictionary().toList()
L(L("two",2),L("one",1))
zkl: T("one",1,"two",2).toDictionary().toList().toDictionary()
D(two:2,one:1)
etc</lang>