Integer overflow

You are encouraged to solve this task according to the task description, using any language you may know.

Some languages support one or more integer types of the underlying processor.

These integer types have a fixed size;   usually   8-bit,   16-bit,   32-bit,   or   64-bit.
The integers supported by such a type can be   signed   or   unsigned.

Arithmetic for machine level integers can often be done by single CPU instructions.
This allows high performance and is the main reason to support machine level integers.


Definition

An integer overflow happens when the result of a computation does not fit into the fixed size integer. The result can be too small or too big to be representable in the fixed size integer.


Task

When a language has fixed size integer types, create a program that does arithmetic computations for the fixed size integers of the language.

These computations must be done such that the result would overflow.

The program should demonstrate what the following expressions do.


For 32-bit signed integers:

Expression                    Result that does not fit into a 32-bit signed integer
-(-2147483647-1)              2147483648
2000000000 + 2000000000       4000000000
-2147483647 - 2147483647      -4294967294
46341 * 46341                 2147488281
(-2147483647-1) / -1          2147483648

For 64-bit signed integers:

Expression                                       Result that does not fit into a 64-bit signed integer
-(-9223372036854775807-1)                        9223372036854775808
5000000000000000000 + 5000000000000000000        10000000000000000000
-9223372036854775807 - 9223372036854775807       -18446744073709551614
3037000500 * 3037000500                          9223372037000250000
(-9223372036854775807-1) / -1                    9223372036854775808

For 32-bit unsigned integers:

Expression                    Result that does not fit into a 32-bit unsigned integer
-4294967295                   -4294967295
3000000000 + 3000000000       6000000000
2147483647 - 4294967295       -2147483648
65537 * 65537                 4295098369

For 64-bit unsigned integers:

Expression                                       Result that does not fit into a 64-bit unsigned integer
-18446744073709551615                            -18446744073709551615
10000000000000000000 + 10000000000000000000      20000000000000000000
9223372036854775807 - 18446744073709551615       -9223372036854775808
4294967296 * 4294967296                          18446744073709551616


Notes
  •   When the integer overflow does trigger an exception, show how the exception is caught.
  •   When the integer overflow produces some value,   print it.
  •   It should be explicitly noted when an integer overflow is not recognized and the program continues with wrong results.
  •   This should be done for signed and unsigned integers of various sizes supported by the computer programming language.
  •   When a language has no fixed size integer type,   or when no integer overflow can occur for other reasons,   this should be noted.
  •   It is okay to mention when a language supports unlimited precision integers, but this task is NOT the place to demonstrate the
      capabilities of unlimited precision integers.



360 Assembly

You can choose whether or not to handle binary integer overflow via the program mask bits of the PSW (Program Status Word). Bit 20 enables the fixed-point overflow interruption. Two non-privileged instructions (IPM, SPM) are available for retrieving and setting the program mask of the current PSW.
If you mask, you can test it in your program:

         L     2,=F'2147483647'   2**31-1
         L     3,=F'1'            1
         AR    2,3                add register3 to register2
         BO    OVERFLOW           branch on overflow
         ....
OVERFLOW EQU   *

On the other hand, the same program will end with the S0C8 system abend code (fixed-point overflow exception) if you unmask bit 20:

         IPM   1                  Insert Program Mask
         O     1,BITFPO           unmask Fixed Overflow
         SPM   1                  Set Program Mask
         ...
         DS    0F                 alignment
BITFPO   DC    BL1'00001000'      bit20=1    [start at 16]

6502 Assembly

8-Bit Overflow

Signed overflow (crossing the 7F-80 boundary) is detected by the CPU's overflow flag V.

Unsigned overflow (crossing the FF-00 boundary) is detected by the CPU's carry flag C.

The following instructions allow for branching based on the state of these flags:

  • BVS Branch if Overflow Set (signed overflow has occurred)
  • BVC Branch if Overflow Clear (signed overflow did not occur)
  • BCS Branch if Carry Set (unsigned overflow has occurred)
  • BCC Branch if Carry Clear (unsigned overflow did not occur)

These flags will automatically be set or cleared depending on the results of a calculation that can affect them.

LDA #$7F
CLC
ADC #$01
BVS ErrorHandler ;this branch will always be taken.
LDA #$FF
CLC
ADC #$01
BCS ErrorHandler ;this branch will always be taken.

Keep in mind that not all instructions affect the flags in the same way. The only arithmetic instructions that affect the overflow flag are ADC and SBC. Therefore, signed overflow can be "missed" by the CPU very easily if it occurs in other ways:

LDX #$7F
INX              ;although X went from $7F to $80, INX does not affect the overflow flag!
BVS ErrorHandler ;whether this branch is taken has NOTHING to do with the INX instruction.
LDA #%01000000
ORA #%10000000   ;accumulator crossed from below $7F to above $80, but ORA doesn't affect the overflow flag. 
BVS ErrorHandler ;whether this branch is taken has NOTHING to do with the ORA instruction.


The same is true for unsigned overflow, but less so since the zero flag can be used as a substitute in these cases.

LDX #$FF
INX                   ;the carry flag is not affected by this unsigned overflow, but the zero flag will be set 
                      ;    so we can detect overflow that way instead!
BEQ OverflowOccurred  ;notice that we used BEQ here and not BCS.

By default, the CPU will continue with the wrong result, unless you specifically program a branch based on overflow after the calculation. This is because on a hardware level the CPU has no knowledge of whether you intend your data to be signed or unsigned (this is still true even on modern computers).

16-Bit or Higher Overflow

Unlike in Z80 Assembly, the 6502 has no 16-bit registers or built-in 16-bit arithmetic instructions. It can perform 16-bit or higher addition and subtraction by separating the number into 8-bit pieces and operating on them separately. Unfortunately, this means that the 6502's flags cannot look at the number as a whole, only at the individual bytes. As a result, the CPU will signal "overflow" when any of the bytes crosses the $7F-$80 boundary, regardless of whether that byte is the most significant one. This is another reason why the ability to selectively ignore overflow is handy, as it only counts as signed overflow when the most significant byte crosses the $7F-$80 boundary.

;adding two 16-bit signed numbers, the first is stored at $10 and $11, the second at $12 and $13.
;The result will be stored at $14 and $15.

;add the low bytes

LDA $10              ;low byte of first operand
CLC
ADC $12              ;low byte of second operand
STA $14              ;low byte of sum

;add the high bytes

LDA $11              ;high byte of first operand
ADC $13              ;high byte of second operand
STA $15              ;high byte of result
BVS HandleOverflow   ;only check for overflow when adding the most significant bytes.

68000 Assembly

Overflow happens when certain arithmetic operations result in the most significant byte of the register crossing over from 0x7F to 0x80. (Which byte of the 32-bit register is treated as "most significant" depends on the data size of the last instruction. See the example below)

MOVE.W #$117F,D0
ADD.W #1,D0 ;DOESN'T SET THE OVERFLOW FLAG, SINCE AT WORD LENGTH WE DIDN'T CROSS FROM 7FFF TO 8000

SUB.B #1,D0 ;WILL SET THE OVERFLOW FLAG SINCE AT BYTE LENGTH WE CROSSED FROM 80 TO 7F

Like the 6502, the 68000 will continue with the wrong result unless you tell it to stop. As with the majority of computer architectures, whether a value is "signed" or "unsigned" is not actually a property of the value itself, but of the comparators used to evaluate it. Otherwise even unsigned arithmetic would produce overflow errors! There are a few options for handling overflow errors:

  • TRAPV will call an exception handler if the overflow flag is set, otherwise it will do nothing.
  • DBVS Dn will loop a section of code until the overflow flag is set or the chosen data register is decremented to 0xFFFF, whichever occurs first.
  • BVS branches if the overflow flag is set.

Ada

In Ada, both predefined and user-defined integer types are in a given range, between Type'First and Type'Last, inclusive. The range of predefined types is implementation specific. When the result of a computation is out of the type's range, the program does not continue with a wrong result, but instead raises an exception.

with Ada.Text_IO; use Ada.Text_IO;

procedure Overflow is
   
   generic 
      type T is Range <>;
      Name_Of_T: String;
   procedure Print_Bounds; -- first instantiate this with T, Name
                           -- then call the instantiation
   procedure Print_Bounds is
   begin
      Put_Line("   " & Name_Of_T & " " & T'Image(T'First) 
		 & " .." & T'Image(T'Last));
   end Print_Bounds;
   
   procedure P_Int  is new Print_Bounds(Integer,      "Integer ");
   procedure P_Nat  is new Print_Bounds(Natural,      "Natural ");
   procedure P_Pos  is new Print_Bounds(Positive,     "Positive");
   procedure P_Long is new Print_Bounds(Long_Integer, "Long    ");
   
   type Unsigned_Byte is range 0 .. 255;
   type Signed_Byte   is range -128 .. 127;
   type Unsigned_Word is range 0 .. 2**32-1;
   type Thousand is range 0 .. 999;
   type Signed_Double is range - 2**63 .. 2**63-1;
   type Crazy is range -11 .. -3;
   
   procedure P_UB is new Print_Bounds(Unsigned_Byte, "U 8  ");
   procedure P_SB is new Print_Bounds(Signed_Byte, "S 8  ");
   procedure P_UW is new Print_Bounds(Unsigned_Word, "U 32 ");
   procedure P_Th is new Print_Bounds(Thousand, "Thous");
   procedure P_SD is new Print_Bounds(Signed_Double, "S 64 ");
   procedure P_Cr is new Print_Bounds(Crazy, "Crazy");
   
   A: Crazy := Crazy'First;
   
begin
   Put_Line("Predefined Types:");
   P_Int; P_Nat; P_Pos; P_Long; 
   New_Line;
   
   Put_Line("Types defined by the user:");
   P_UB; P_SB; P_UW; P_Th; P_SD; P_Cr;
   New_Line;
   
   Put_Line("Forcing a variable of type Crazy to overflow:");
   loop -- endless loop
      Put("  " & Crazy'Image(A) &  "+1");
      A := A + 1; -- line 49 -- this will later raise a CONSTRAINT_ERROR
   end loop;
end Overflow;
Output:
Predefined Types:
   Integer  -2147483648 .. 2147483647
   Natural   0 .. 2147483647
   Positive  1 .. 2147483647
   Long     -9223372036854775808 .. 9223372036854775807

Types defined by the user:
   U 8    0 .. 255
   S 8   -128 .. 127
   U 32   0 .. 4294967295
   Thous  0 .. 999
   S 64  -9223372036854775808 .. 9223372036854775807
   Crazy -11 ..-3

Forcing a variable of type Crazy to overflow:
  -11+1  -10+1  -9+1  -8+1  -7+1  -6+1  -5+1  -4+1  -3+1

raised CONSTRAINT_ERROR : overflow.adb:49 range check failed

ALGOL 68

In this instance, one must distinguish between the language and a particular implementation of the language. The Algol 68 Genie manual describes its behaviour thusly:

As mentioned, the maximum integer which a68g can represent is max int and the maximum real is max real. Addition could give a sum which exceeds those two values, which is called overflow. Algol 68 leaves such case [sic] undefined, meaning that an implementation can choose what to do. a68g will give a runtime error in case of arithmetic overflow.

Other implementations are at liberty to take any action they wish, including continuing silently with a "wrong" result or throwing a catchable exception (though the latter would require at least one addition to the standard prelude so as to provide the handler routine(s)).

BEGIN
   print (max int);
   print (1+max int)
END
Output:
+2147483647
3        print (1+max int)
                 1        
a68g: runtime error: 1: INT math error (numerical result out of range) (detected in VOID closed-clause starting at "BEGIN" in line 1).

Note that, unlike many other languages, there is no presupposition that Algol 68 is running on a binary computer. The second example code below shows that for variables of mode long int arithmetic is fundamentally decimal in Algol 68 Genie.

BEGIN
   print (long max int);
   print (1+ long max int)
END
Output:
+99999999999999999999999999999999999
3        print (1+ long max int)
                 1              
a68g: runtime error: 1: LONG INT value out of bounds (numerical result out of range) (detected in VOID closed-clause starting at "BEGIN" in line 1).

Applesoft BASIC

The integer variable type is a signed 16-bit integer with a range from -32767 to 32767. When an integer variable is assigned a value less than -32767 or greater than 32767, an "?ILLEGAL QUANTITY ERROR" message is displayed and no change is made to the current value of the variable. All of the expressions for assigning the values use floating point.

A% = -(-32767-1)
Output:
?ILLEGAL QUANTITY ERROR
A% = 20000 + 20000
Output:
?ILLEGAL QUANTITY ERROR
A% = -32767 -32767
Output:
?ILLEGAL QUANTITY ERROR
A% = 182 * 182
Output:
?ILLEGAL QUANTITY ERROR

It is possible using a POKE statement to assign the value -32768 which would normally be out of range.

A% = -32767 : POKE PEEK(131) + PEEK(132) * 256, 0 : ? A%
Output:
-32768

Arturo

Arturo has unlimited-precision integers, without the possibility of an overflow, all with the same :integer type.

big32bit: 2147483646
big64bit: 9223372036854775808

print type big32bit
print type big64bit

print big32bit + 1
print big64bit + 1

print big32bit * 2
print big64bit * 2
Output:
:integer
:integer
2147483647
9223372036854775809
4294967292
18446744073709551616

AutoHotkey

Since AutoHotkey treats all integers as signed 64-bit, there is no point in demonstrating overflow with other integer types. An AutoHotkey program does not recognize a signed integer overflow and the program continues with wrong results.

Msgbox, % "Testing signed 64-bit integer overflow with AutoHotkey:`n" -(-9223372036854775807-1) "`n" 5000000000000000000+5000000000000000000 "`n" -9223372036854775807-9223372036854775807 "`n" 3037000500*3037000500 "`n" (-9223372036854775807-1)//-1
Output:
Testing signed 64-bit integer overflow with AutoHotkey:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808

This shows AutoHotkey does not handle integer overflow, and produces wrong results.

Axe

Axe supports 16-bit unsigned integers. It also supports 16-bit signed integers, but only for comparison. The task has been modified accordingly to accommodate this.

Overflow does not trigger an exception (because Axe does not support exceptions). After an overflow the program continues with wrong results (specifically, the value modulo 65536).

Disp -65535▶Dec,i
Disp 40000+40000▶Dec,i
Disp 32767-65535▶Dec,i
Disp 257*257▶Dec,i
Output:
    1
14464
32768
  513

Befunge

The Befunge-93 specification defines the range for stack cells as being the equivalent of a C signed long int on the same platform. However, in practice it will often depend on the underlying language of the interpreter, with Python-based implementations typically having an unlimited range, and JavaScript implementations using floating point.

For those with a finite integer range, though, the most common stack cell size is a 32 bit signed integer, which will usually just wrap when overflowing (as shown in the sample output below). That said, it's not uncommon for the last expression to produce some kind of runtime error or OS exception, frequently even crashing the interpreter itself.

"a9jc>"*:*+*+:0\- "(-",,:.048*"="99")1 -" >:#,_$v
v,,,9"="*84 .: ,,"+"*84 .: **:*" }}" ,+55 .-\0-1<
>:+. 55+, ::0\- :. 48*"-",, \:. 48*"="9,,, -. 55v
v.*: ,,,,,999"="*84 .: ,,"*"*84 .: *+8*7"s9"  ,+<
>55+, 0\- "(",:.048*"="99"1-/)1 -">:#,_$ 1-01-/.@
Output:
-(-2147483647 - 1)              = -2147483648
2000000000 + 2000000000         = -294967296
-2147483647 - 2147483647        = 2
46341 * 46341                   = -2147479015
(-2147483647 - 1)/-1            = -2147483648

Bracmat

Bracmat does arithmetic with arbitrary precision integer and rational numbers. No fixed size number types are supported.

C

C supports integer types of various sizes, with and without signedness. Unsigned integer arithmetic is defined to be modulo a power of two. Overflow in signed integer arithmetic is undefined behavior. A C program does not recognize a signed integer overflow and the program continues with wrong results.

#include <stdio.h>

int main (int argc, char *argv[])
{
  printf("Signed 32-bit:\n");
  printf("%d\n", -(-2147483647-1));
  printf("%d\n", 2000000000 + 2000000000);
  printf("%d\n", -2147483647 - 2147483647);
  printf("%d\n", 46341 * 46341);
  printf("%d\n", (-2147483647-1) / -1);
  printf("Signed 64-bit:\n");
  printf("%ld\n", -(-9223372036854775807-1));
  printf("%ld\n", 5000000000000000000+5000000000000000000);
  printf("%ld\n", -9223372036854775807 - 9223372036854775807);
  printf("%ld\n", 3037000500 * 3037000500);
  printf("%ld\n", (-9223372036854775807-1) / -1);
  printf("Unsigned 32-bit:\n");
  printf("%u\n", -4294967295U);
  printf("%u\n", 3000000000U + 3000000000U);
  printf("%u\n", 2147483647U - 4294967295U);
  printf("%u\n", 65537U * 65537U);
  printf("Unsigned 64-bit:\n");
  printf("%lu\n", -18446744073709551615LU);
  printf("%lu\n", 10000000000000000000LU + 10000000000000000000LU);
  printf("%lu\n", 9223372036854775807LU - 18446744073709551615LU);
  printf("%lu\n", 4294967296LU * 4294967296LU);
  return 0;
}
Output:
Signed 32-bit:
-2147483648
-294967296
2
-2147479015
-2147483648
Signed 64-bit:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808
Unsigned 32-bit:
1
1705032704
2147483648
131073
Unsigned 64-bit:
1
1553255926290448384
9223372036854775808
0

C#

C# has 2 modes for doing arithmetic: checked and unchecked.

Constant arithmetic (i.e. compile-time) is checked by default. Since all the examples use constant expressions, all these statements would result in compile-time errors. To change this behaviour, the statements can be wrapped inside a block marked with the 'unchecked' keyword.

Runtime arithmetic is unchecked by default. Values that overflow will simply 'wrap around' and the program will continue with wrong results. To make C# recognize overflow and throw an OverflowException, the statements can be wrapped inside a block marked with the 'checked' keyword.

The default behavior can be changed with a compiler flag.

using System;
    
public class IntegerOverflow
{
    public static void Main() {
        unchecked {
            Console.WriteLine("For 32-bit signed integers:");
            Console.WriteLine(-(-2147483647 - 1));
            Console.WriteLine(2000000000 + 2000000000);
            Console.WriteLine(-2147483647 - 2147483647);
            Console.WriteLine(46341 * 46341);
            Console.WriteLine((-2147483647 - 1) / -1);
            Console.WriteLine();
            
            Console.WriteLine("For 64-bit signed integers:");
            Console.WriteLine(-(-9223372036854775807L - 1));
            Console.WriteLine(5000000000000000000L + 5000000000000000000L);
            Console.WriteLine(-9223372036854775807L - 9223372036854775807L);
            Console.WriteLine(3037000500L * 3037000500L);
            Console.WriteLine((-9223372036854775807L - 1) / -1);
            Console.WriteLine();

            Console.WriteLine("For 32-bit unsigned integers:");
            //Negating a 32-bit unsigned integer will convert it to a signed 64-bit integer.
            Console.WriteLine(-4294967295U);
            Console.WriteLine(3000000000U + 3000000000U);
            Console.WriteLine(2147483647U - 4294967295U);
            Console.WriteLine(65537U * 65537U);
            Console.WriteLine();

            Console.WriteLine("For 64-bit unsigned integers:");
            // The - operator cannot be applied to 64-bit unsigned integers; it will always give a compile-time error.
            //Console.WriteLine(-18446744073709551615UL);
            Console.WriteLine(10000000000000000000UL + 10000000000000000000UL);
            Console.WriteLine(9223372036854775807UL - 18446744073709551615UL);
            Console.WriteLine(4294967296UL * 4294967296UL);
            Console.WriteLine();
        }
        
        int i = 2147483647;
        Console.WriteLine(i + 1);
        try {
            checked { Console.WriteLine(i + 1); }
        } catch (OverflowException) {
            Console.WriteLine("Overflow!");
        }
    }
    
}
Output:
For 32-bit signed integers:
-2147483648
-294967296
2
-2147479015
-2147483648

For 64-bit signed integers:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808

For 32-bit unsigned integers:
-4294967295
1705032704
2147483648
131073

For 64-bit unsigned integers:
1553255926290448384
9223372036854775808
0

-2147483648
Overflow!

C++

Same as C, except that if std::numeric_limits<IntegerType>::is_modulo is true, then the type IntegerType uses modulo arithmetic (the behavior is defined), even if it is a signed type. A C++ program does not recognize a signed integer overflow and the program continues with wrong results.

Works with: g++ version 4.7
#include <iostream>
#include <cstdint>
#include <limits>

int main (int argc, char *argv[])
{
  std::cout << std::boolalpha
  << std::numeric_limits<std::int32_t>::is_modulo << '\n'
  << std::numeric_limits<std::uint32_t>::is_modulo << '\n' // always true
  << std::numeric_limits<std::int64_t>::is_modulo << '\n'
  << std::numeric_limits<std::uint64_t>::is_modulo << '\n' // always true
  << "Signed 32-bit:\n"
    << -(-2147483647-1) << '\n'
    << 2000000000 + 2000000000 << '\n'
    << -2147483647 - 2147483647 << '\n'
    << 46341 * 46341 << '\n'
    << (-2147483647-1) / -1 << '\n'
  << "Signed 64-bit:\n"
    << -(-9223372036854775807-1) << '\n'
    << 5000000000000000000+5000000000000000000 << '\n'
    << -9223372036854775807 - 9223372036854775807 << '\n'
    << 3037000500 * 3037000500 << '\n'
    << (-9223372036854775807-1) / -1 << '\n'
  << "Unsigned 32-bit:\n"
    << -4294967295U << '\n'
    << 3000000000U + 3000000000U << '\n'
    << 2147483647U - 4294967295U << '\n'
    << 65537U * 65537U << '\n'
  << "Unsigned 64-bit:\n"
    << -18446744073709551615LU << '\n'
    << 10000000000000000000LU + 10000000000000000000LU << '\n'
    << 9223372036854775807LU - 18446744073709551615LU << '\n'
    << 4294967296LU * 4294967296LU << '\n';
  return 0;
}
Output:
true
true
true
true
Signed 32-bit:
-2147483648
-294967296
2
-2147479015
-2147483648
Signed 64-bit:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808
Unsigned 32-bit:
1
1705032704
2147483648
131073
Unsigned 64-bit:
1
1553255926290448384
9223372036854775808
0

Clojure

Clojure supports Java's primitive integers, int (32-bit signed) and long (64-bit signed). However, Clojure automatically promotes the smaller int to a 64-bit long internally, so no 32-bit integer overflow issues can occur. For more information, see the documentation.

By default, Clojure throws Exceptions on overflow conditions:

(* -1 (dec -9223372036854775807))
(+ 5000000000000000000 5000000000000000000)
(- -9223372036854775807 9223372036854775807)
(* 3037000500 3037000500)
Output:
for all of the above statements
ArithmeticException integer overflow  clojure.lang.Numbers.throwIntOverflow

If you want to silently overflow, you can set the special *unchecked-math* variable to true or use the special operations, unchecked-add, unchecked-multiply, etc..

(set! *unchecked-math* true)
(* -1 (dec -9223372036854775807)) ;=> -9223372036854775808
(+ 5000000000000000000 5000000000000000000) ;=> -8446744073709551616
(- -9223372036854775807 9223372036854775807) ;=> 2
(* 3037000500 3037000500) ;=> -9223372036709301616
; Note: The following division will currently silently overflow regardless of *unchecked-math*
; See: http://dev.clojure.org/jira/browse/CLJ-1253
(/ (dec -9223372036854775807) -1) ;=> -9223372036854775808

Clojure supports an arbitrary precision integer, BigInt and alternative math operators suffixed with an apostrophe: +', -', *', inc', and dec'. These operators auto-promote to BigInt upon overflow.

COBOL

COBOL uses decimal arithmetic, so the examples given in the specification are not directly relevant. This program declares a variable that can store three decimal digits, and attempts to assign a four-digit number to it. The result is that the number is truncated to fit, with only the three least significant digits actually being stored; and the program then proceeds. This behaviour may sometimes be what we want.

IDENTIFICATION DIVISION.
PROGRAM-ID. PROCRUSTES-PROGRAM.
DATA DIVISION.
WORKING-STORAGE SECTION.
01  WS-EXAMPLE.
    05 X            PIC  999.
PROCEDURE DIVISION.
    MOVE     1002   TO   X.
    DISPLAY  X      UPON CONSOLE.
    STOP RUN.
Output:
002


Update:
COBOL is by specification designed to be safe for use in financial situations. All standard native types are fixed size.

  • BINARY-CHAR [SIGNED/UNSIGNED], is 8 bits, always
  • BINARY-SHORT [SIGNED/UNSIGNED], is fixed at 16 bits, always
  • BINARY-LONG [SIGNED/UNSIGNED], is fixed at 32 bits, by spec
  • BINARY-DOUBLE [SIGNED/UNSIGNED] is 64 bits, always
  • PICTURE data is sized according to picture, a 9 means decimal display storage, holding one digit within the grouping.

All data types are fully specified.

All COBOL basic arithmetic operations support ON SIZE ERROR and NOT ON SIZE ERROR clauses, which trap any attempt to store invalid data, for both native and PICTURE types. Programmers are free to ignore these features, but such willful ignorance is unlikely in production systems, especially in programs destined for use in banking, government, or other industries where correctness of results is paramount.

A small example:

       identification division.
       program-id. overflowing.

       data division.
       working-storage section.
       01 bit8-sized       usage binary-char.          *> standard
       01 bit16-sized      usage binary-short.         *> standard
       01 bit32-sized      usage binary-long.          *> standard
       01 bit64-sized      usage binary-double.        *> standard
       01 bit8-unsigned    usage binary-char unsigned. *> standard

       01 nebulous-size    usage binary-c-long.        *> extension

       01 picture-size     picture s999.               *> standard

      *> ***************************************************************
       procedure division.

      *> 32 bit signed integer
       subtract 2147483647 from zero giving bit32-sized
       display bit32-sized

       subtract 1 from bit32-sized giving bit32-sized
           ON SIZE ERROR display "32bit signed SIZE ERROR"
       end-subtract
      *> value was unchanged due to size error trap and trigger
       display bit32-sized
       display space

      *> 8 bit unsigned, size tested, invalid results discarded
       add -257 to zero giving bit8-unsigned
           ON SIZE ERROR display "bit8-unsigned SIZE ERROR"
       end-add
       display bit8-unsigned

      *> programmers can ignore the safety features
       compute bit8-unsigned = -257
       display "you asked for it: " bit8-unsigned
       display space

      *> fixed size
       move 999 to picture-size
       add 1 to picture-size
           ON SIZE ERROR display "picture-sized SIZE ERROR"
       end-add
       display picture-size

      *> programmers doing the following, inadvertently,
      *>   do not stay employed at banks for long
       move 999 to picture-size
       add 1 to picture-size
      *> intermediate goes to 1000, left end truncated on storage
       display "you asked for it: " picture-size

       add 1 to picture-size
       display "really? you want to keep doing this?: " picture-size
       display space

      *> C values are undefined by spec, only minimums given
       display "How many bytes in a C long? "
               length of nebulous-size
               ", varies by platform"
       display "Regardless, ON SIZE ERROR will catch any invalid result"

      *> on a 64bit machine, C long of 8 bytes
       add 1 to h'ffffffffffffffff' giving nebulous-size
           ON SIZE ERROR display "binary-c-long SIZE ERROR"
       end-add
       display nebulous-size
      *> value will still be in initial state, GnuCOBOL initializes to 0
      *> value now goes to 1, no size error, that ship has sailed
       add 1 to nebulous-size
           ON SIZE ERROR display "binary-c-long size error"
       end-add
       display "error state is not persistent: ", nebulous-size

       goback.
       end program overflowing.
Output:
prompt$ cobc -xj overflowing.cob 
-2147483647
32bit signed SIZE ERROR
-2147483647
 
bit8-unsigned SIZE ERROR
000
you asked for it: 001
 
picture-sized SIZE ERROR
+999
you asked for it: +000
really? you want to keep doing this?: +001
 
How many bytes in a C long? 8, varies by platform
Regardless, ON SIZE ERROR will catch any invalid result
binary-c-long SIZE ERROR
+00000000000000000000
error state is not persistent: +00000000000000000001

Computer/zero Assembly

Arithmetic is performed modulo 256; overflow is not detected. This fragment:

        LDA  ff
        ADD  one

...

ff:          255
one:         1

causes the accumulator to adopt the value 0. With a little care, the programmer can exploit this behaviour by treating each eight-bit word as either an unsigned byte or a signed byte using two's complement (although the instruction set does not provide explicit support for negative numbers). On the two's complement interpretation, the code given above would express the computation "–1 + 1 = 0".

D

In D both signed and unsigned integer arithmetic is defined to be modulo a power of two. Such overflow is not detected at run-time and the program continues with wrong results.

Additionally, the standard library module core.checkedint provides functions that perform arithmetic on int, long, uint and ulong values and set a boolean flag to signal when an overflow has occurred.

void main() @safe {
    import std.stdio;

    writeln("Signed 32-bit:");
    writeln(-(-2_147_483_647 - 1));
    writeln(2_000_000_000 + 2_000_000_000);
    writeln(-2147483647 - 2147483647);
    writeln(46_341 * 46_341);
    writeln((-2_147_483_647 - 1) / -1);

    writeln("\nSigned 64-bit:");
    writeln(-(-9_223_372_036_854_775_807 - 1));
    writeln(5_000_000_000_000_000_000 + 5_000_000_000_000_000_000);
    writeln(-9_223_372_036_854_775_807 - 9_223_372_036_854_775_807);
    writeln(3_037_000_500 * 3_037_000_500);
    writeln((-9_223_372_036_854_775_807 - 1) / -1);

    writeln("\nUnsigned 32-bit:");
    writeln(-4_294_967_295U);
    writeln(3_000_000_000U + 3_000_000_000U);
    writeln(2_147_483_647U - 4_294_967_295U);
    writeln(65_537U * 65_537U);

    writeln("\nUnsigned 64-bit:");
    writeln(-18_446_744_073_709_551_615UL);
    writeln(10_000_000_000_000_000_000UL + 10_000_000_000_000_000_000UL);
    writeln(9_223_372_036_854_775_807UL - 18_446_744_073_709_551_615UL);
    writeln(4_294_967_296UL * 4_294_967_296UL);

    import core.checkedint;
    bool overflow = false;
    // Checked signed multiplication. 
    // Eventually such functions will be recognized by D compilers
    // and they will be implemented with efficient intrinsics.
    immutable r = muls(46_341, 46_341, overflow);
    writeln("\n", r, " ", overflow);
}
Output:
Signed 32-bit:
-2147483648
-294967296
2
-2147479015
-2147483648

Signed 64-bit:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808

Unsigned 32-bit:
1
1705032704
2147483648
131073

Unsigned 64-bit:
1
1553255926290448384
9223372036854775808
0

-2147479015 true

Delphi

Works with: Delphi version 6.0

As demonstrated by the program below, the Delphi compiler catches these overflow conditions at compile time.


var IS32: integer;	{Signed 32-bit integer}
var IS64: Int64;	{Signed 64-bit integer}
var IU32: cardinal;	{Unsigned 32-bit integer}

{============ Signed 32 bit tests ===================================}

procedure TestSigned32_1;
begin
IS32:=-(-2147483647-1);
end;

// Compiler: "Overflow in conversion or arithmetic operation"

procedure TestSigned32_2;
begin
IS32:=2000000000 + 2000000000;
end;

// Compiler: "Overflow in conversion or arithmetic operation"



procedure TestSigned32_3;
begin
IS32:=-2147483647 - 2147483647;
end;

// Compiler: "Overflow in conversion or arithmetic operation"

procedure TestSigned32_4;
begin
IS32:=46341 * 46341;
end;

// Compiler: "Overflow in conversion or arithmetic operation"



procedure TestSigned32_5;
begin
IS32:=(-2147483647-1) div -1;
end;

// Compiler: "Overflow in conversion or arithmetic operation"

{============ Signed 64 bit tests ===================================}

procedure TestSigned64_1;
begin
IS64:=-(-9223372036854775807-1);
end;

// Compiler: "Overflow in conversion or arithmetic operation"


procedure TestSigned64_2;
begin
IS64:=5000000000000000000+5000000000000000000;
end;

// Compiler: "Overflow in conversion or arithmetic operation"


procedure TestSigned64_3;
begin
IS64:=-9223372036854775807 - 9223372036854775807;
end;

// Compiler: "Overflow in conversion or arithmetic operation"


procedure TestSigned64_4;
begin
IS64:=3037000500 * 3037000500;
end;

// Compiler: "Overflow in conversion or arithmetic operation"


procedure TestSigned64_5;
begin
IS64:=(-9223372036854775807-1) div -1;
end;

// Compiler: "Overflow in conversion or arithmetic operation"


{============ UnSigned 32 bit tests ===================================}

procedure TestUnSigned32_1;
begin
IU32:=-4294967295;
end;

// Compiler: "Overflow in conversion or arithmetic operation"


procedure TestUnSigned32_2;
begin
IU32:=3000000000 + 3000000000;
end;

// Compiler: "Overflow in conversion or arithmetic operation"


procedure TestUnSigned32_3;
begin
IU32:=2147483647 - 4294967295;
end;

// Compiler: "Overflow in conversion or arithmetic operation"


procedure TestUnSigned32_4;
begin
IU32:=65537 * 65537;
end;

// Compiler: "Overflow in conversion or arithmetic operation"


//Delphi-6 does not have 64-bit unsigned integers.
//Later versions have 64-bit unsigned integers.
Output:

Factor

fixnum integers are automatically promoted to bignum integers when they no longer fit in a machine cell, and no overflow occurs. Note, however, that bignums are not demoted back to fixnums automatically.

Fortran

The Fortran standard does not specify the behaviour of a program during integer overflow, so it depends on the compiler implementation. The Intel Fortran compiler does not have integer overflow detection. GNU gfortran runs some limited checks during compilation. The standard's model of integers is symmetric around zero, and using the intrinsic function huge(my_integer) one can discover the maximum value for the kind of the integer my_integer, but cannot go beyond it.

FreeBASIC

For the 64-bit integer type a FreeBASIC program does not recognize a signed integer overflow and the program continues with wrong results.

' FB 1.05.0 Win64

' The suffixes L, LL, UL and ULL are added to the numbers to make it
' clear to the compiler that they are to be treated as:
' signed 4 byte, signed 8 byte, unsigned 4 byte and unsigned 8 byte
' integers, respectively. 

' Integer types in FB are freely convertible to each other.
' In general if the result of a computation would otherwise overflow
' it is converted to a higher integer type.

' Consequently, although the calculations are the same as the C example,
' the results for the 32-bit integers are arithmetically correct (and different
' therefore from the C results) because they are converted to 8 byte integers.

' However, as 8 byte integers are the largest integral type, no higher conversions are
' possible and so the results 'wrap round'. The 64-bit results are therefore the
' same as the C examples except the one where the compiler warns that there is an overflow
' which, frankly, I don't understand.

Print "Signed 32-bit:"
Print -(-2147483647L-1L)
Print 2000000000L + 2000000000L
Print -2147483647L - 2147483647L
Print 46341L * 46341L
Print (-2147483647L-1L) \ -1L
Print
Print "Signed 64-bit:"
Print -(-9223372036854775807LL-1LL)
Print 5000000000000000000LL + 5000000000000000000LL
Print -9223372036854775807LL - 9223372036854775807LL
Print 3037000500LL * 3037000500LL
Print (-9223372036854775807LL - 1LL) \ -1LL  ' compiler warning : Overflow in constant conversion
Print
Print "Unsigned 32-bit:"
Print -4294967295UL
Print 3000000000UL + 3000000000UL
Print 2147483647UL - 4294967295UL
Print 65537UL * 65537UL
Print
Print "Unsigned 64-bit:"
Print -18446744073709551615ULL  ' compiler warning : Implicit conversion 
Print 10000000000000000000ULL + 10000000000000000000ULL
Print 9223372036854775807ULL - 18446744073709551615ULL
Print 4294967296ULL * 4294967296ULL
Print
Print "Press any key to quit"
Sleep
Output:
Signed 32-bit:
 2147483648
 4000000000
-4294967294
 2147488281
 2147483648

Signed 64-bit:
-9223372036854775808
-8446744073709551616
 2
-9223372036709301616
 0

Unsigned 32-bit:
-4294967295
 6000000000
-2147483648
 4295098369

Unsigned 64-bit:
 1
1553255926290448384
9223372036854775808
0

Frink

Frink's numerical type is designed to "do the right thing" with all mathematics. It will not overflow, and integers can be of any size.

Frink's numerical type automatically promotes and demotes between arbitrary-size integers, arbitrary-size rational numbers, arbitrary-precision floating-point numbers, complex numbers, and arbitrary-sized intervals of real values.

Go

Run this in the Go playground. A Go program does not recognize an integer overflow and the program continues with wrong results.

package main

import "fmt"

func main() {
	// Go's builtin integer types are:
	//    int,  int8,  int16,  int32,  int64
	//   uint, uint8, uint16, uint32, uint64
	//   byte, rune, uintptr
	//
	// int is either 32 or 64 bit, depending on the system
	// uintptr is large enough to hold the bit pattern of any pointer
	// byte is an alias for uint8 (8 bits)
	// rune is an alias for int32 (32 bits)
	//
	// Overflow and underflow are silent. The math package defines a number
	// of constants that can be helpful, e.g.:
	//    math.MaxInt64  = 1<<63 - 1
	//    math.MinInt64  = -1 << 63
	//    math.MaxUint64 = 1<<64 - 1
	//
	// The math/big package implements multi-precision
	// arithmetic (big numbers).
	//
	// In all cases assignment from one distinct type to another requires
	// an explicit cast, even if the types have the same size and
	// representation (e.g. int and either int32 or int64).
	// rune and byte are aliases for int32 and uint8 respectively, so no
	// cast is needed between those pairs.
	// Casts silently truncate if required.
	//
	// Invalid:
	//    var i int  = int32(0)
	//    var b byte = int8(0)
	//
	// Valid:
	var i64 int64 = 42
	var i32 int32 = int32(i64)
	var i16 int16 = int16(i64)
	var i8 int8 = int8(i16)
	var i int = int(i8)
	var r rune = rune(i)
	var b byte = byte(r)
	var u64 uint64 = uint64(b)
	var u32 uint32

	//const c int = -(-2147483647 - 1) // Compiler error on 32 bit systems, ok on 64 bit
	const c = -(-2147483647 - 1) // Allowed even on 32 bit systems, c is untyped
	i64 = c
	//i32 = c                          // Compiler error
	//i32 = -(-2147483647 - 1)         // Compiler error
	i32 = -2147483647
	i32 = -(-i32 - 1)
	fmt.Println("32 bit signed integers")
	fmt.Printf("  -(-2147483647 - 1) = %d, got %d\n", i64, i32)

	i64 = 2000000000 + 2000000000
	//i32 = 2000000000 + 2000000000    // Compiler error
	i32 = 2000000000
	i32 = i32 + i32
	fmt.Printf("  2000000000 + 2000000000 = %d, got %d\n", i64, i32)
	i64 = -2147483647 - 2147483647
	i32 = 2147483647
	i32 = -i32 - i32
	fmt.Printf("  -2147483647 - 2147483647 = %d, got %d\n", i64, i32)
	i64 = 46341 * 46341
	i32 = 46341
	i32 = i32 * i32
	fmt.Printf("  46341 * 46341 = %d, got %d\n", i64, i32)
	i64 = (-2147483647 - 1) / -1
	i32 = -2147483647
	i32 = (i32 - 1) / -1
	fmt.Printf("  (-2147483647-1) / -1 = %d, got %d\n", i64, i32)

	fmt.Println("\n64 bit signed integers")
	i64 = -9223372036854775807
	fmt.Printf("  -(%d - 1): %d\n", i64, -(i64 - 1))
	i64 = 5000000000000000000
	fmt.Printf("  %d + %d: %d\n", i64, i64, i64+i64)
	i64 = 9223372036854775807
	fmt.Printf("  -%d - %d: %d\n", i64, i64, -i64-i64)
	i64 = 3037000500
	fmt.Printf("  %d * %d: %d\n", i64, i64, i64*i64)
	i64 = -9223372036854775807
	fmt.Printf("  (%d - 1) / -1: %d\n", i64, (i64-1)/-1)

	fmt.Println("\n32 bit unsigned integers:")
	//u32 = -4294967295 // Compiler error
	u32 = 4294967295
	fmt.Printf("  -%d: %d\n", u32, -u32)
	u32 = 3000000000
	fmt.Printf("  %d + %d: %d\n", u32, u32, u32+u32)
	a := uint32(2147483647)
	u32 = 4294967295
	fmt.Printf("  %d - %d: %d\n", a, u32, a-u32)
	u32 = 65537
	fmt.Printf("  %d * %d: %d\n", u32, u32, u32*u32)

	fmt.Println("\n64 bit unsigned integers:")
	u64 = 18446744073709551615
	fmt.Printf("  -%d: %d\n", u64, -u64)
	u64 = 10000000000000000000
	fmt.Printf("  %d + %d: %d\n", u64, u64, u64+u64)
	aa := uint64(9223372036854775807)
	u64 = 18446744073709551615
	fmt.Printf("  %d - %d: %d\n", aa, u64, aa-u64)
	u64 = 4294967296
	fmt.Printf("  %d * %d: %d\n", u64, u64, u64*u64)
}
Output:
32 bit signed integers
  -(-2147483647 - 1) = 2147483648, got -2147483646
  2000000000 + 2000000000 = 4000000000, got -294967296
  -2147483647 - 2147483647 = -4294967294, got 2
  46341 * 46341 = 2147488281, got -2147479015
  (-2147483647-1) / -1 = 2147483648, got -2147483648

64 bit signed integers
  -(-9223372036854775807 - 1): -9223372036854775808
  5000000000000000000 + 5000000000000000000: -8446744073709551616
  -9223372036854775807 - 9223372036854775807: 2
  3037000500 * 3037000500: -9223372036709301616
  (-9223372036854775807 - 1) / -1: -9223372036854775808

32 bit unsigned integers:
  -4294967295: 1
  3000000000 + 3000000000: 1705032704
  2147483647 - 4294967295: 2147483648
  65537 * 65537: 131073

64 bit unsigned integers:
  -18446744073709551615: 1
  10000000000000000000 + 10000000000000000000: 1553255926290448384
  9223372036854775807 - 18446744073709551615: 9223372036854775808
  4294967296 * 4294967296: 0

Groovy

Translation of: Java
+ assertions + BigInteger + Groovy differences


Type int is a signed 32-bit integer. Type long is a signed 64-bit integer. Type BigInteger (also in Java) is a signed unbounded integer.

Other integral types (also in Java): byte (8-bit signed), short (16-bit signed), char (16-bit unsigned)

Groovy does not recognize integer overflow in any bounded integral type and the program continues with wrong results. All bounded integral types use 2's-complement arithmetic.

println "\nSigned 32-bit (failed):"
assert -(-2147483647-1) != 2147483648g
println(-(-2147483647-1))
assert 2000000000 + 2000000000 != 4000000000g
println(2000000000 + 2000000000)
assert -2147483647 - 2147483647 != -4294967294g
println(-2147483647 - 2147483647)
assert 46341 * 46341 != 2147488281g
println(46341 * 46341)
//Groovy converts divisor and dividend of "/" to floating point. Use "intdiv" to remain integral
//assert (-2147483647-1) / -1 != 2147483648g
assert (-2147483647-1).intdiv(-1) != 2147483648g
println((-2147483647-1).intdiv(-1))

println "\nSigned 64-bit (passed):"
assert -(-2147483647L-1) == 2147483648g
println(-(-2147483647L-1))
assert 2000000000L + 2000000000L == 4000000000g
println(2000000000L + 2000000000L)
assert -2147483647L - 2147483647L == -4294967294g
println(-2147483647L - 2147483647L)
assert 46341L * 46341L == 2147488281g
println(46341L * 46341L)
assert (-2147483647L-1).intdiv(-1) == 2147483648g
println((-2147483647L-1).intdiv(-1))

println "\nSigned 64-bit (failed):"
assert -(-9223372036854775807L-1) != 9223372036854775808g
println(-(-9223372036854775807L-1))
assert 5000000000000000000L+5000000000000000000L != 10000000000000000000g
println(5000000000000000000L+5000000000000000000L)
assert -9223372036854775807L - 9223372036854775807L != -18446744073709551614g
println(-9223372036854775807L - 9223372036854775807L)
assert 3037000500L * 3037000500L != 9223372037000250000g
println(3037000500L * 3037000500L)
//Groovy converts divisor and dividend of "/" to floating point. Use "intdiv" to remain integral
//assert (-9223372036854775807L-1) / -1 != 9223372036854775808g
assert (-9223372036854775807L-1).intdiv(-1) != 9223372036854775808g
println((-9223372036854775807L-1).intdiv(-1))

println "\nSigned unbounded (passed):"
assert -(-2147483647g-1g) == 2147483648g
println(-(-2147483647g-1g))
assert 2000000000g + 2000000000g == 4000000000g
println(2000000000g + 2000000000g)
assert -2147483647g - 2147483647g == -4294967294g
println(-2147483647g - 2147483647g)
assert 46341g * 46341g == 2147488281g
println(46341g * 46341g)
assert (-2147483647g-1g).intdiv(-1) == 2147483648g
println((-2147483647g-1g).intdiv(-1))
assert -(-9223372036854775807g-1) == 9223372036854775808g
println(-(-9223372036854775807g-1))
assert 5000000000000000000g+5000000000000000000g == 10000000000000000000g
println(5000000000000000000g+5000000000000000000g)
assert -9223372036854775807g - 9223372036854775807g == -18446744073709551614g
println(-9223372036854775807g - 9223372036854775807g)
assert 3037000500g * 3037000500g == 9223372037000250000g
println(3037000500g * 3037000500g)
assert (-9223372036854775807g-1g).intdiv(-1) == 9223372036854775808g
println((-9223372036854775807g-1g).intdiv(-1))

Output:

Signed 32-bit (failed):
-2147483648
-294967296
2
-2147479015
-2147483648

Signed 64-bit (passed):
2147483648
4000000000
-4294967294
2147488281
2147483648

Signed 64-bit (failed):
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808

Signed unbounded (passed):
2147483648
4000000000
-4294967294
2147488281
2147483648
9223372036854775808
10000000000000000000
-18446744073709551614
9223372037000250000
9223372036854775808

Haskell

Haskell supports both fixed sized signed integers (Int) and unbounded integers (Integer). Various sizes of signed and unsigned integers are available in Data.Int and Data.Word, respectively. The Haskell 2010 Language Report explains the following: "The results of exceptional conditions (such as overflow or underflow) on the fixed-precision numeric types are undefined; an implementation may choose error (⊥, semantically), a truncated value, or a special value such as infinity, indefinite, etc" (http://www.haskell.org/definition/haskell2010.pdf Section 6.4 Paragraph 4).

import Data.Int
import Data.Word
import Control.Exception

f x = do
  catch (print x) (\e -> print (e :: ArithException))

main = do
  f ((- (-2147483647 - 1)) :: Int32)
  f ((2000000000 + 2000000000) :: Int32)
  f (((-2147483647) - 2147483647) :: Int32)
  f ((46341 * 46341) :: Int32)
  f ((((-2147483647) - 1) `div` (-1)) :: Int32)
  f ((- ((-9223372036854775807) - 1)) :: Int64)
  f ((5000000000000000000 + 5000000000000000000) :: Int64)
  f (((-9223372036854775807) - 9223372036854775807) :: Int64)
  f ((3037000500 * 3037000500) :: Int64)
  f ((((-9223372036854775807) - 1) `div` (-1)) :: Int64)
  f ((-4294967295) :: Word32)
  f ((3000000000 + 3000000000) :: Word32)
  f ((2147483647 - 4294967295) :: Word32)
  f ((65537 * 65537) :: Word32)
  f ((-18446744073709551615) :: Word64)
  f ((10000000000000000000 + 10000000000000000000) :: Word64)
  f ((9223372036854775807 - 18446744073709551615) :: Word64)
  f ((4294967296 * 4294967296) :: Word64)
Output:
-2147483648
-294967296
2
-2147479015
arithmetic overflow
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
arithmetic overflow
1
1705032704
2147483648
131073
1
1553255926290448384
9223372036854775808
0

J

J has both 32 bit implementations and 64 bit implementations. Integers are signed and overflow is handled by yielding a floating point result (ieee 754's 64 bit format in both implementations).

Also, negative numbers do not use - for the negative sign in J (a preceding - means to negate the argument on the right; in some cases this gives the same kind of result, but in other cases it is different). Instead, use _ to denote negative numbers. Likewise, J does not use / for division; it uses % instead. With those changes, here's what the results look like in a 32 bit version of J:

  -(_2147483647-1)
2.14748e9
   2000000000 + 2000000000
4e9
   _2147483647 - 2147483647
_4.29497e9
   46341 * 46341
2.14749e9
   (_2147483647-1) % -1
2.14748e9
   
   -(_9223372036854775807-1)
9.22337e18
   5000000000000000000+5000000000000000000
1e19
   _9223372036854775807 - 9223372036854775807
_1.84467e19
   3037000500 * 3037000500
9.22337e18
   (_9223372036854775807-1) % -1
9.22337e18
   
   _4294967295
_4.29497e9
   3000000000 + 3000000000
6e9
   2147483647 - 4294967295
_2.14748e9
   65537 * 65537
4.2951e9
   
   _18446744073709551615
_1.84467e19
   10000000000000000000 + 10000000000000000000
2e19
   9223372036854775807 - 18446744073709551615
_9.22337e18
   4294967296 * 4294967296
1.84467e19

And, here's what it looks like in a 64 bit version of J:

   -(_2147483647-1)
2147483648
   2000000000 + 2000000000
4000000000
   _2147483647 - 2147483647
_4294967294
   46341 * 46341
2147488281
   (_2147483647-1) % -1
2.14748e9
   
   -(_9223372036854775807-1)
9.22337e18
   5000000000000000000+5000000000000000000
1e19
   _9223372036854775807 - 9223372036854775807
_1.84467e19
   3037000500 * 3037000500
9.22337e18
   (_9223372036854775807-1) % -1
9.22337e18
   
   _4294967295
_4294967295
   3000000000 + 3000000000
6000000000
   2147483647 - 4294967295
_2147483648
   65537 * 65537
4295098369
   
   _18446744073709551615
_1.84467e19
   10000000000000000000 + 10000000000000000000
2e19
   9223372036854775807 - 18446744073709551615
_9.22337e18
   4294967296 * 4294967296
1.84467e19

That said, note that the above was with default 6 digits of "printing precision". Here's how things look with that limit relaxed:

32 bit J:

   -(_2147483647-1)
2147483648
   2000000000 + 2000000000
4000000000
   _2147483647 - 2147483647
_4294967294
   46341 * 46341
2147488281
   (_2147483647-1) % -1
2147483648
   
   -(_9223372036854775807-1)
9223372036854775800
   5000000000000000000+5000000000000000000
10000000000000000000
   _9223372036854775807 - 9223372036854775807
_18446744073709552000
   3037000500 * 3037000500
9223372037000249300
   (_9223372036854775807-1) % -1
9223372036854775800
   
   _4294967295
_4294967295
   3000000000 + 3000000000
6000000000
   2147483647 - 4294967295
_2147483648
   65537 * 65537
4295098369
   
   _18446744073709551615
_18446744073709552000
   10000000000000000000 + 10000000000000000000
20000000000000000000
   9223372036854775807 - 18446744073709551615
_9223372036854775800
   4294967296 * 4294967296
18446744073709552000

64 bit J:

   -(_2147483647-1)
2147483648
   2000000000 + 2000000000
4000000000
   _2147483647 - 2147483647
_4294967294
   46341 * 46341
2147488281
   (_2147483647-1) % -1
2147483648
   
   -(_9223372036854775807-1)
9223372036854775800
   5000000000000000000+5000000000000000000
10000000000000000000
   _9223372036854775807 - 9223372036854775807
_18446744073709552000
   3037000500 * 3037000500
9223372037000249300
   (_9223372036854775807-1) % -1
9223372036854775800
   
   _4294967295
_4294967295
   3000000000 + 3000000000
6000000000
   2147483647 - 4294967295
_2147483648
   65537 * 65537
4295098369
   
   _18446744073709551615
_18446744073709552000
   10000000000000000000 + 10000000000000000000
20000000000000000000
   9223372036854775807 - 18446744073709551615
_9223372036854775800
   4294967296 * 4294967296
18446744073709552000

Finally, note that both versions of J support arbitrary precision integers. These are not the default, for performance reasons, but are available for cases where their performance penalty is acceptable.

Java

The type int is a signed 32-bit integer and the type long is a signed 64-bit integer. A Java program does not recognize an integer overflow and the program continues with wrong results.

public class IntegerOverflow {
    public static void main(String[] args) {
        System.out.println("Signed 32-bit:");
        System.out.println(-(-2147483647 - 1));
        System.out.println(2000000000 + 2000000000);
        System.out.println(-2147483647 - 2147483647);
        System.out.println(46341 * 46341);
        System.out.println((-2147483647 - 1) / -1);
        System.out.println("Signed 64-bit:");
        System.out.println(-(-9223372036854775807L - 1));
        System.out.println(5000000000000000000L + 5000000000000000000L);
        System.out.println(-9223372036854775807L - 9223372036854775807L);
        System.out.println(3037000500L * 3037000500L);
        System.out.println((-9223372036854775807L - 1) / -1);
    }
}
Output:
Signed 32-bit:
-2147483648
-294967296
2
-2147479015
-2147483648
Signed 64-bit:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808

Using Java 8

public final class IntegerOverflow {

	public static void main(String[] args) {
		// The following examples show that Java allows integer overflow without warning
		// and calculates an incorrect result.
		
		// From version 8, Java introduced methods which throw an ArithmeticException when overflow occurs,
        // which prevents the calculation of an incorrect result. It also allows the programmer to replace an "int"
        // with a "long" and to replace a "long" with a BigInteger.
		
		// Uncomment the lines below to see the use of the new methods:
		// addExact(), subtractExact(), multiplyExact() and negateExact().
		System.out.println("Signed 32-bit:");
        System.out.println(-(-2_147_483_647 - 1));
//      System.out.println(Math.negateExact(-2_147_483_647 - 1));
        
        System.out.println(2_000_000_000 + 2_000_000_000);
//      System.out.println(Math.addExact(2_000_000_000, 2_000_000_000));
        
        System.out.println(-2_147_483_647 - 2_147_483_647);
//      System.out.println(Math.subtractExact(-2_147_483_647, 2_147_483_647));
        
        System.out.println(46_341 * 46_341);
//      System.out.println(Math.multiplyExact(46_341, 46_341));
        
        System.out.println((-2_147_483_647 - 1) / -1);
//      System.out.println(Math.negateExact(Math.subtractExact(-2_147_483_647, 1) / 1));
        
        System.out.println();
        System.out.println("Signed 64-bit:");
        System.out.println(-(-9_223_372_036_854_775_807L - 1));
//      System.out.println(Math.negateExact(-9_223_372_036_854_775_807L - 1));
        
        System.out.println(5_000_000_000_000_000_000L + 5_000_000_000_000_000_000L);
//      System.out.println(Math.addExact(5_000_000_000_000_000_000L, 5_000_000_000_000_000_000L));
        
        System.out.println(-9_223_372_036_854_775_807L - 9_223_372_036_854_775_807L);
//      System.out.println(Math.subtractExact(-9_223_372_036_854_775_807L, 9_223_372_036_854_775_807L));
        
        System.out.println(3_037_000_500L * 3_037_000_500L);
//      System.out.println(Math.multiplyExact(3_037_000_500L, 3_037_000_500L));
        
        System.out.println((-9_223_372_036_854_775_807L - 1) / -1);
//      System.out.println(Math.negateExact(Math.subtractExact(-9_223_372_036_854_775_807L, 1) / 1));        
	}

}
Output:
Signed 32-bit:
-2147483648
-294967296
2
-2147479015
-2147483648

Signed 64-bit:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808
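
Shown below is a minimal sketch (not part of the original listing) of how the exception thrown by the exact-arithmetic methods mentioned above can be caught; it assumes Java 8 or later, and the class name and printed messages are illustrative only.

public class IntegerOverflowExactDemo {
    public static void main(String[] args) {
        // Math.addExact throws ArithmeticException instead of silently wrapping around.
        try {
            System.out.println(Math.addExact(2_000_000_000, 2_000_000_000));
        } catch (ArithmeticException e) {
            System.out.println("32-bit addition overflowed: " + e.getMessage());
        }
        // The 64-bit overloads behave the same way.
        try {
            System.out.println(Math.multiplyExact(3_037_000_500L, 3_037_000_500L));
        } catch (ArithmeticException e) {
            System.out.println("64-bit multiplication overflowed: " + e.getMessage());
        }
    }
}

The same pattern applies to Math.subtractExact and Math.negateExact.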

jq

The C-based implementation of jq uses IEEE 754 64-bit numbers for numeric computations without raising errors except for division by 0.

The Go-based implementation of jq, gojq, uses unbounded precision for integer computations involving infix operators (+, -, %, /), but in the case of division, the result is only guaranteed to be precise if the divisor is a factor of the dividend. Using gojq, the error raised by an attempt to divide a number by 0 is catchable.

In the following section, a jq program that implements the task is presented. The outputs produced by jq and by gojq are then given.

The task

def compare:
  if type == "string" then "\n\(.)\n"
  else map(tostring)
  | .[1] as $s
  | .[0] 
  | if $s == . then . + ": agrees"
    else $s + ": expression evaluates to " + .
    end
  end;
    
[ -(-2147483647-1),"2147483648"],
[2000000000 + 2000000000, "4000000000"],
[-2147483647 - 2147483647,	"-4294967294"],
[46341 * 46341,	"2147488281"],
[(-2147483647-1) / -1,	"2147483648"],

"For 64-bit signed integers:",

[-(-9223372036854775807-1),	"9223372036854775808"],
[5000000000000000000+5000000000000000000,	"10000000000000000000"],
[-9223372036854775807 - 9223372036854775807,	"-18446744073709551614"],
[3037000500 * 3037000500,	"9223372037000250000"],
[(-9223372036854775807-1) / -1, "9223372036854775808"],

"For 32-bit unsigned integers:",

[-4294967295, "-4294967295"],
[3000000000 + 3000000000, "6000000000"],
[2147483647 - 4294967295, "-2147483648"],
[65537 * 65537, "4295098369"],

"For 64-bit unsigned integers:",

[-18446744073709551615, "-18446744073709551615"],
[10000000000000000000 + 10000000000000000000, "20000000000000000000"],
[9223372036854775807 - 18446744073709551615, "-9223372036854775808"],
[4294967296 * 4294967296, "18446744073709551616"]

| compare

jq 1.6

2147483648: agrees
4000000000: agrees
-4294967294: agrees
2147488281: agrees
2147483648: agrees

For 64-bit signed integers:

9223372036854775808: expression evaluates to 9223372036854776000
10000000000000000000: expression evaluates to 1e+19
-18446744073709551614: expression evaluates to -18446744073709552000
9223372037000250000: agrees
9223372036854775808: expression evaluates to 9223372036854776000

For 32-bit unsigned integers:

-4294967295: agrees
6000000000: agrees
-2147483648: agrees
4295098369: agrees

For 64-bit unsigned integers:

-18446744073709551615: expression evaluates to -18446744073709552000
20000000000000000000: expression evaluates to 2e+19
-9223372036854775808: expression evaluates to -9223372036854776000
18446744073709551616: expression evaluates to 18446744073709552000

gojq

gojq -nr -f rc-integer-overflow.jq
2147483648: agrees
4000000000: agrees
-4294967294: agrees
2147488281: agrees
2147483648: agrees

For 64-bit signed integers:

9223372036854775808: agrees
10000000000000000000: agrees
-18446744073709551614: agrees
9223372037000250000: agrees
9223372036854775808: agrees

For 32-bit unsigned integers:

-4294967295: agrees
6000000000: agrees
-2147483648: agrees
4295098369: agrees

For 64-bit unsigned integers:

-18446744073709551615: agrees
20000000000000000000: agrees
-9223372036854775808: agrees
18446744073709551616: agrees


Julia

Plain Integer Types and Their Limits

using Printf
S = subtypes(Signed)
U = subtypes(Unsigned)

println("Integer limits:")
for (s, u) in zip(S, U)
    @printf("%8s: [%s, %s]\n", s, typemin(s), typemax(s))
    @printf("%8s: [%s, %s]\n", u, typemin(u), typemax(u))
end
Output:
Integer limits:
  Int128: [-170141183460469231731687303715884105728, 170141183460469231731687303715884105727]
 UInt128: [0, 340282366920938463463374607431768211455]
   Int16: [-32768, 32767]
  UInt16: [0, 65535]
   Int32: [-2147483648, 2147483647]
  UInt32: [0, 4294967295]
   Int64: [-9223372036854775808, 9223372036854775807]
  UInt64: [0, 18446744073709551615]
    Int8: [-128, 127]
   UInt8: [0, 255]

Add 1 to Signed typemax

Julia does not throw an explicit error on integer overflow.

println("Add one to typemax:")
for t in S
    over = typemax(t) + one(t)
    @printf("%8s%-25s (%s)\n", t, over, typeof(over))
end
Output:
Add one to typemax:
  Int128 →  -170141183460469231731687303715884105728 (Int128)
   Int16 →  -32768                    (Int16)
   Int32 →  -2147483648               (Int32)
   Int64 →  -9223372036854775808      (Int64)
    Int8 →  -128                      (Int8)

Kotlin

A Kotlin program does not recognize a signed integer overflow and the program continues with wrong results.

// The Kotlin compiler can detect expressions of signed constant integers that will overflow.
// It cannot detect unsigned integer overflow, however.
@Suppress("INTEGER_OVERFLOW")
fun main() {
    println("*** Signed 32 bit integers ***\n")
    println(-(-2147483647 - 1))
    println(2000000000 + 2000000000)
    println(-2147483647 - 2147483647)
    println(46341 * 46341)
    println((-2147483647 - 1) / -1)
    println("\n*** Signed 64 bit integers ***\n")
    println(-(-9223372036854775807 - 1))
    println(5000000000000000000 + 5000000000000000000)
    println(-9223372036854775807 - 9223372036854775807)
    println(3037000500 * 3037000500)
    println((-9223372036854775807 - 1) / -1)
    println("\n*** Unsigned 32 bit integers ***\n")
//    println(-4294967295U) // this is a compiler error since unsigned integers have no negation operator
//    println(0U - 4294967295U) // this works
    println((-4294967295).toUInt()) // converting from a signed type (the literal here is a Long) also produces the overflow; this is intended behavior of toUInt()
    println(3000000000U + 3000000000U)
    println(2147483647U - 4294967295U)
    println(65537U * 65537U)
    println("\n*** Unsigned 64 bit integers ***\n")
    println(0U - 18446744073709551615U) // we cannot convert from a signed type here (since none big enough exists) and have to use subtraction
    println(10000000000000000000U + 10000000000000000000U)
    println(9223372036854775807U - 18446744073709551615U)
    println(4294967296U * 4294967296U)
}
Output:
*** Signed 32 bit integers ***

-2147483648
-294967296
2
-2147479015
-2147483648

*** Signed 64 bit integers ***

-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808

*** Unsigned 32 bit integers ***

1
1705032704
2147483648
131073

*** Unsigned 64 bit integers ***

1
1553255926290448384
9223372036854775808
0

Ksh

#!/bin/ksh

# Integer overflow

#	# Variables:
#
typeset -si SHORT_INT
typeset -i  INTEGER
typeset -li LONG_INT

 ######
# main #
 ######

(( SHORT_INT = 2**15 -1 )) ; print "SHORT_INT (2^15 -1) = $SHORT_INT"
(( SHORT_INT = 2**15 )) ; print "SHORT_INT (2^15)   : $SHORT_INT"

(( INTEGER = 2**31 -1 )) ; print "  INTEGER (2^31 -1) = $INTEGER"
(( INTEGER = 2**31 )) ; print "  INTEGER (2^31)   : $INTEGER"

(( LONG_INT = 2**63 -1 )) ; print " LONG_INT (2^63 -1) = $LONG_INT"
(( LONG_INT = 2**63 )) ; print " LONG_INT (2^63)   : $LONG_INT"
Output:

SHORT_INT (2^15 -1) = 32767
SHORT_INT (2^15)   : -32768

 INTEGER (2^31 -1) = 2147483647
 INTEGER (2^31)   : -2147483648
LONG_INT (2^63 -1) = 9223372036854775807
LONG_INT (2^63)   : -9223372036854775808

Lingo

Lingo uses 32-bit signed integers. A Lingo program does not recognize a signed integer overflow and the program continues with wrong results.

put -(-2147483647-1)
-- -2147483648

put 2000000000 + 2000000000
-- -294967296

put -2147483647 - 2147483647
-- 2

put 46341 * 46341
-- -2147479015

put (-2147483647-1) / -1 
--> crashes Director (jeez!)

Lua

Lua 5.3+ supports integer and float subtypes of its generic number type. The standard implementation uses 64-bit signed integers; under/overflow is not recognized.

assert(math.type~=nil, "Lua 5.3+ required for this test.")
minint, maxint = math.mininteger, math.maxinteger
print("min, max int64  = " .. minint .. ", " .. maxint)
print("min-1 underflow =  " .. (minint-1) .. "  equals max? " .. tostring(minint-1==maxint))
print("max+1 overflow  = " .. (maxint+1) .. "  equals min? " .. tostring(maxint+1==minint))
Output:
min, max int64  = -9223372036854775808, 9223372036854775807
min-1 underflow =  9223372036854775807  equals max? true
max+1 overflow  = -9223372036854775808  equals min? true

M2000 Interpreter

Long A
Try ok {
      A=12121221212121
}
If not ok then Print Error$ 'Overflow Long
Def Integer B
Try ok {
      B=1212121212
}
If not ok then Print Error$  ' Overflow Integer
Def Currency C
Try ok {
      C=121212121232934392898274327927948
}
If not ok then Print Error$  ' returns Overflow Long, but it is actually a Currency overflow
Def Decimal D
Try ok {
      D=121212121232934392898274327927948
}
If not ok then Print Error$  ' returns Overflow Long, but it is actually a Decimal overflow

\\ No overflow for unsigned numbers in structs
Structure Struct {
      \\ union a1, a2| b
     {
             a1 as integer
             a2 as integer
      }
      b as long
}
\\ structures are a type for a Memory Block, or for another structure
\\ we use Clear to erase internal Memory Block
Buffer Clear DataMem as Struct*20
\\ from a1 we get only the low word
Return DataMem, 0!a2:=0xBBBB, 0!a1:=0xFFFFAAAA
Print Hex$(Eval(DataMem, 0!b))="BBBBAAAA"
Print Eval(DataMem, 0!b)=Eval(DataMem, 0!a2)*0x10000+Eval(DataMem, 0!a1)

Mathematica/Wolfram Language

Mathematica and the Wolfram Language use arbitrary-precision number types. There is a $MaxNumber, which is approximately 1.60521676193366172702774105306375828321e1355718576299609, but experimentation shows it actually allows numbers up to

$MaxNumber + 10^-15.954589770191003298111788092733772206160314 $MaxNumber

I haven't bothered testing it to any more precision. If you try to use any number above that, it returns an Overflow[], as illustrated below.
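
A minimal session sketch of that behavior (the exact message wording may vary between versions):

In[1]:= 2 $MaxNumber
        General::ovfl: Overflow occurred in computation.
Out[1]= Overflow[]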

Nim

General behavior regarding overflow

In Nim, overflow during operations on signed integers is detected and raises an exception. Starting from version 1.4, overflows are defects. For now, by default, defects can be caught, but in future versions this may, and probably will, change. Using the compile option --panics:on makes defects impossible to catch.

Catching an overflow (when --panics is off) is done this way:

try:
  var x: int32 = -2147483647
  x = -(x - 1)  # Raise overflow.
  echo x
except OverflowDefect:
  echo "Overflow detected"

It is possible to tell the compiler to not generate code to detect overflows by using pragmas “push” and “pop”:

{.push overflowChecks: off.}
try:
  var x: int32 = -2147483647
  x = -(x - 1)
  echo x   # -2147483648 — Wrong result as 2147483648 doesn't fit in an int32.
except OverflowDefect:
  echo "Overflow detected"      # Not executed.
{.pop.}

It is also possible to suppress all overflow checks by using the compile option --overflowChecks:off. Compiling with the option -d:danger suppresses these checks and several others.

For unsigned integers, Nim doesn’t check for overflow but uses modular arithmetic.

Program to check behavior when overflow is not detected

This program presents the behavior when overflow checks are suppressed. Remember that for signed integers, this is not the normal behavior and that the result is always wrong when an overflow occurs.

echo "For 32 bits signed integers with overflow check suppressed:"
{.push overflowChecks: off.}
var a: int32
a =  -(-2147483647i32 - 1'i32)
echo "  -(-2147483647-1) gives ", a           # -2147483648.
a =  2000000000i32 + 2000000000i32
echo "  2000000000 + 2000000000 gives ", a    # -294967296.
a =  -2147483647i32 - 2147483647i32
echo "  -2147483647 - 2147483647 gives ", a   # 2.
a = 46341i32 * 46341i32
echo "  46341 * 46341 gives ", a              # -2147479015.
a = (-2147483647i32 - 1i32) div -1i32
echo "  (-2147483647-1) / -1 gives ", a       # -2147483648.
{.pop.}
echo ""

echo "For 64 bits signed integers with overflow check suppressed:"
{.push overflowChecks: off.}
var b: int64
b = -(-9223372036854775807i64 - 1i64)
echo "  -(-9223372036854775807-1) gives ", b                    # -9223372036854775808.
b = 5000000000000000000i64 + 5000000000000000000i64
echo "  5000000000000000000 + 5000000000000000000 gives ", b    # -8446744073709551616.
b = -9223372036854775807i64 - 9223372036854775807i64
echo "  -9223372036854775807 - 9223372036854775807 gives ", b   # 2.
b = 3037000500i64 * 3037000500i64
echo "  3037000500 * 3037000500 gives ", b                      # -9223372036709301616.
b =  (-9223372036854775807i64 - 1i64) div -1i64
echo "  (-9223372036854775807-1) / -1 gives ", b                # -9223372036854775808.
{.pop.}
echo ""

echo "For 32 bits unsigned integers:"
var c: uint32
echo "  -4294967295 doesn’t compile."
c = 3000000000u32 + 3000000000u32
echo "  3000000000 + 3000000000 gives ", c    # 1705032704.
c = 2147483647u32 - 4294967295u32
echo "  2147483647 - 4294967295 gives ", c    # 2147483648.
c = 65537u32 * 65537u32
echo "  65537 * 65537 gives ", c              # 131073.
echo ""

echo "For 64 bits unsigned integers:"
var d: uint64
echo "  -18446744073709551615 doesn’t compile."
d = 10000000000000000000u64 + 10000000000000000000u64
echo "  10000000000000000000 + 10000000000000000000 gives ", d  # 1553255926290448384.
d = 9223372036854775807u64 - 18446744073709551615u64
echo "  9223372036854775807 - 18446744073709551615 gives ", d   # 9223372036854775808.
d = 4294967296u64 * 4294967296u64
echo "  4294967296 * 4294967296 gives ", d                      # 0.
Output:
For 32 bits signed integers with overflow check suppressed:
  -(-2147483647-1) gives -2147483648
  2000000000 + 2000000000 gives -294967296
  -2147483647 - 2147483647 gives 2
  46341 * 46341 gives -2147479015
  (-2147483647-1) / -1 gives -2147483648

For 64 bits signed integers with overflow check suppressed:
  -(-9223372036854775807-1) gives -9223372036854775808
  5000000000000000000 + 5000000000000000000 gives -8446744073709551616
  -9223372036854775807 - 9223372036854775807 gives 2
  3037000500 * 3037000500 gives -9223372036709301616
  (-9223372036854775807-1) / -1 gives -9223372036854775808

For 32 bits unsigned integers:
  -4294967295 doesn’t compile.
  3000000000 + 3000000000 gives 1705032704
  2147483647 - 4294967295 gives 2147483648
  65537 * 65537 gives 131073

For 64 bits unsigned integers:
  -18446744073709551615 doesn’t compile.
  10000000000000000000 + 10000000000000000000 gives 1553255926290448384
  9223372036854775807 - 18446744073709551615 gives 9223372036854775808
  4294967296 * 4294967296 gives 0

Oforth

Oforth handles arbitrary precision integers. There is no integer overflow or undefined behavior (unless memory is exhausted):

5000000000000000000 5000000000000000000 + println
Output:
10000000000000000000
ok

PARI/GP

Machine-sized integers can be used inside a Vecsmall:

Vecsmall([1])
Vecsmall([2^64])
Output:
%1 = Vecsmall([1])
  ***   at top-level: Vecsmall([2^64])
  ***                 ^----------------
  *** Vecsmall: overflow in t_INT-->long assignment.
  ***   Break loop: type 'break' to go back to GP prompt

Of course PARI can use the same techniques as C.

Additionally, you can, in principle, overflow a t_INT. The length, in words, of a t_INT is given in a single word, so on a 32-bit machine a t_INT cannot have more than 2^32-1 words; the corresponding limit with 64-bit words is 2^64-1, which bounds the magnitudes a t_INT can represent.

(Note that these bounds are different from an IEEE 754-style floating point because the sign bit is stored externally.) It takes more than 18 exabytes to overflow a t_INT on 64-bit (roughly Google's total storage as of 2014), but it's doable on 32-bit. Has anyone tried? I imagine you'd get a memory error or the like.

Perl

Using Perl 5.18 on 64-bit Linux with use integer: The Perl 5 program below does not recognize a signed integer overflow and the program continues with wrong results.

use strict;
use warnings;
use integer;
use feature 'say';

say("Testing 64-bit signed overflow:");
say(-(-9223372036854775807-1));
say(5000000000000000000+5000000000000000000);
say(-9223372036854775807 - 9223372036854775807);
say(3037000500 * 3037000500);
say((-9223372036854775807-1) / -1);
Output:
Testing 64-bit signed overflow:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808

Phix

Library: Phix/basics

Phix has both 32 and 64 bit implementations. Integers are signed and limited to 31 (or 63) bits, ie -1,073,741,824 to +1,073,741,823 (-#40000000 to #3FFFFFFF) on 32 bit, whereas on 64-bit it is -4,611,686,018,427,387,904 to +4,611,686,018,427,387,903 (-#4000000000000000 to #3FFFFFFFFFFFFFFF). Integer overflow is handled by automatic promotion to atom (an IEEE float, 64/80 bit for the 32/64 bit implementations respectively), which triggers a run-time type check if stored in a variable declared as integer, eg:

integer i = 1000000000 + 1000000000
Output:
C:\Program Files (x86)\Phix\test.exw:1
type check failure, i is 2000000000.0

The overflow is automatically caught and the program does not continue with the wrong results. You are always given the exact source file and line number that the error occurs on, and several editors, including Edita which is bundled with the compiler, will automatically jump to the source code line at fault. Alternatively you may declare a variable as atom and get the same performance for small integers, with seamless conversion to floats (with 53 or 64 bits of precision) as needed. Phix has no concept of unsigned numbers, except as user defined types that trigger errors when negative values are detected, but otherwise have the same ranges as above.

You can of course use a standard try/catch statement to avoid termination and resume processing after the end try, and in that way make a program more "robust". However, a mute top-level "catch-all" that reports and logs nothing will only make the program harder to debug, whereas localising try/catch statements to cover the least possible amount of code makes it much easier to "do the right thing" should an error occur.

PicoLisp

PicoLisp supports only integers of unlimited size. An overflow does not occur, except when a number grows larger than the available memory.

Pike

Pike transparently promotes int to bignum when needed, so integer overflows do not occur.

PL/M

Works with: 8080 PL/M Compiler
... under CP/M (or an emulator)

8080 PL/M does not check for overflow: incrementing the largest integer values wraps around to 0 (all numbers are unsigned in 8080 PL/M) and the program continues with wrong results.

100H: /* SHOW INTEGER OVERFLOW */

   /* CP/M SYSTEM CALL */
   BDOS: PROCEDURE( FN, ARG ); DECLARE FN BYTE, ARG ADDRESS; GOTO 5;   END;
   /* CONSOLE I/O ROUTINES */
   PRCHAR:   PROCEDURE( C );   DECLARE C BYTE;      CALL BDOS( 2, C ); END;
   PRSTRING: PROCEDURE( S );   DECLARE S ADDRESS;   CALL BDOS( 9, S ); END;
   PRNL:     PROCEDURE;        CALL PRCHAR( 0DH ); CALL PRCHAR( 0AH ); END;
   PRNUMBER: PROCEDURE( N );
      DECLARE N ADDRESS;
      DECLARE V ADDRESS, N$STR( 6 ) BYTE, W BYTE;
      N$STR( W := LAST( N$STR ) ) = '$';
      N$STR( W := W - 1 ) = '0' + ( ( V := N ) MOD 10 );
      DO WHILE( ( V := V / 10 ) > 0 );
         N$STR( W := W - 1 ) = '0' + ( V MOD 10 );
      END; 
      CALL PRSTRING( .N$STR( W ) );
   END PRNUMBER;

   /* TASK */

   /* THE ONLY TYPES SUPPORTED BY THE ORIGINAL PL/M COMPILER ARE */
   /* UNSIGNED, BYTE IS 8 BITS AND ADDRESS IS 16 BITS */
   DECLARE SV BYTE, LV ADDRESS;

   SV =   255;   /* MAXIMUM BYTE VALUE */
   LV = 65535;   /* MAXIMUM ADDRESS VALUE */

   CALL PRSTRING( .'8-BIT: $' );
   CALL PRNUMBER( SV );
   CALL PRSTRING( .' INCREMENTS TO: $' );
   SV = SV + 1;
   CALL PRNUMBER( SV );
   CALL PRNL;

   CALL PRSTRING( .'16-BIT: $' );
   CALL PRNUMBER( LV );
   CALL PRSTRING( .' INCREMENTS TO: $' );
   LV = LV + 1;
   CALL PRNUMBER( LV );
   CALL PRNL;

EOF
Output:
8-BIT: 255 INCREMENTS TO: 0
16-BIT: 65535 INCREMENTS TO: 0

PowerShell

Without explicit casting, numbers which are too big are automatically promoted to [decimal] (a 128-bit, high-precision type that is safe for financial calculations), so no exception is raised. The explicit casts in this example force the overflow exceptions, which are caught below.

https://docs.microsoft.com/en-us/dotnet/api/system.decimal?view=netframework-4.8#remarks

try {
	# All of these raise an exception, which is caught below.
	# The try block is aborted after the first exception,
	# so the subsequent lines are never executed.

	[int32] (-(-2147483647-1))
	[int32] (2000000000 + 2000000000)
	[int32] (-2147483647 - 2147483647)
	[int32] (46341 * 46341)
	[int32] ((-2147483647-1) / -1)

	[int64] (-(-9223372036854775807-1))
	[int64] (5000000000000000000+5000000000000000000)
	[int64] (-9223372036854775807 - 9223372036854775807)
	[int64] (3037000500 * 3037000500)
	[int64] ((-9223372036854775807-1) / -1)

	[uint32] (-4294967295)
	[uint32] (3000000000 + 3000000000)
	[uint32] (2147483647 - 4294967295)
	[uint32] (65537 * 65537)

	[uint64] (-18446744073709551615)
	[uint64] (10000000000000000000 + 10000000000000000000)
	[uint64] (9223372036854775807 - 18446744073709551615)
	[uint64] (4294967296 * 4294967296)
}
catch {
	$Error.Exception
}

PureBasic

CPU=x64, OS=Windows7

#MAX_BYTE =127

#MAX_ASCII=255                  ;=MAX_CHAR Ascii-Mode

CompilerIf #PB_Compiler_Unicode=1
#MAX_CHAR =65535                ;Unicode-Mode
CompilerElse
#MAX_CHAR =255
CompilerEndIf

#MAX_WORD =32767

#MAX_UNIC =65535

#MAX_LONG =2147483647

CompilerIf #PB_Compiler_Processor=#PB_Processor_x86
#MAX_INT  =2147483647           ;32-bit CPU
CompilerElseIf #PB_Compiler_Processor=#PB_Processor_x64
#MAX_INT  =9223372036854775807  ;64-bit CPU
CompilerEndIf

#MAX_QUAD =9223372036854775807

Macro say(Type,maxv,minv,sz)
  PrintN(Type+#TAB$+RSet(Str(minv),30,Chr(32))+#TAB$+RSet(Str(maxv),30,Chr(32))+#TAB$+RSet(Str(sz),6,Chr(32))+" Byte")
EndMacro

OpenConsole()
PrintN("TYPE"+#TAB$+RSet("MIN",30,Chr(32))+#TAB$+RSet("MAX",30,Chr(32))+#TAB$+RSet("SIZE",6,Chr(32)))

Define.b b1=#MAX_BYTE, b2=b1+1
say("Byte",b1,b2,SizeOf(b1))

Define.a a1=#MAX_ASCII, a2=a1+1
say("Ascii",a1,a2,SizeOf(a1))

Define.c c1=#MAX_CHAR, c2=c1+1
say("Char",c1,c2,SizeOf(c1))

Define.w w1=#MAX_WORD, w2=w1+1
say("Word",w1,w2,SizeOf(w1))

Define.u u1=#MAX_UNIC, u2=u1+1
say("Unicode",u1,u2,SizeOf(u1))

Define.l l1=#MAX_LONG, l2=l1+1
say("Long   ",l1,l2,SizeOf(l1))

Define.i i1=#MAX_INT, i2=i1+1
say("Int",i1,i2,SizeOf(i1))

Define.q q1=#MAX_QUAD, q2=q1+1
say("Quad",q1,q2,SizeOf(q1))

Input()
Output:
TYPE                               MIN                             MAX    SIZE
Byte                              -128                             127       1 Byte
Ascii                                0                             255       1 Byte
Char                                 0                           65535       2 Byte
Word                            -32768                           32767       2 Byte
Unicode                              0                           65535       2 Byte
Long                       -2147483648                      2147483647       4 Byte
Int               -9223372036854775808             9223372036854775807       8 Byte
Quad              -9223372036854775808             9223372036854775807       8 Byte

Python

Python 2.X

Python 2.X has a 32 bit signed integer type called 'int' that automatically converts to type 'long' on overflow. Type long is of arbitrary precision adjusting its precision up to computer limits, as needed.

Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> for calc in '''   -(-2147483647-1)
   2000000000 + 2000000000
   -2147483647 - 2147483647
   46341 * 46341
   (-2147483647-1) / -1'''.split('\n'):
	ans = eval(calc)
	print('Expression: %r evaluates to %s of type %s'
	      % (calc.strip(), ans, type(ans)))

	
Expression: '-(-2147483647-1)' evaluates to 2147483648 of type <type 'long'>
Expression: '2000000000 + 2000000000' evaluates to 4000000000 of type <type 'long'>
Expression: '-2147483647 - 2147483647' evaluates to -4294967294 of type <type 'long'>
Expression: '46341 * 46341' evaluates to 2147488281 of type <type 'long'>
Expression: '(-2147483647-1) / -1' evaluates to 2147483648 of type <type 'long'>
>>>

Python 3.x

Python 3.X has the one 'int' type that is of arbitrary precision. Implementations may use 32 bit integers for speed and silently shift to arbitrary precision to avoid overflow.

Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:38:22) [MSC v.1600 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> for calc in '''   -(-2147483647-1)
   2000000000 + 2000000000
   -2147483647 - 2147483647
   46341 * 46341
   (-2147483647-1) / -1'''.split('\n'):
	ans = eval(calc)
	print('Expression: %r evaluates to %s of type %s'
	      % (calc.strip(), ans, type(ans)))

	
Expression: '-(-2147483647-1)' evaluates to 2147483648 of type <class 'int'>
Expression: '2000000000 + 2000000000' evaluates to 4000000000 of type <class 'int'>
Expression: '-2147483647 - 2147483647' evaluates to -4294967294 of type <class 'int'>
Expression: '46341 * 46341' evaluates to 2147488281 of type <class 'int'>
Expression: '(-2147483647-1) / -1' evaluates to 2147483648.0 of type <class 'float'>
>>>

Note: In Python 3.X the division operator used between two ints returns a floating point result, (as this was seen as most often required and expected in the Python community). Use // to get integer division.

Quackery

Quackery only supports bignums.

Racket

The 32-bit version of Racket stores internally the fixnum n as the signed integer 2n+1, to distinguish it from the pointers to objects that are stored as even integers. This is invisible from inside Racket because in the usual operations when the result is not a fixnum, it's promoted to a bignum.

The effect of this representation is that the fixnums have only 31 bits available, and one of them is used for the sign. So all the examples have to be halved in order to fit into 31-bit signed values.

The unsafe operations expect fixnums as arguments and assume that the result is also a fixnum; they don't auto-promote the result. They are faster, but they should be used only in special cases where the values are known to be bounded. We can use them to see the behavior after an overflow. In case of overflow they have undefined behaviour, so they may give different results or change without warning in future versions. (I don't expect that they will change soon, but there is no official guarantee.)

#lang racket
(require racket/unsafe/ops)

(fixnum? -1073741824) ;==> #t
(fixnum? (- -1073741824)) ;==> #f

(- -1073741824) ;==> 1073741824
(unsafe-fx- 0 -1073741824) ;==> -1073741824

(+ 1000000000 1000000000) ;==> 2000000000
(unsafe-fx+ 1000000000 1000000000) ;==> -147483648

(- -1073741823 1073741823) ;==> -2147483646
(unsafe-fx- -1073741823 1073741823) ;==> 2

(* 46341 46341) ;==> 2147488281
(unsafe-fx* 46341 46341) ;==> 4633

(/ -1073741824 -1) ;==> 1073741824
(unsafe-fxquotient -1073741824 -1) ;==> -1073741824

The 64-bit version is similar. The fixnum are effectively 63-bits signed integers.

Raku

(formerly Perl 6)

Translation of: Perl

The Raku program below does not recognize a signed integer overflow and the program continues with wrong results.

my int64 ($a, $b, $c) = 9223372036854775807, 5000000000000000000, 3037000500;
.say for -(-$a - 1), $b + $b, -$a - $a, $c * $c, (-$a - 1)/-1;
Output:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
9223372036854775808

REXX

The REXX language normally uses a fixed (but re-definable) number of decimal digits; the default is   9.
When a value exceeds   9   decimal digits   (or whatever was specified via the   numeric digits NNN   REXX
statement), REXX will quietly and automatically change to   exponential format   and round the number, if necessary.

For newer versions of REXX, the   signal on lostDigits   statement can be used to detect such a loss of
significance (digits); a minimal sketch of this follows.
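
A minimal sketch (assuming a REXX interpreter, such as ooRexx or Regina, that supports the LOSTDIGITS condition; not part of the program below) of trapping the condition instead of continuing quietly:

/*REXX sketch: trap a loss of significant digits instead of continuing quietly. */
numeric digits 9
signal on lostDigits
say 12345678901 + 1                        /*an 11-digit operand raises LOSTDIGITS*/
exit
lostDigits:  say 'loss of significant digits detected at line'  sigl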

/*REXX program  displays values  when  integers  have an   overflow  or  underflow.     */
numeric digits 9                                 /*the REXX default is 9 decimal digits.*/
call  showResult(  999999997 + 1 )
call  showResult(  999999998 + 1 )
call  showResult(  999999999 + 1 )
call  showResult( -999999998 - 2 )
call  showResult(  40000 * 25000 )
call  showResult( -50000 * 20000 )
call  showResult(  50000 *-30000 )
exit                                             /*stick a fork in it,  we're all done. */
/*──────────────────────────────────────────────────────────────────────────────────────*/
showResult: procedure;  parse arg x,,_;  x=x/1                      /*normalize   X.    */
            if pos(., x)\==0  then  if x>0  then _=' [overflow]'    /*did it  overflow? */
                                            else _=' [underflow]'   /*did it underflow? */
            say right(x, 20) _                                      /*show the result.  */
            return x                                                /*return the value. */

output   using the default input(s):

Output note:   (as it happens, all of the results below are numerically correct)

           999999998
           999999999
       1.00000000E+9  [overflow]
      -1.00000000E+9  [underflow]
       1.00000000E+9  [overflow]
      -1.00000000E+9  [underflow]
      -1.50000000E+9  [underflow]

RPL

RPL can handle unsigned integers, whose size can be set by the user from 2 to 64 bits. This format is provided to help software engineers in low-level programming of a ‘real’ computer, not to speed up calculations: RPL programs go faster when using floating-point numbers. Let’s work with 64-bit integers, displayed in base 10:

64 STWS DEC

and let’s try to comply with the task:

# -18446744073709551615

is rejected by the command line interpreter (syntax error).

#10000000000000000000 #10000000000000000000 +
#9223372036854775807  #18446744073709551615 -
#4294967296 #4294967296 * 
Output:
3: # 9223372036854775807d
2: # 0d
1: # 0d

Ruby

Ruby has unlimited precision integers.

The Integer class is the basis for two concrete classes that hold whole numbers, Bignum and Fixnum. Bignum objects hold integers outside the range of Fixnum. Bignum objects are created automatically when integer calculations would otherwise overflow a Fixnum. When a calculation involving Bignum objects returns a result that will fit in a Fixnum, the result is automatically converted.

2.1.1 :001 > a = 2**62 -1
 => 4611686018427387903 
2.1.1 :002 > a.class
 => Fixnum 
2.1.1 :003 > (b = a + 1).class
 => Bignum 
2.1.1 :004 > (b-1).class
 => Fixnum

Since Ruby 2.4 these different classes have disappeared: all numbers in the above code are of class Integer.

Rust

To balance the need to catch bugs with the need to remain performant, Rust declares both unsigned and signed integer overflow to be invalid for the basic mathematical operators, but also guarantees that they will have predictable behaviour, resulting in either a panic! (safe program termination) or two's complement wrapping behaviour, depending on whether debug assertions are enabled.[1][2]

If an integer overflow can be determined at compile time, an error is raised, though this can be downgraded to a warning or allowed, e.g.:

error: attempt to divide with overflow
 --> src/main.rs:3:23
  |
3 |     let i32_5 : i32 = (-2_147_483_647 - 1) / -1;
  |                       ^^^^^^^^^^^^^^^^^^^^^^^^^
  |
  = note: `#[deny(const_err)]` on by default

If overflow occurs during program execution and overflow checks are enabled (the default for debug builds and an option for release builds), a panic! is raised.

$ ./integer_overflow
thread '<main>' panicked at 'attempted to negate with overflow', integer_overflow.rs:2
note: Run with `RUST_BACKTRACE=1` for a backtrace.

The following code will always panic when run in any mode

    // The following will panic!
    let i32_1 : i32 = -(-2_147_483_647 - 1);
    let i32_2 : i32 = 2_000_000_000 + 2_000_000_000;
    let i32_3 : i32 = -2_147_483_647 - 2_147_483_647;
    let i32_4 : i32 = 46341 * 46341;
    let i32_5 : i32 = (-2_147_483_647 - 1) / -1;

    // These will panic! also
    let i64_1 : i64 = -(-9_223_372_036_854_775_807 - 1);
    let i64_2 : i64 = 5_000_000_000_000_000_000 + 5_000_000_000_000_000_000;
    let i64_3 : i64 = -9_223_372_036_854_775_807 - 9_223_372_036_854_775_807;
    let i64_4 : i64 = 3_037_000_500 * 3_037_000_500;
    let i64_5 : i64 = (-9_223_372_036_854_775_807 - 1) / -1;

In order to declare overflow/underflow behaviour as intended (and, thus, valid in both debug and release modes), Rust provides two mechanisms:

First, the integer types offer methods which provide non-panicking versions of the basic mathematical operators, such as addition, subtraction, multiplication, division, negation, bit-shifting, and so on.

There are three types of functions:

  • checked_...: Return the result or None on overflow/underflow
  • saturating_...: Return the result or the maximum/minimum possible value for the specified type as appropriate.
  • wrapping_...: Return the result of the calculation, according to two's complement wrapping behaviour.


    // The following will never panic!
    println!("{:?}", 65_537u32.checked_mul(65_537));    // None
    println!("{:?}", 65_537u32.saturating_mul(65_537)); // 4294967295
    println!("{:?}", 65_537u32.wrapping_mul(65_537));   // 131073

    // These will never panic! either
    println!("{:?}", 65_537i32.checked_mul(65_537));     // None
    println!("{:?}", 65_537i32.saturating_mul(65_537));  // 2147483647
    println!("{:?}", 65_537i32.wrapping_mul(-65_537));   // -131073

Second, a generic Wrapping<T> one-element tuple type is provided which implements the same basic operations as the wrapping_... methods, but allows you to use normal operators and then use the .0 field accessor to retrieve the value once you are finished.[3]
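
A minimal sketch of that approach, reusing the 65537 × 65537 example from above:

use std::num::Wrapping;

fn main() {
    // Wrapping<u32> gives the ordinary operators two's-complement wrapping semantics.
    let a = Wrapping(65_537u32);
    let b = a * a;                // wraps instead of panicking
    println!("{}", b.0);          // 131073 -- the inner value is retrieved with .0
}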

Scala

Works with: Java version 8

Math.addExact and its companions work for both 32-bit and 64-bit signed integers; Java (and hence Scala on the JVM) has no separate unsigned integer types.

import Math.{addExact => ++, multiplyExact => **, negateExact => ~~, subtractExact => --}

def requireOverflow(f: => Unit) =
  try {f; println("Undetected overflow")} catch{case e: Exception => /* caught */}

println("Testing overflow detection for 32-bit unsigned integers")
requireOverflow(~~(--(~~(2147483647), 1))) // -(-2147483647-1)
requireOverflow(++(2000000000, 2000000000)) // 2000000000 + 2000000000
requireOverflow(--(~~(2147483647), 2147483647)) // -2147483647 - 2147483647
requireOverflow(**(46341, 46341)) // 46341 * 46341
requireOverflow(**(--(~~(2147483647),1), -1)) // same as (-2147483647-1) / -1

println("Test - Expect Undetected overflow:")
requireOverflow(++(1,1)) // Undetected overflow

Seed7

Seed7 supports unlimited precision integers with the type bigInteger. The type integer is a 64-bit signed integer type. All computations with the type integer are checked for overflow.

$ include "seed7_05.s7i";

const proc: writeResult (ref func integer: expression) is func
  begin
    block
      writeln(expression);
    exception
      catch OVERFLOW_ERROR: writeln("OVERFLOW_ERROR");
    end block;
  end func;

const proc: main is func
  begin
    writeResult(-(-9223372036854775807-1));
    writeResult(5000000000000000000+5000000000000000000);
    writeResult(-9223372036854775807 - 9223372036854775807);
    writeResult(3037000500 * 3037000500);
    writeResult((-9223372036854775807-1) div -1);
  end func;
Output:
OVERFLOW_ERROR
OVERFLOW_ERROR
OVERFLOW_ERROR
OVERFLOW_ERROR
OVERFLOW_ERROR

Sidef

Translation of: Raku

Sidef has unlimited precision integers.

var (a, b, c) = (9223372036854775807, 5000000000000000000, 3037000500);
[-(-a - 1), b + b, -a - a, c * c, (-a - 1)/-1].each { say _ };
Output:
9223372036854775808
10000000000000000000
-18446744073709551614
9223372037000250000
9223372036854775808

Smalltalk

Smalltalk has unlimited precision integers, so overflow does not normally occur. However, wrap-around behavior (eg. when interfacing to external programs or document formats) can be emulated.

Works with: Smalltalk/X
2147483647 + 1. -> 2147483648
2147483647 add_32: 1 -> -2147483648
4294967295 + 1. -> 4294967296
16rFFFFFFFF add_32u: 1. -> 0
... similar stuff for sub32/mul32 ...

Swift

// By default, all overflows in Swift result in a runtime exception, which is always fatal
// However, you can opt-in to overflow behavior with the overflow operators and continue with wrong results

var int32:Int32
var int64:Int64
var uInt32:UInt32
var uInt64:UInt64

println("signed 32-bit int:")
int32 = -1 &* (-2147483647 - 1)
println(int32)
int32 = 2000000000 &+ 2000000000
println(int32)
int32 = -2147483647 &- 2147483647
println(int32)
int32 = 46341 &* 46341
println(int32)
int32 = (-2147483647-1) &/ -1
println(int32)
println()

println("signed 64-bit int:")
int64 = -1 &* (-9223372036854775807 - 1)
println(int64)
int64 = 5000000000000000000&+5000000000000000000
println(int64)
int64 = -9223372036854775807 &- 9223372036854775807
println(int64)
int64 = 3037000500 &* 3037000500
println(int64)
int64 = (-9223372036854775807-1) &/ -1
println(int64)
println()

println("unsigned 32-bit int:")
println("-4294967295 is caught as a compile time error")
uInt32 = 3000000000 &+ 3000000000
println(uInt32)
uInt32 = 2147483647 &- 4294967295
println(uInt32)
uInt32 = 65537 &* 65537
println(uInt32)
println()

println("unsigned 64-bit int:")
println("-18446744073709551615 is caught as a compile time error")
uInt64 = 10000000000000000000 &+ 10000000000000000000
println(uInt64)
uInt64 = 9223372036854775807 &- 18446744073709551615
println(uInt64)
uInt64 = 4294967296 &* 4294967296
println(uInt64)
Output:
signed 32-bit int:
-2147483648
-294967296
2
-2147479015
0

signed 64-bit int:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
0

unsigned 32-bit int:
-4294967295 is caught as a compile time error
1705032704
2147483648
131073

unsigned 64-bit int:
-18446744073709551615 is caught as a compile time error
1553255926290448384
9223372036854775808
0

Standard ML

PolyML

~(~9223372036854775807-1) ;
poly: : error: Overflow exception raised while converting ~9223372036854775807 to int
Int.maxInt ;
val it = SOME 4611686018427387903: int option
~(~4611686018427387903 - 1);
Exception- Overflow raised
 (~4611686018427387903 - 1) div ~1;
Exception- Overflow raised
2147483648 * 2147483648 ;
Exception- Overflow raised

Tcl

Tcl (since 8.5) uses logical signed integers throughout that are “large enough to hold the number you are using” (being internally anything from a single machine word up to a bignum). The only way to get 32-bit and 64-bit values in arithmetic is to apply a clamping function at appropriate points:

proc tcl::mathfunc::clamp32 {x} {
    expr {$x<0 ? -((-$x) & 0x7fffffff) : $x & 0x7fffffff}
}
puts [expr { clamp32(2000000000 + 2000000000) }]; # ==> 1852516352

Tcl 8.4 used a mix of 32-bit and 64-bit numbers on 32-bit platforms and 64-bit numbers only on 64-bit platforms. Users are recommended to upgrade to avoid this complexity.

True BASIC

PRINT "Signed 32-bit:"
PRINT -(-2147483647-1)            !-2147483648
PRINT 2000000000 + 2000000000     !4000000000
PRINT -2147483647 - 2147483647    !-4294967294
PRINT 46341 * 46341               !2147488281
!PRINT (-2147483647-1) / -1        !error: Illegal expression
WHEN ERROR IN
     PRINT maxnum * 2             !Run-time error "Overflow"
USE
     PRINT maxnum
     !returns the largest number that can be represented in your computer
END WHEN
END

VBScript

In VBScript, a declared variable has no type: "As Integer" or "As Long" cannot be specified. Numbers are held in a flexible Variant subtype; internally a value can be Integer (fixed, 16-bit), Long (fixed, 32-bit) or Double (floating point). So, is there an integer overflow in VBScript? Answer: no and yes.
- No, because 2147483647+1 is equal to 2147483648.
- Yes, because typename(2147483647)="Long" and typename(2147483648)="Double", so we have switched from a fixed binary integer to a double floating point value. Thanks to the mantissa precision there is no harm at first; the practical integer overflow comes around 10^15, where results stop being displayed as exact integers: (1E+15)+1 is shown as 1E+15.
A good way to test for integer overflow is to use the vartype() or typename() built-in functions.

'Binary Integer overflow - vbs
i=(-2147483647-1)/-1
wscript.echo i
i0=32767 	    '=32767      Integer (Fixed)  type=2
i1=2147483647	    '=2147483647 Long    (Fixed)  type=3
i2=-(-2147483647-1) '=2147483648 Double  (Float)  type=5  
wscript.echo Cstr(i0) & " : " & typename(i0) & " , " & vartype(i0) & vbcrlf _
           & Cstr(i1) & " : " & typename(i1) & " , " & vartype(i1) & vbcrlf _
           & Cstr(i2) & " : " & typename(i2) & " , " & vartype(i2)
ii=2147483648-2147483647
if vartype(ii)<>3 or vartype(ii)<>2 then wscript.echo "Integer overflow type=" & typename(ii)
i1=1000000000000000-1 '1E+15-1
i2=i1+1               '1E+15
wscript.echo Cstr(i1) & " , " & Cstr(i2)
Output:
2147483648
32767 : Integer , 2
2147483647 : Long , 3
2147483648 : Double , 5
Integer overflow type=Double
999999999999999 , 1E+15

Visual Basic

Works with: Visual Basic version VB6 Standard

Overflow is well handled, except for a strange bug in the evaluation of the constant expression -(-2147483647 - 1).

    'Binary Integer overflow - vb6 - 28/02/2017
    Dim i As Long '32-bit signed integer
    i = -(-2147483647 - 1)           '=-2147483648   ?! bug
    i = -Int(-2147483647 - 1)        '=-2147483648   ?! bug
    i = 0 - (-2147483647 - 1)        'Run-time error '6' : Overflow
    i = -2147483647: i = -(i - 1)    'Run-time error '6' : Overflow
    i = -(-2147483647 - 2)           'Run-time error '6' : Overflow
    i = 2147483647 + 1               'Run-time error '6' : Overflow
    i = 2000000000 + 2000000000      'Run-time error '6' : Overflow
    i = -2147483647 - 2147483647     'Run-time error '6' : Overflow
    i = 46341 * 46341                'Run-time error '6' : Overflow
    i = (-2147483647 - 1) / -1       'Run-time error '6' : Overflow

Error handling - method 1

    i=0
    On Error Resume Next
    i = 2147483647 + 1
    Debug.Print i                    'i=0

Error handling - method 2

    i=0
    On Error GoTo overflow
    i = 2147483647 + 1
    ...
overflow:
    Debug.Print "Error: " & Err.Description      '-> Error: Overflow

Error handling - method 3

    On Error GoTo 0
    i = 2147483647 + 1               'Run-time error '6' : Overflow
    Debug.Print i

Visual Basic .NET

All the constant expressions from the task are flagged as errors before any compilation or execution! The Visual Studio editor spots the overflow errors with the message:

	Constant expression not representable in type 'Integer/Long/UInteger'

To get an execution-time overflow we must use something other than constant expressions.

32-bit signed integer

        Dim i As Integer '32-bit signed integer
 Pre-compilation error:
   'Error: Constant expression not representable in type 'Integer'
 for:
        i = -(-2147483647 - 1)
        i = 0 - (-2147483647 - 1) 
        i = -(-2147483647L - 1)
        i = -(-2147483647 - 2)
        i = 2147483647 + 1 
        i = 2000000000 + 2000000000
        i = -2147483647 - 2147483647
        i = 46341 * 46341 
        i = (-2147483647 - 1) / -1
 Execution error:
   'An unhandled exception of type 'System.OverflowException' occurred
   'Additional information: Arithmetic operation resulted in an overflow.
 for:
        i = -Int(-2147483647 - 1)        
        i = -2147483647: i = -(i - 1)

32-bit unsigned integer
In Visual Basic .NET there are no specific UInteger constants as in C.

        Dim i As UInteger '32-bit unsigned integer
 Pre-compilation error:
   'Error: Constant expression not representable in type 'UInteger'
 for:
        i = -4294967295
        i = 3000000000 + 3000000000
        i = 2147483647 - 4294967295
        i = 65537 * 65537
 Execution error:
   'An unhandled exception of type 'System.OverflowException' occurred
   'Additional information: Arithmetic operation resulted in an overflow.
 for:
        i = 3000000000 : i = i + i

64-bit signed integer

        Dim i As Long '64-bit signed integer
 Pre-compilation error:
   'Error: Constant expression not representable in type 'Long'
 for:
        i = -(-9223372036854775807 - 1)                 
        i = 5000000000000000000 + 5000000000000000000
        i = -9223372036854775807 - 9223372036854775807
        i = 3037000500 * 3037000500
        i = (-9223372036854775807 - 1) / -1
 Execution error:
   'An unhandled exception of type 'System.OverflowException' occurred
   'Additional information: Arithmetic operation resulted in an overflow.
 for:
        i = -9223372036854775807 : i = -(i - 1)

64-bit unsigned integer
In Visual Basic .NET there are no specific ULong constants as in C, and Long constants are not large enough.

        Dim i As ULong '64-bit unsigned integer
 Pre-compilation error:
   'Error: Overflow
 for:
        i = -18446744073709551615
        i = 10000000000000000000 + 10000000000000000000
        i = 9223372036854775807 - 18446744073709551615
 Pre-compilation error:
   'Error: Constant expression not representable in type 'Long'
 for:
        i = 4294967296 * 4294967296
 Execution error:
   'An unhandled exception of type 'System.OverflowException' occurred
   'Additional information: Arithmetic operation resulted in an overflow.
 for:
        i = 4294967296 : i = i * i

How the exception is caught

        Dim i As Integer '32-bit signed integer
        Try
            i = -2147483647 : i = -(i - 1)
            Debug.Print(i)
        Catch ex As Exception
            Debug.Print("Exception raised : " & ex.Message)
        End Try
Output:
Arithmetic operation resulted in an overflow.

Wren

Wren only has a single numeric type, Num, instances of which are represented by 8 byte double precision floating point values.

This means that safe integer arithmetic is only possible up to (plus or minus) 2^53 - 1 (9,007,199,254,740,991) and, whilst there is no integer overflow as such, you are likely to get inaccurate results, without warning, if calculations exceed this limit. Worse still, this inaccuracy is difficult to observe in practice as the standard System.print method switches to scientific notation when printing integers with more than 14 digits.
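
A quick sketch of that silent loss of precision (not part of the program below):

System.print(9007199254740992 + 1 == 9007199254740992)  // true: 2^53 + 1 rounds back to 2^53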

However, within this overall framework, Wren also has an unsigned 32-bit integer sub-system when dealing with bitwise operations. All values are converted internally to such integers before the corresponding C bitwise operation is performed (Wren's VM is written in C) and can therefore overflow without warning. Fortunately, we can easily observe these effects by performing the operations required by the task and then (for example) right shifting them by 0 places.

var exprs = [-4294967295, 3000000000 + 3000000000, 2147483647 - 4294967295, 65537 * 65537]
System.print("Unsigned 32-bit:")
for (expr in exprs) System.print(expr >> 0)
Output:

Results agree with those for the corresponding C entry above.

Unsigned 32-bit:
1
1705032704
2147483648
131073

XPL0

XPL0 implements integers as signed values and ignores overflows. The original version used 16 bits. Later versions (used on Rosetta Code) use 32 bits. All the following expressions cause overflows, and the program continues with wrong results.

int N;
[N:= -(-2147483647-1);
IntOut(0, N);  CrLf(0);
N:= 2000000000 + 2000000000;
IntOut(0, N);  CrLf(0);
N:= -2147483647 - 2147483647;
IntOut(0, N);  CrLf(0);
N:= 46341 * 46341;
IntOut(0, N);  CrLf(0);
N:= (-2147483647-1)/-1;
IntOut(0, N);  CrLf(0);
]
Output:
-2147483648
-294967296
2
-2147479015
-2147483648

Z80 Assembly

Zilog Z80

The P flag represents overflow after an arithmetic operation, and bit parity after a bitwise or logical operation. Arithmetic operations will result in overflow if the 0x7F-0x80 boundary is crossed (or in the case of 16-bit math, the 0x7FFF-0x8000 boundary.) One quirk of the Z80 instruction set is that program counter relative jumps cannot be done based on overflow; only calls, returns, and jumps to fixed memory locations are allowed. In other words, the instructions JR PE, label and JR PO, label do not exist.

There are no assembler mnemonics for overflow specifically, so we must borrow the ones for parity. Your assembler may have overflow mnemonics but it's not a standard feature of the language.

  • PE parity even, overflow occurred.
  • PO parity odd, no overflow
ld a,&7F
add 1
jp pe,ErrorHandler ;pe = parity even, but in this case it represents overflow set

Like other CPUs, the Z80 has no way of knowing whether a value is intended to be signed or unsigned, and unless you explicitly have a jump, call, or return based on overflow after a calculation, the CPU will continue with the wrong result.

Game Boy

The Game Boy's CPU has no parity/overflow flag, and therefore all control flow structures related to it have been removed. Overflow can still be detected in theory, but it is an awkward process that involves combining (via exclusive or) the carry flag with a check of whether the subtraction changed the sign of the accumulator. Since the Game Boy's CPU cannot natively detect overflow, the CPU will continue with a wrong result.

zkl

zkl uses C's 64 bit integer math and the results are OS dependent. Integers are signed. GMP can be used for big ints. A zkl program does not recognize an integer overflow and the program continues with wrong results.

print("Signed 64-bit:\n");
println(-(-9223372036854775807-1));
println(5000000000000000000+5000000000000000000);
println(-9223372036854775807 - 9223372036854775807);
println(3037000500 * 3037000500);
println((-9223372036854775807-1) / -1);
Output:

Linux/BSD/clang

Signed 64-bit:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
uncatchable floating point exception thrown by OS

Windows XP

Signed 64-bit:
-9223372036854775808
-8446744073709551616
2
-9223372036709301616
-9223372036854775808