Memory allocation
 
=={{header|360 Assembly}}==
<langsyntaxhighlight lang="360 Assemblyassembly">
* Request to Get Storage Managed by "GETMAIN" Supervisor Call (SVC 4)
LA 1,PLIST Point Reg 1 to GETMAIN/FREEMAIN Parm List
DC A(STG@) Pointer to Address of Storage Area
DC X'0000' (Unconditional Request; Subpool 0)
</syntaxhighlight>
 
Example below shows de facto modern day use of HLASM techniques:
* The code is "baseless", meaning no base register has been established for the entry point of the module. This is referred to as Relative addressing. All modern day z/OS compilers generate baseless code, and so should the "raw assembler programmer". The IEABRCX system macro will conveniently convert all based branch instructions to their relative equivalents.
* The STORAGE macro is used (a PC call to the storage routine) instead of GETMAIN/FREEMAIN (SVC based; stabilised, i.e. no new functions are being added).
* One of the many functions of STORAGE over GETMAIN/FREEMAIN is illustrated: EXECUTABLE=NO. The code below will execute successfully if EXECUTABLE=YES (or defaulted to), or if running on a pre z14 machine. If on a z14 or newer machine and EXECUTABLE=NO then the module will ABEND S0C4-4. The code copies two instructions to the obtained storage and branches to it.
* The code shows the use of the system supplied linkage stack to save caller's registers (BAKR) and restore them on return (PR), as opposed to STM/LM of the caller's register contents.
* Finally, the code is REENTRANT, meaning it could be loaded in a system module directory (LINKLIST, LPA), and executed simultaneously by multiple callers. Though not a requirement for this type of sample code it is a best practice in assembler coding.
<syntaxhighlight lang="360 assembly">
STOREXNO AMODE 31
STOREXNO RMODE ANY
STOREXNO CSECT ,
SYSSTATE AMODE64=NO,ARCHLVL=3 gen z9+, z/OS 2.1+ bin code
IEABRCX DEFINE convert based to relative branches
BAKR 14,0 callers registers to linkage stack
LARL 12,CONSTANTS load address relative long
USING CONSTANTS,12 using for constants
LA 9,WALEN load memory length in Register 9
STORAGE OBTAIN,LENGTH=(9),EXECUTABLE=NO,LOC=ANY
LR 10,1 Reg1 holds address of mem area
USING DYNAREA,10 using for dynamic memory area
LA 13,SAVEA PC routine convention: ...
MVC SAVEA+4(4),=C'F1SA' ... format 1 savearea: L-stack
*
* copy instruction sequence SR Reg15,Reg15; Branch Reg14 to DATA1
* in obtained storage location, and branch to it
*
MVC DATA1(8),=X'1BFF07FE00000000' SR 15,15; BR 14
LA 7,DATA1
BASR 14,7 This will OC4-4 with EXECUTABLE=NO
STORAGE RELEASE,ADDR=(10),LENGTH=(9),EXECUTABLE=NO
PR , return to caller
CONSTANTS DS 0D constant section, aligned for LARL
DC C'SOMEDATA'
DC C'SOMEOTHERDATA'
LTORG , have assembler build literal pool
DYNAREA DSECT
SAVEA DS 18F
DATA1 DS 2F
DATA2 DS CL256 can receive any value
WALEN EQU *-DYNAREA length of obtained area
END STOREXNO end of module
</syntaxhighlight>
 
=={{header|6502 Assembly}}==
The first 256 bytes in the 6502's address space are collectively referred to as the "zero page" and can be used for any purpose. The next 256 bytes are reserved for the stack. Since this is assembly, there is no structured system for allocating/deallocating memory. It's all there for the programmer to use.
 
The "heap" can be considered the zero page, or any other section of RAM that the hardware allows for general access. Anything besides the zero page is platform-specific. Accessing the heap is as simple as storing values in a specified address.
<syntaxhighlight lang="6502asm">LDA #$FF ;load 255 into the accumulator
STA $00 ;store at zero page memory address $00
STA $0400 ;store at absolute memory address $0400</syntaxhighlight>
 
A byte can be stored to the stack with <code>PHA</code> and retrieved with <code>PLA</code>. Later revisions of the 6502 allowed the X and Y registers to be directly pushed/popped with <code>PHX</code>, <code>PHY</code>, <code>PLX</code>, and <code>PLY</code>. The original 6502 could only access the stack through the accumulator.
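
A brief illustration of preserving a value across intervening code by pushing and pulling it:
<syntaxhighlight lang="6502asm">LDA #$2A   ;load a value into the accumulator
PHA        ;push it onto the stack
LDA #$00   ;the accumulator gets clobbered here
PLA        ;pull the saved value back into the accumulator</syntaxhighlight>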
 
Shared memory between the CPU and connected hardware is accessed via memory-mapped ports. These appear as memory locations in the CPU's address space. However, they do not necessarily have the same properties as regular memory. Some are read-only, some are write-only, others have unusual behavior.
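
As a platform-specific illustration (the address is an assumption tied to one machine), the Commodore 64 exposes the VIC-II chip's current raster line at $D012:
<syntaxhighlight lang="6502asm">LDA $D012  ;read the VIC-II raster-line register on a Commodore 64; it behaves like memory but is hardware-backed</syntaxhighlight>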
 
=={{header|68000 Assembly}}==
The <code>LINK</code> and <code>UNLK</code> instructions are designed to create [[C]]-style stack frames. The operands of the <code>LINK</code> instruction are an address register (other than A7) and a displacement (must be even and either 0 or negative.)
<syntaxhighlight lang="68000devpac">MyFunction:
LINK A6,#-16 ;create a stack frame of 16 bytes. Now you can safely write to (SP+0) thru (SP+15).
 
;;;; your code goes here.
 
UNLK A6 ;free the stack frame
RTS</syntaxhighlight>
 
<code>LINK An, #-disp</code> is effectively equivalent to the following:
 
<syntaxhighlight lang="68000devpac">MOVE.L An,-(SP)
MOVEA.L SP,An
LEA (-disp,SP),SP</syntaxhighlight>
 
You can use negative offsets of <code>An</code> or positive offsets of <code>SP</code> to refer to the same memory region.
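
For example, with <code>LINK A6,#-16</code> in effect (SP = A6-16 after the instruction), the same word in the frame can be addressed either way; a small sketch:
<syntaxhighlight lang="68000devpac">MOVE.W #1234,-4(A6)  ;write a word into the frame via the frame pointer
MOVE.W 12(SP),D0     ;read the same word back via the stack pointer (SP+12 = A6-4)</syntaxhighlight>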
 
=={{header|Action!}}==
Action! does not have a stack for storing variables. When a variable is declared in a procedure or function, it is allocated globally once at the beginning of the program, even if the subroutine is never used. The Action! Tool Kit provides a module for dynamic allocation and deallocation of memory. The user must type the following command in the monitor after compilation and before running the program!<pre>SET EndProg=*</pre>
{{libheader|Action! Tool Kit}}
<syntaxhighlight lang="action!">CARD EndProg ;required for ALLOCATE.ACT
 
INCLUDE "D2:ALLOCATE.ACT" ;from the Action! Tool Kit. You must type 'SET EndProg=*' from the monitor after compiling, but before running this program!
 
PROC Main()
DEFINE SIZE="1000"
BYTE POINTER ptr
 
AllocInit(EndProg) ;required before any memory allocation
 
ptr=Alloc(SIZE) ;allocate memory of 1000 bytes
SetBlock(ptr,SIZE,$FF) ;fill the memory block with $FF
Free(ptr,SIZE) ;free allocated memory
RETURN</syntaxhighlight>
{{out}}
[https://gitlab.com/amarok8bit/action-rosetta-code/-/raw/master/images/Memory_allocation.png Screenshot from Atari 8-bit computer]
 
=={{header|Ada}}==
===Stack===
[[Stack]] in [[Ada]] is allocated by declaration of an object in some scope of a block or else a subprogram:
<langsyntaxhighlight lang="ada">declare
X : Integer; -- Allocated on the stack
begin
...
end; -- X is freed</syntaxhighlight>
===Heap===
[[Heap]] memory is allocated with the allocator '''new''' in a context where a pool-unspecific pointer is expected:
<langsyntaxhighlight lang="ada">declare
type Integer_Ptr is access Integer;
Ptr : Integer_Ptr := new Integer; -- Allocated in the heap
begin
...
end; -- Memory is freed because Integer_Ptr is finalized</syntaxhighlight>
The memory allocated by '''new''' is freed when:
* the type of the pointer leaves the scope;
* the memory pool is finalized
* an instance of Ada.Unchecked_Deallocation is explicitly called on the pointer
<langsyntaxhighlight lang="ada">declare
type Integer_Ptr is access Integer;
procedure Free is new Ada.Unchecked_Deallocation (Integer, Integer_Ptr);
Ptr : Integer_Ptr := new Integer;
begin
...
Free (Ptr); -- Explicit deallocation
...
end;</syntaxhighlight>
===User pool===
The allocator '''new''' also allocates memory in a user-defined storage pool when the pointer type is bound to the pool.
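A minimal sketch of the binding (assuming My_Pool is an object of a type derived from System.Storage_Pools.Root_Storage_Pool, declared elsewhere):
<syntaxhighlight lang="ada">declare
type Integer_Ptr is access Integer;
for Integer_Ptr'Storage_Pool use My_Pool; -- My_Pool is a user-defined storage pool (assumed to exist)
Ptr : Integer_Ptr := new Integer; -- Allocated from My_Pool rather than the standard heap
begin
...
end;</syntaxhighlight>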
===Implicit allocation===
Elaboration of compilation units may result in allocation of the objects declared in these units. For example:
<langsyntaxhighlight lang="ada">package P is
X : Integer; -- Allocated as a result of the package elaboration
end P;</syntaxhighlight>
The memory required by the object may be allocated statically or dynamically depending on the time of elaboration and its context. Objects declared in library-level packages are equivalent to what some languages call ''static'' objects.
 
=={{header|ALGOL 68}}==
{{works with|ELLA ALGOL 68|Any (with appropriate job cards) - tested with release [http://sourceforge.net/projects/algol68/files/algol68toc/algol68toc-1.8.8d/algol68toc-1.8-8d.fc9.i386.rpm/download 1.8-8d]}}
Given:
<langsyntaxhighlight lang="algol68">MODE MYSTRUCT = STRUCT(INT i, j, k, REAL r, COMPL c);</langsyntaxhighlight>
===Stack===
<langsyntaxhighlight lang="algol68">REF MYSTRUCT l = LOC MYSTRUCT;</langsyntaxhighlight>
===Heap===
<langsyntaxhighlight lang="algol68">REF MYSTRUCT h = HEAP MYSTRUCT;</langsyntaxhighlight>
===User pool===
<langsyntaxhighlight lang="algol68">[666]MYSTRUCT pool;
INT new pool := LWB pool-1;
REF MYSTRUCT p = pool[new pool +:=1];</syntaxhighlight>
===External memory===
Without extensions it is not possible to access external memory. However most implementations have such an extension!
===Implicit allocation===
<syntaxhighlight lang ="algol68">MYSTRUCT i;</langsyntaxhighlight>
 
=={{header|ALGOL W}}==
Algol W has garbage collected dynamic allocation for record structures.
<langsyntaxhighlight lang="algolw">begin
% define a record structure - instances must be created dynamically %
record Element ( integer atomicNumber; string(16) name );
X := Element( 2, "Helium" )
% the memory allocated will now be garbage collected - there is no explicit de-allocation %
end.</syntaxhighlight>
 
=={{header|Arturo}}==
 
In Arturo, memory allocation is handled totally and exclusively by the VM, which is responsible for allocating memory and, via the garbage collector, de-allocating it when it is no longer needed.
 
The only way a programmer can "allocate" more memory is through flexible structures, like Blocks, by adding more elements to one of the pre-allocated structures:
 
<syntaxhighlight lang="arturo">
myBlock: @[1 2 3]
'myBlock ++ [4 5 6]
</syntaxhighlight>
 
=={{header|AutoHotkey}}==
<langsyntaxhighlight AutoHotkeylang="autohotkey">VarSetCapacity(Var, 10240000) ; allocate 10 megabytes
VarSetCapacity(Var, 0) ; free it</langsyntaxhighlight>
 
=={{header|Axe}}==
Axe does not provide runtime support for a heap, so memory must be allocated statically.
<langsyntaxhighlight lang="axe">Buff(100)→Str1
.Str1 points to a 100-byte memory region allocated at compile time</langsyntaxhighlight>
 
The optional second parameter to Buff() allows you to specify the byte to be filled with (default is zero).
 
=={{header|BBC BASIC}}==
===Heap===
<syntaxhighlight lang="bbcbasic"> size% = 12345
DIM mem% size%-1
PRINT ; size% " bytes of heap allocated at " ; mem%</langsyntaxhighlight>
Memory allocated from the heap is only freed on program termination or CLEAR.
===Stack===
<langsyntaxhighlight lang="bbcbasic"> size% = 12345
PROCstack(size%)
END
DEF PROCstack(s%)
DIM mem% LOCAL s%-1
PRINT ; s% " bytes of stack allocated at " ; mem%
ENDPROC</syntaxhighlight>
Memory allocated from the stack is freed on exit from the FN or PROC.
 
=={{header|Bracmat}}==
 
The Bracmat functions <code>alc$</code> and <code>fre$</code> call the C-functions <code>malloc()</code> and <code>free()</code>, respectively. Writing and reading to and from allocated memory is done with the poke and peek functions <code>pok$</code> and <code>pee$</code>. These functions write and read in chunks of 1 (default), 2 or 4 bytes. Needless to say, all these low-level functions can easily create havoc and should be disabled in serious applications that don't need them. (There are compiler preprocessor macros to do that.)
<langsyntaxhighlight lang="bracmat">( alc$2000:?p {allocate 2000 bytes}
& pok$(!p,123456789,4) { poke a large value as a 4 byte integer }
& pok$(!p+4,0,4) { poke zeros in the next 4 bytes }
& out$(pee$(!p+1000,2)) { peek some uninitialized data }
& fre$!p { free the memory }
&);</syntaxhighlight>
{{out}}
<pre>21
</pre>

=={{header|C}}==
The functions <tt>malloc</tt>, <tt>calloc</tt> and <tt>realloc</tt> take memory from the heap. This memory ''should'' be released with <tt>free</tt> and it's suitable for sharing memory among threads.
 
<langsyntaxhighlight lang="c">#include <stdlib.h>
 
/* size of "members", in bytes */
free(ints); free(int2);
return 0;
}</syntaxhighlight>
 
Variables declared inside a block (a function, or a block inside a function) take room on the stack and survive only while the block is executing (and their scope is local).
 
<langsyntaxhighlight lang="c">int func()
{
int ints[NMEMB]; /* it resembles malloc ... */
 
return 0;
}</syntaxhighlight>
 
{{works with|gcc}}
The libc provided by [[gcc]] (and present on other systems too) has the <tt>alloca</tt> function, which allows one to ask for memory on the stack explicitly; the memory is deallocated when the function that asked for it ends (it is, in practice, the same behaviour as for automatic variables). The usage is the same as for functions like <tt>malloc</tt>.
 
<langsyntaxhighlight lang="c">#include <alloca.h>
int *funcA()
{
return ints; /* BUT THIS IS WRONG! It is not like malloc: the memory
does not "survive"! */
}</syntaxhighlight>
 
Variables declared outside any block or function, or inside a function but with the <tt>static</tt> storage class, live as long as the program lives, and their memory is allocated statically (e.g. through a .bss section).
 
<langsyntaxhighlight lang="c">/* this is global */
int integers[NMEMB]; /* should be initialized with 0s */
 
{
integers[0] = a;
}</syntaxhighlight>
 
=={{header|C sharp|C#}}==
C# is a managed language, so memory allocation is usually not done manually. However, in unsafe code it is possible to declare and operate on pointers.
<langsyntaxhighlight lang="csharp">using System;
using System.Runtime.InteropServices;
 
static extern int HeapSize(int hHeap, int flags, void* block);
 
}</syntaxhighlight>
 
=={{header|C++}}==
While the C allocation functions are also available in C++, their use is discouraged. Instead, C++ provides <code>new</code> and <code>delete</code> for memory allocation and deallocation. Those functions don't just allocate memory, but also initialize objects. Also, deallocation is coupled with destruction.
<langsyntaxhighlight lang="cpp">#include <string>
 
int main()
p2 = new std::string[10]; // allocate an array of 10 strings, default-initialized
delete[] p2; // deallocate it
}</syntaxhighlight>
Note that memory allocated with C allocation functions (<code>malloc</code>, <code>calloc</code>, <code>realloc</code>) must always be deallocated with <code>free</code>, memory allocated with non-array <code>new</code> must always be deallocated with <code>delete</code>, and memory allocated with array <code>new</code> must always be deallocated with <code>delete[]</code>. Memory allocated with new also cannot be resized with <code>realloc</code>.
 
 
Besides the new expressions shown above, pure memory allocation/deallocation without object initialization/destruction can also be done through <code>operator new</code>:
<langsyntaxhighlight lang="cpp">int main()
{
void* memory = operator new(20); // allocate 20 bytes of memory
operator delete(memory); // deallocate it
}</syntaxhighlight>
 
There's also a placement form of new, which allows one to construct objects at an arbitrary address (provided it is correctly aligned, and there's enough memory):
<langsyntaxhighlight lang="cpp">#include <new>
 
int main()
int* p = new(&data) int(3); // construct an int at the beginning of data
new(p+1) int(5); // construct another int directly following
}</syntaxhighlight>
Indeed, code like <code>int* p = new int(3);</code> is roughly (but not exactly) equivalent to the following sequence:
<langsyntaxhighlight lang="cpp">void* memory_for_p = operator new(sizeof(int));
int* p = new(memory_for_p) int(3);</langsyntaxhighlight>
 
Normally, new throws an exception if the allocation fails. There's a non-throwing variant which returns a null pointer instead:
<langsyntaxhighlight lang="cpp">#include <new>
 
int* p = new(std::nothrow) int(3);</syntaxhighlight>
Note that the nothrow variant does <em>not</em> prevent exceptions from being thrown by the constructor of an object created with new. It only prevents exceptions due to memory allocation failure.
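
A small illustration of that distinction (a sketch using a constructor that always throws):
<syntaxhighlight lang="cpp">#include <new>
#include <stdexcept>
#include <iostream>

struct Thrower
{
    Thrower() { throw std::runtime_error("constructor failed"); }
};

int main()
{
    try
    {
        Thrower* p = new(std::nothrow) Thrower; // the allocation itself will not throw ...
        (void)p;
    }
    catch (const std::runtime_error& e)
    {
        std::cout << e.what() << '\n';          // ... but the constructor's exception still propagates
    }
}</syntaxhighlight>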
 
It is also possible to implement user-defined variations of operator new. One possibility is to define class-based operator new/operator delete:
<langsyntaxhighlight lang="cpp">#include <cstddef>
#include <cstdlib>
#include <new>
int* p2 = new int; // uses default operator new
delete p2; // uses default operator delete
}</syntaxhighlight>
 
Another possibility is to define new arguments for placement new syntax, e.g.
<langsyntaxhighlight lang="cpp">class arena { /* ... */ };
 
void* operator new(std::size_t size, arena& a)
arena whatever(/* ... */);
 
int* p = new(whatever) int(3); // uses operator new from above to allocate from the arena whatever</syntaxhighlight>
Note that there is ''no'' placement delete syntax; the placement operator delete is invoked by the compiler only in case the constructor of the newed object throws. Therefore for placement newed object deletion the two steps must be done explicitly:
<langsyntaxhighlight lang="cpp">class MyClass { /*...*/ };
 
int main()
p->~MyClass(); // explicitly destruct *p
operator delete(p, whatever); // explicitly deallocate the memory
}</syntaxhighlight>
 
=={{header|COBOL}}==
 
Manual memory allocation is primarily done using <code>ALLOCATE</code> and <code>FREE</code>. They are used with data items with a <code>BASED</code> clause, which indicates that the data will be allocated at runtime. A <code>BASED</code> data item cannot be used before it has been allocated or after it has been freed. Example usage:
<langsyntaxhighlight lang="cobol"> PROGRAM-ID. memory-allocation.
 
DATA DIVISION.
 
GOBACK
.</syntaxhighlight>
 
{{out}}
=={{header|Common Lisp}}==
This behavior can be observed with the (not good for actual use) code:
 
<langsyntaxhighlight lang="lisp">(defun show-allocation ()
(let ((a (cons 1 2))
(b (cons 1 2)))
(declare (dynamic-extent b))
(list a b)))</syntaxhighlight>
 
<syntaxhighlight lang ="lisp">(show-allocation)</langsyntaxhighlight>
produces
<pre>
</pre>
 
=={{header|D}}==
<langsyntaxhighlight lang="d">// D is a system language so its memory management is refined.
// D supports thread-local memory on default, global memory, memory
// allocated on the stack, the C heap, or the D heap managed by a
 
GC.free(ptr4); // This is optional.
}</syntaxhighlight>
{{out}}
<pre>Test destructor
Test destructor
</pre>
=={{header|Delphi}}==
 
See [[#Pascal]].
=={{header|E}}==
E is a memory-safe language and does not generally work with explicit deallocation. As in Python and Java, you can create arrays of specific data types which will, by any decent implementation, be compactly represented.
 
<langsyntaxhighlight lang="e">? <elib:tables.makeFlexList>.fromType(<type:java.lang.Byte>, 128)
# value: [].diverge()</langsyntaxhighlight>
The above creates an array with an initial capacity of 128 bytes (1 kilobit) of storage (though it does not have any elements). (The Java type name is left-over from E's Java-scripting history and will eventually be deprecated in favor of a more appropriate name.) The array will be deallocated when there are no references to it.
 
=={{header|Factor}}==
 
To just allocate some bytes, <code>malloc</code> is used. This memory has to be <code>free</code>d again of course.
<langsyntaxhighlight lang="factor">2000 malloc (...do stuff..) free</langsyntaxhighlight>
 
To increase safety and reduce memory leaks, there are specialized words available to help you manage your memory. If you use <code>&free</code> together with <code>with-destructors</code> your memory gets freed even in the presence of exceptions.
<langsyntaxhighlight lang="factor">STRUCT: foo { a int } { b foo* } ;
 
[
foo malloc-struct &free ! gets freed at end of the current with-destructors scope
! do stuff
] with-destructors</syntaxhighlight>
Memory allocated with any of these malloc variants resides in the (non-garbage-collected) heap.
 
=={{header|Forth}}==
===Dictionary===
All Forth implementations have a stack-like memory space called the ''dictionary''. It is used both for code definitions and data structures.
<langsyntaxhighlight lang="forth">unused . \ memory available for use in dictionary
here . \ current dictionary memory pointer
: mem, ( addr len -- ) here over allot swap move ;
create struct 0 , 10 , char A c, ," string"
unused .
here .</syntaxhighlight>
 
Dictionary space is meant for static code definitions and supporting data structures, so it is not as easy to deallocate from it. For ad-hoc allocations without intervening definitions, you may give a negative value to ALLOT to reclaim the space. You may also lay down a named MARKER to reclaim the space used by all subsequent definitions.
<langsyntaxhighlight lang="forth">marker foo
: temp ... ;
create dummy 300 allot
-150 allot \ trim the size of dummy by 150 bytes
foo \ removes foo, temp, and dummy from the list of definitions</syntaxhighlight>
 
===Heap===
Most Forth implementations also give access to a larger random-access memory heap.
<langsyntaxhighlight lang="forth">4096 allocate throw ( addr )
dup 4096 erase
( addr ) free throw</syntaxhighlight>
 
=={{header|Fortran}}==
<langsyntaxhighlight lang="fortran">program allocation_test
implicit none
real, dimension(:), allocatable :: vector
deallocate(matrix) ! Deallocate a matrix
deallocate(ptr) ! Deallocate a pointer
end program allocation_test</syntaxhighlight>
 
 
=={{header|FreeBASIC}}==
{{trans|BBC BASIC}}
===Heap===
<syntaxhighlight lang="freebasic">Dim As Integer size = 12345
Dim As Integer mem = size-1
Print size; " bytes of heap allocated at " ; mem
Clear (mem, , 10)
Print size; " bytes of heap allocated at " ; mem</syntaxhighlight>
Memory allocated from the heap is only freed on program termination or CLEAR
 
===Stack===
<syntaxhighlight lang="freebasic">Dim As Integer size = 12345
Dim As Integer mem = size-1
 
Sub Stack(s As Integer)
Dim As Integer mem = s-1
Print s; " bytes of stack allocated at " ; mem
End Sub
 
Stack(size)
Print size; " bytes of stack allocated at " ; mem</syntaxhighlight>
Memory allocated from the stack is freed on exit from the Sub/Function
 
 
=={{header|Go}}==
All memory in Go is transparently managed by the runtime and the language specification does not even contain the words stack or heap. Behind the scenes it has a single shared heap and a stack for each goroutine. Stacks for goroutines are initially 4K, but grow dynamically as needed. Function parameters and variables declared within a function typically live on the stack, but the runtime will freely move them to the heap as needed. For example, in
<langsyntaxhighlight lang="go">func inc(n int) {
x := n + 1
println(x)
}</syntaxhighlight>
Parameter n and variable x will exist on the stack.
<langsyntaxhighlight lang="go">func inc(n int) *int {
x := n + 1
return &x
}</syntaxhighlight>
In the above, however, storage for x will be allocated on the heap because this storage is still referenced after inc returns.
 
In general, taking the address of an object allocates it on the heap. A consequence is that, given
<langsyntaxhighlight lang="go">type s struct{a, b int}</langsyntaxhighlight>
the following two expressions are equivalent.
<syntaxhighlight lang ="go">&s{}</langsyntaxhighlight>
<syntaxhighlight lang ="go">new(s)</langsyntaxhighlight>
Yes, new allocates on the heap.
 
 
Examples,
<langsyntaxhighlight lang="go">make([]int, 3)
make(map[int]int)
make(chan int)</syntaxhighlight>
 
=={{header|Haskell}}==
You usually only need to do low-level memory management in Haskell when interfacing with code written in other languages (particularly C). At its most basic level, Haskell provides malloc()/free()-like operations in the IO monad.
 
<langsyntaxhighlight Haskelllang="haskell">import Foreign
 
bytealloc :: IO ()
allocaBytes 100 $ \a -> -- Allocate 100 bytes; automatically
-- freed when closure finishes
poke (a::Ptr Word32) 0</syntaxhighlight>
 
Slightly more sophisticated functions are available for automatically determining the amount of memory necessary to store a value of a specific type. The type is determined by type inference (in this example I use explicit manual type annotations, because poke is polymorphic).
 
<langsyntaxhighlight Haskelllang="haskell">import Foreign
 
typedalloc :: IO ()
poke w (100 :: Word32)
free w
alloca $ \a -> poke a (100 :: Word32)</syntaxhighlight>
 
By the typing rules of Haskell, w must have the type 'Ptr Word32' (pointer to 32-bit word), which is how malloc knows how much memory to allocate.
=={{header|Icon}} and {{header|Unicon}}==
Icon and Unicon provide fully automatic memory allocation. Memory is allocated when each
structure is created and reclaimed after it is no longer referenced. For example:
<langsyntaxhighlight lang="unicon">
t := table() # The table's memory is allocated
#... do things with t
t := &null # The table's memory can be reclaimed
</syntaxhighlight>
For structures whose only reference is held in a local variable, that reference is removed
when the local context is exited (i.e. when procedures return) and the storage is then
=={{header|J}}==
Example of explicit [http://www.jsoftware.com/help/user/memory_management.htm memory allocation]:
 
<langsyntaxhighlight Jlang="j"> require 'dll'
mema 1000
57139856</syntaxhighlight>
 
Here, 57139856 is the result of mema -- it refers to 1000 bytes of memory.
To free it:
 
<syntaxhighlight lang J="j">memf 57139856</langsyntaxhighlight>
 
=={{header|Java}}==
You don't get much control over memory in Java, but here's what you can do:
<langsyntaxhighlight lang="java">//All of these objects will be deallocated automatically once the program leaves
//their scope and there are no more pointers to the objects
Object foo = new Object(); //Allocate an Object and a reference to it
int[] fooArray = new int[size]; //Allocate all spaces in an array and a reference to it
int x = 0; //Allocate an integer and set its value to 0</syntaxhighlight>
There is no real destructor in Java as there is in C++, but there is the <tt>finalize</tt> method. From the [http://java.sun.com/javase/6/docs/api/java/lang/Object.html#finalize() Java 6 JavaDocs]:
 
''The general contract of finalize is that it is invoked if and when the JavaTM virtual machine has determined that there is no longer any means by which this object can be accessed by any thread that has not yet died, except as a result of an action taken by the finalization of some other object or class which is ready to be finalized. The finalize method may take any action, including making this object available again to other threads; the usual purpose of finalize, however, is to perform cleanup actions before the object is irrevocably discarded. For example, the finalize method for an object that represents an input/output connection might perform explicit I/O transactions to break the connection before the object is permanently discarded.''
<langsyntaxhighlight lang="java">public class Blah{
//...other methods/data members...
protected void finalize() throws Throwable{
}
//...other methods/data members...
}</syntaxhighlight>
 
Note, though, that there is '''''no guarantee''''' that the <tt>finalize</tt> method will ever be called, as this trivial program demonstrates:
<langsyntaxhighlight lang="java">public class NoFinalize {
public static final void main(String[] params) {
NoFinalize nf = new NoFinalize();
System.out.println("finalized");
}
}</syntaxhighlight>
 
When run using Sun's JVM implementation, the above simply outputs "created". Therefore, you cannot rely on <tt>finalize</tt> for cleanup.
 
=={{header|Julia}}==
Julia has automatic memory management: objects are freed automatically by the garbage collector. Because arrays such as vectors and matrices, unlike lists, can have fixed-size allocations in memory, these can be allocated implicitly by a call to a function returning an array, or explicitly by assigning the memory to a variable:
<syntaxhighlight lang="julia">
matrix = Array{Float64,2}(undef, 100, 100)  # uninitialized 100×100 matrix (undef is required in Julia 1.0+)
matrix[31,42] = pi
</syntaxhighlight>
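
Implicit allocation via a function returning an array, and releasing it by dropping the reference, looks like this:
<syntaxhighlight lang="julia">
v = zeros(100)   # memory for a 100-element Float64 vector is allocated implicitly
v = nothing      # dropping the reference lets the garbage collector reclaim it
</syntaxhighlight>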
 
=={{header|Kotlin}}==
In the version of Kotlin which targets the JVM, the latter takes care of memory allocation when objects are created, together with the automatic de-allocation, via its garbage collector, of heap objects which are no longer used.
 
Consequently, manual intervention in the allocation or deallocation of objects is not possible though, as in Java (and subject to the problems mentioned in the Java entry), it is possible to override the finalize() method to provide custom clean-up preceding garbage collection.
 
Variables of primitive types (Byte, Short, Int, Long, Float, Double, Char and Boolean) hold their values directly and variables of other types contain a reference to where the corresponding object is allocated on the heap.
 
All types (including primitive types) are either non-nullable (no suffix) or nullable (use a suffix of '?'). Only the latter can be assigned a value of 'null'. Values of nullable primitive types are 'boxed' i.e. stored as heap objects and variables of those types therefore contain a reference to the heap object rather than the value itself.
 
In addition, Kotlin has a Nothing type which has no instances and is a sub-type of every other type. There is also a nullable Nothing? type whose only value is 'null' and so, technically, this is the type of 'null' itself.
 
Some examples may help to make all this clear. In the interests of clarity, types have been specified for all variables though, in practice, this would be unnecessary in those cases where the variable's type can be inferred from the value assigned to it when it is declared. 'val' variables are read-only but 'var' variables are read/write.
<syntaxhighlight lang="scala">// version 1.1.2
 
class MyClass(val myInt: Int) {
// in theory this method should be called automatically prior to GC
protected fun finalize() {
println("MyClass being finalized...")
}
}
 
fun myFun() {
val mc: MyClass = MyClass(2) // new non-nullable MyClass object allocated on the heap
println(mc.myInt)
var mc2: MyClass? = MyClass(3) // new nullable MyClass object allocated on the heap
println(mc2?.myInt)
mc2 = null // allowed as mc2 is nullable
println(mc2?.myInt)
// 'mc' and 'mc2' both become eligible for garbage collection here as no longer used
}
 
fun main(args: Array<String>) {
myFun()
Thread.sleep(3000) // allow time for GC to execute
val i: Int = 4 // new non-nullable Int allocated on stack
println(i)
var j: Int? = 5 // new nullable Int allocated on heap
println(j)
j = null // allowed as 'j' is nullable
println(j)
// 'j' becomes eligible for garbage collection here as no longer used
}</syntaxhighlight>
When this is run, notice that finalize() is not called - at least on my machine (running Ubuntu 14.04, Oracle JDK 8):
{{out}}
<pre>
2
3
null
4
5
null
</pre>
 
=={{header|Lingo}}==
Lingo does not allow direct memory allocation and has no direct access to memory types like heap, stack etc. But indirectly the ByteArray data type can be used to allocate memory that then later can be filled with custom data:
<langsyntaxhighlight lang="lingo">-- Create a ByteArray of 100 Kb (pre-filled with 0 bytes)
ba = byteArray(102400)
 
-- Lingo uses garbage-collection, so allocated memory is released when no more references exist.
-- For the above variable ba, this can be achieved by calling:
ba = VOID</syntaxhighlight>
 
=={{header|M2000 Interpreter}}==
Buffer is an object which holds a block of memory on the heap. There are two types, the default type and the Code type. In a Code-type buffer we can execute code, but at execution time we can't write to that block. So to get results from machine code we have to use a default-type buffer (for data). Buffers are also used to read/write binary files.
 
[http://www.rosettacode.org/wiki/Machine_code#M2000_Interpreter See example for Machine Code]
 
If we use a wrong offset, the buffer returns an error and is locked (it can't be used until erased).
 
A variable which holds a buffer is a pointer to the buffer. The buffer is erased when no pointer points to it any more. We can use the pointer as a return value, or push it to the stack of values. We can use buffers as members of groups; a copy of a group just copies the pointer. We can use buffers as closures in lambda functions, and a copy of a lambda which has a buffer as a closure copies the pointer too (so two or more lambda functions may use the same memory allocation to read/write).
 
Buffers are declared with a type that acts as the element size in bytes; in the example here the element type is Byte. We can use Byte (1 byte), Integer (2 bytes), Long (4 bytes), Double (8 bytes), or a structure (we can define structures, with pointers to strings also, as the BSTR type). So if we use Integer as the element type, then Mem1(1)-Mem1(0) returns 2 (2 bytes).
The Byte, Integer and Long data are unsigned.
 
We can redimension buffers, but we can't change the element size. Structures can have unions so that the same data can be viewed in different ways.
 
We can use Eval$(Mem1) to get a copy of a buffer as a string.
The Return statement is used to place data at many offsets in one statement.
<syntaxhighlight lang="m2000 interpreter">
Module Checkit {
Buffer Clear Mem1 as Byte*12345
Print Len(Mem1)
Hex Mem1(0) ' print in Hex address of first element
Print Mem1(Len(Mem1)-1)-Mem1(0)+1=12345
Buffer Mem1 as Byte*20000 ' redim block
Print Mem1(Len(Mem1)-1)-Mem1(0)+1=20000
Try {
Print Mem1(20000) ' it is an error
}
Print Error$ ' return message: Buffer Locked, wrong use of pointer
}
Checkit
</syntaxhighlight>
 
=={{header|Maple}}==
 
You can allocate a large block of memory by creating an Array
<langsyntaxhighlight Maplelang="maple">a := Array( 1 .. 10^6, datatype = integer[1] ):
</syntaxhighlight>
</lang>
Now you can use the storage in the Array assigned to <tt>a</tt> as you see fit. To ensure that <tt>a</tt> is garbage collected at the earliest opportunity, unassign the name <tt>a</tt>:
<syntaxhighlight lang Maple="maple">unassign( a ):</langsyntaxhighlight>
or
<langsyntaxhighlight Maplelang="maple">a := 'a':</langsyntaxhighlight>
 
=={{header|Mathematica}}/{{header|Wolfram Language}}==
Mathematica allocates memory and garbage collects it.
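A minimal sketch of that behaviour: allocation happens implicitly when a large array is created, and the storage becomes collectable once the symbol is cleared.
<syntaxhighlight lang="mathematica">a = ConstantArray[0., 10^6]; (* memory for a million reals is allocated implicitly *)
MemoryInUse[]                (* inspect the kernel's current memory usage *)
Clear[a]                     (* the storage becomes eligible for garbage collection *)</syntaxhighlight>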
 
=={{header|MATLAB}} / {{header|Octave}}==
Matlab and Octave allocate memory when a new variable or a local variable is generated. Arrays are automatically extended as needed. However, extending the array might require re-allocating the whole array. Therefore, pre-allocating memory can provide a significant performance improvement.
 
<syntaxhighlight lang="matlab">
A = zeros(1000); % allocates memory for a 1000x1000 double precision matrix.
clear A; % deallocates memory
Line 802 ⟶ 1,024:
b(k) = 5*k*k-3*k+2;
end
</syntaxhighlight>
 
=={{header|Maxima}}==
<langsyntaxhighlight lang="maxima">/* Maxima allocates memory dynamically and uses a garbage collector.
Here is how to check available memory */
 
97138 pages available
11399 pages in heap but not gc'd + pages needed for gc marking
131072 maximum pages</syntaxhighlight>
 
=={{header|Nanoquery}}==
Even though the Nanoquery interpreter is implemented in Java, it is possible to use the native library to directly allocate and access memory located off the heap. In this example, 26 bytes are allocated, filled with the uppercase alphabet, then output to the console. Even though this code is running in the JVM, the allocated memory must still be freed at the end or a leak would occur.
<syntaxhighlight lang="nanoquery">import native
 
// allocate 26 bytes
ptr = native.allocate(26)
 
// store the uppercase alphabet
for i in range(0, 25)
native.poke(ptr + i, ord("A") + i)
end
 
// output the allocated memory
for i in range(0, 25)
print chr(native.peek(ptr + i))
end
 
// free the allocated memory
native.free(ptr)</syntaxhighlight>
{{out}}
<pre>ABCDEFGHIJKLMNOPQRSTUVWXYZ</pre>
 
=={{header|Nim}}==
Usually in Nim we have automated memory allocation and garbage collection, but we can still manually get a block of memory:
<langsyntaxhighlight lang="nim"># Allocate thread local heap memory
var a = alloc(1000)
dealloc(a)
Line 834 ⟶ 1,078:
# Allocate memory block on shared heap
var b = allocShared(1000)
deallocShared(b)
 
# Allocate and Dellocate a single int on the thread local heap
var p = create(int, sizeof(int)) # allocate memory
# create zeroes memory; createU does not.
echo p[] # 0
p[] = 123 # assign a value
echo p[] # 123
discard resize(p, 0) # deallocate it
# p is now invalid. Let's set it to nil
p = nil # set pointer to nil
echo isNil(p) # true
</syntaxhighlight>
 
=={{header|Objeck}}==
In Objeck space for local variables is allocated when a method/function is called and deallocated when a method/function exits. Objects and arrays are allocated from the heap and their memory is managed by the memory manager. The memory manager attempts to collect memory when an allocation threshold is met or exceeded. The memory manager uses a mark-and-sweep [[Garbage collection|garbage collection]] algorithm.
<langsyntaxhighlight lang="objeck">foo := Object->New(); // allocates an object on the heap
foo_array := Int->New[size]; // allocates an integer array on the heap
x := 0; // allocates an integer on the stack</syntaxhighlight>
 
=={{header|Odin}}==
 
<syntaxhighlight lang="odin">package main
 
import "core:mem"
 
main :: proc() {
ptr := mem.alloc(1000) // Allocate heap memory
mem.free(ptr)
}</syntaxhighlight>
 
=={{header|Oforth}}==
 
You can allocate a block of memory (on the heap) by creating a MemBuffer object, which is an array of bytes.
 
=={{header|OxygenBasic}}==
<syntaxhighlight lang="text">
'ALLOCATING MEMORY FROM DIFFERENT MEMORY SOURCES
 
sys p
 
 
static byte b[0x1000] 'global memory
p=@b
 
 
function f()
local byte b[0x1000] 'stack memory in a procedure
p=@b
end function
 
 
p=getmemory 0x1000 'heap memory
...
freememory p 'to disallocate
 
 
sub rsp,0x1000 'stack memory direct
p=rsp
...
rsp=p 'to disallocate
 
 
'Named Memory shared between processes is
'also available using the Windows API (kernel32.dll)
'see MSDN:
'CreateFileMapping
'OpenFileMapping
'MapViewOfFile
'UnmapViewOfFile
'CloseHandle</syntaxhighlight>
 
=={{header|PARI/GP}}==
All accessible memory in GP is on PARI's heap. Its size can be changed:
<syntaxhighlight lang ="parigp">allocatemem(100<<20)</langsyntaxhighlight>
to allocate 100 MB.
 
=={{header|Pascal}}==
=== Heap ===
 
Dynamically created objects (dynamic arrays, class instantiations, ...) are allocated on the heap.<br>
Their creation and destruction is done explicitly.
<langsyntaxhighlight lang="pascal">type
TByteArray = array of byte;
var
Line 878 ⟶ 1,182:
...
setLength(A,0);
end;</syntaxhighlight>
 
<langsyntaxhighlight lang="pascal">type
Tcl = class
dummy: longint;
Line 890 ⟶ 1,194:
...
c1.destroy;
end;</syntaxhighlight>
 
=={{header|Perl}}==
In general, memory allocation and de-allocation isn't something you can or should be worrying about much in Perl.
Perl manages its own heap quite well, and it is exceedingly rare that anything goes wrong. As long as the OS has memory to give,
a perl process can use as much as it needs.
 
Memory allocated to lexical (<tt>my</tt>) variables cannot be reclaimed or reused even if they go out of scope
(it is reserved in case the variables come back into scope). You can 'hint' that memory allocated to global variables
can be reused (within your program) by using <tt>undef</tt> and <tt>delete</tt>, but you really have little control over
when/if that happens.
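
A small sketch of such hints applied to global (package) variables:
<syntaxhighlight lang="perl">our @big = (0) x 1_000_000;             # storage grows as the array is populated
undef @big;                             # hint that the array's storage may be released or reused

our %cache = (blob => 'x' x 1_000_000);
delete $cache{blob};                    # removes the entry; Perl may reuse the space internally</syntaxhighlight>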
 
=={{header|Phix}}==
{{libheader|Phix/basics}}
In normal use, memory management is fully automatic in Phix. However you may need to explicitly allocate memory when interfacing to C etc.
By default (for compatibility with legacy code) cleanup must be performed manually, but there is an optional flag on both the memory
allocation routines (allocate and allocate_string) to automate that for you, or you could even roll your own via delete_routine().
 
<syntaxhighlight lang="phix">atom addr = allocate(512) -- limit is 1,610,612,728 bytes on 32-bit systems
...
free(addr)
atom addr2 = allocate(512,1) -- automatically freed when addr2 drops out of scope or re-assigned
atom addr3 = allocate_string("a string",1) -- automatically freed when addr3 drops out of scope or re-assigned</syntaxhighlight>
 
Behind the scenes, the Phix stack is actually managed as a linked list of virtual stack blocks allocated on the heap, and as such it
would be utterly pointless and quite probably extremely tricky to mess with.
=={{header|PL/I}}==
* On the stack: this happens with variables declared as automatic in procedures and begin blocks. The variables are automatically freed at exit of the procedure or block. If an ''init'' clause is specified, the variable will also be (re-)initialized upon entry to the procedure or block. If no ''init'' clause is specified, the variable will most likely contain garbage. An example:
 
<langsyntaxhighlight lang="pli">
mainproc: proc options(main) reorder;
 
Line 929 ⟶ 1,245:
call subproc();
call subproc();
end mainproc;</syntaxhighlight>
 
Result:
* On the heap: if a variable is declared with the ''ctl'' (or in full, ''controlled'') attribute, it will be allocated from the heap and multiple generations of the same variable can exist, although only the last one allocated can be accessed directly. An example:
 
<langsyntaxhighlight lang="pli">
mainproc: proc options(main) reorder;
dcl ctlvar char ctl;
Line 954 ⟶ 1,270:
put skip data(ctlvar);
free ctlvar;
end mainproc;</syntaxhighlight>
 
Result:
* On the heap: if a variable is declared with the ''based'' attribute, it will be allocated from the heap and multiple generations of the same variable can exist. This type of variable is often used in linked lists. An example:
 
<langsyntaxhighlight lang="pli">
mainproc: proc options(main) reorder;
dcl list_ptr ptr init (sysnull());
Line 990 ⟶ 1,306:
put skip list(list_data);
end;
end mainproc;</syntaxhighlight>
Result:
Line 999 ⟶ 1,315:
 
=={{header|PureBasic}}==
<langsyntaxhighlight PureBasiclang="purebasic">*buffer=AllocateMemory(20)
 
*newBuffer = ReAllocateMemory(*buffer, 2000) ;increase size of buffer
Line 1,013 ⟶ 1,329:
; allocate an image for use with image functions
CreateImage(1,size,size)
FreeImage(1)</syntaxhighlight>
 
Memory for strings is handled automatically from a separate memory heap. The automatic handling of string memory includes garbage collection and the freeing of string memory.
=={{header|Python}}==
 
'''Example'''
<langsyntaxhighlight lang="python">>>> from array import array
>>> argslist = [('l', []), ('c', 'hello world'), ('u', u'hello \u2641'),
('l', [1, 2, 3, 4, 5]), ('d', [1.0, 2.0, 3.14])]
Line 1,119 ⟶ 1,435:
array('l', [1, 2, 3, 4, 5])
array('d', [1.0, 2.0, 3.1400000000000001])
>>></syntaxhighlight>
 
=={{header|R}}==
<langsyntaxhighlight lang="rsplus">x=numeric(10) # allocate a numeric vector of size 10 to x
rm(x) # remove x
 
x=vector("list",10) #allocate a list of length 10
x=vector("numeric",10) #same as x=numeric(10), space allocated to list vector above now freed
rm(x) # remove x</syntaxhighlight>
 
=={{header|Racket}}==
Racket doesn't allow direct memory allocation, although it supports some related operations:
<langsyntaxhighlight Racketlang="racket">#lang racket
(collect-garbage) ; This function forces a garbage collection
 
Line 1,141 ⟶ 1,457:
; If amount of bytes can't be reached, <stop-custodian> is shutdown
 
(custodian-limit-memory <custodian> <amount>) ; Register a limit on memory for the <custodian></syntaxhighlight>
 
Custodians manage threads, ports, sockets, etc.
A bit of information about them is available [http://docs.racket-lang.org/reference/eval-model.html?q=memory&q=custodian&q=computer&q=pointer#%28part._custodian-model%29 here]
 
=={{header|Raku}}==
(formerly Perl 6)
 
Like Perl 5, Raku is intended to run largely stackless, so all allocations are really on the heap, including activation records. Allocations are managed automatically. It is easy enough to allocate a memory buffer of a particular size however, if you really need it:
<syntaxhighlight lang="raku" line>my $buffer = Buf.new(0 xx 1024);</syntaxhighlight>
 
=={{header|Retro}}==
Retro's memory is directly accessible via '''@fetch''' and '''!store'''. This is used for all functions and data structures. A variable, '''Heap''', points to the next free address. '''allot''' can be used to allocate or free memory. The amount of memory varies by the runtime, and can be accessed via the '''EOM''' constant.
 
<syntaxhighlight lang Retro="retro">( display total memory available )
@memory putn
 
~~~
( display unused memory )
EOM n:put
@memory here - putn
~~~
 
( display next free addressunused )memory
here putn
 
~~~
( allocate 1000 cells )
EOM here - n:put
1000 allot
~~~
 
(display next free 500 cells )address
 
-500 allot</lang>
~~~
here n:put
~~~
 
allocate 1000 cells
 
~~~
#1000 allot
~~~
 
free 500 cells
 
~~~
#-500 allot
~~~</syntaxhighlight>
 
=={{header|REXX}}==
There is no explicit memory allocation in the REXX language, &nbsp; variables are allocated as they are assigned &nbsp; (or re-assigned).
<br>Most REXX interpreters will obtain a chunk (block) of free storage, and then allocate storage out of that pool if possible, if not, obtain more storage.
<br>One particular implementation of REXX has predefined and limited amount of free storage.
<langsyntaxhighlight lang="rexx">Axtec_god.3='Quetzalcoatl ("feathered serpent"), god of learning, civilization, regeneration, wind and storms'</langsyntaxhighlight>
There is no explicit way to de-allocate memory, but there is a DROP statement that "un-defines" a REXX variable and its memory is then free to be used for other variables, provided that free memory isn't too fragmented.
<langsyntaxhighlight lang="rexx">drop xyz NamesRoster j k m caves names. Axtec_god. Hopi Hopi
 
/* it's not considered an error to DROP a variable that isn't defined.*/</syntaxhighlight>
Any variables (that are non-exposed) which are defined (allocated) in a procedure/subroutine/function will be un-allocated at the termination (completion, when it RETURNS or EXITs) of the procedure/subroutine/function.
<br><br>
 
=={{header|Ring}}==
<langsyntaxhighlight lang="ring">
cVar = " " # create variable contains string of 5 bytes
cVar = NULL # destroy the 5 bytes string !
</syntaxhighlight>
 
=={{header|Ruby}}==
Class#allocate explicitly allocates memory for a new object, inside the [[garbage collection|garbage-collected heap]]. Class#allocate never calls #initialize.
 
<langsyntaxhighlight lang="ruby">class Thingamajig
def initialize
fail 'not yet implemented'
end
end
t = Thingamajig.allocate</syntaxhighlight>
 
=={{header|Rust}}==
The method shown below will be deprecated soon in favor of the `std::alloc::Alloc` trait. Follow the progress on github's issue tracker:
 
https://github.com/rust-lang/rust/issues/32838
 
<syntaxhighlight lang="rust">// we have to use `unsafe` here because
// we will be dereferencing a raw pointer
unsafe {
use std::alloc::{Layout, alloc, dealloc};
// define a layout of a block of memory
let int_layout = Layout::new::<i32>();
 
// memory is allocated here
let ptr = alloc(int_layout);
 
// let us point to some data
*ptr = 123;
assert_eq!(*ptr, 123);
 
// deallocate `ptr` with associated layout `int_layout`
dealloc(ptr, int_layout);
}</syntaxhighlight>
 
=={{header|Scala}}==
The same as Java applies to Scala: the JVM takes care of memory allocation by means of its memory manager, so in Scala it's not a programmer concern.
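For illustration, allocation is simply object creation, and unreachable objects are later reclaimed by the JVM garbage collector:
<syntaxhighlight lang="scala">object MemoryAllocation extends App {
  var buffer: Array[Byte] = new Array[Byte](1024) // 1024 bytes allocated on the JVM heap
  buffer = null                                   // the array becomes eligible for garbage collection
}</syntaxhighlight>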
 
=={{header|Sinclair ZX81 BASIC}}==
Ordinary variables spring into existence when they are first assigned to; arrays need to be <code>DIM</code>ensioned first. There is no easy way to remove a <i>particular</i> named variable, but a <code>CLEAR</code> statement removes <i>all</i> user-defined variables. This can be useful under two sets of circumstances: (1) if the program has, say, a setting-up section whose variables will not be needed again, so that their storage space can be reclaimed, or (2) if you are editing a larger program—variables persist even after the program has finished running, so <code>CLEAR</code>ing them frees up some memory and may make viewing and editing your program more comfortable.
 
If you want to allocate an arbitrary block of bytes that the interpreter will not interfere with, there are two ways to do it. The first is by altering the system variable <tt>RAMTOP</tt>: this is a 16-bit value stored in little-endian format at addresses 16388 and 16389, and tells BASIC the highest byte it can use. On a 1k ZX81, <tt>RAMTOP</tt> equals 17408 (until you change it); to find it on your system, use
<syntaxhighlight lang="basic">PRINT PEEK 16388+256*16389</syntaxhighlight>
You can then use <code>POKE</code> statements to reset <tt>RAMTOP</tt> to a lower value, thus reserving the space above it.
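
For example, to reserve 100 bytes at the top of memory on a 1k machine (where <tt>RAMTOP</tt> starts at 17408), lower it to 17308 (=67*256+156) and then execute <code>NEW</code> so the interpreter adopts the new value. These are normally typed as direct commands, since <code>NEW</code> clears the program area (a sketch assuming the default 1k memory map):
<syntaxhighlight lang="basic">POKE 16388,156
POKE 16389,67
NEW</syntaxhighlight>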
 
The second approach, suitable especially if you want to reserve a few bytes for a small machine code subroutine, is to hide the storage space you want inside a comment. When you enter a <code>REM</code> statement, the interpreter sets aside sufficient bytes to store the text of your comment <i>and then doesn't care what you put in them</i>: if you know the address where the comment is stored, therefore, you can <code>POKE</code> whatever values you like into that space. If the comment is the first line in the program, the <code>REM</code> itself will be at address 16513 and the comment text will begin at 16514 and take up one byte per character. An example, with a trivial machine code routine (it adds 3 and 4):
<syntaxhighlight lang="basic">10 REM ABCDEFGH
20 LET P$="3E03010400814FC9"
30 LET ADDR=16514
40 POKE ADDR,CODE P$*16+CODE P$(2)-476
50 LET P$=P$(3 TO )
60 LET ADDR=ADDR+1
70 IF P$<>"" THEN GOTO 40
80 CLEAR
90 PRINT USR 16514</syntaxhighlight>
The <tt>ABCDEFGH</tt> is arbitrary: any other eight characters would work just as well. The string in line <tt>20</tt> is the hex representation of the Z80 code, which could be disassembled as:
<syntaxhighlight lang="z80asm">3e 03 ld a, 3
01 04 00 ld bc,0004
81 add a, c
4f ld c, a
c9 ret</syntaxhighlight>
Line <tt>40</tt> reads a two-digit hex number and pokes its value into memory. <code>USR</code> ("user sub routine"), in line <tt>90</tt>, is a function that takes the address of a machine language routine, calls it, and returns the contents of the <tt>BC</tt> register pair when the routine terminates. Under normal circumstances, once you were satisfied the machine code program was working correctly you would remove lines <tt>20</tt> to <tt>80</tt>, leaving just the machine code subroutine and the call to it. Note that if you list the program once you have run it, the first line will look something like this:
<syntaxhighlight lang="basic">10 REM Y▀▀:▖ ▟?TAN</syntaxhighlight>
Unfortunately, you cannot type that in directly: not all 256 possible values are accessible from the keyboard, so there is no point trying to learn to enter machine code in that form.
 
=={{header|Smalltalk}}==
Smalltalk does automatic memory management and garbage collection.
So in normal use, all you do is allocate by instantiating objects (eg. <tt>ByteArray new:size</tt>).
 
However, to support passing data in and out to external functions (typically: C-functions or data for a GPU or similar),
a number of additional APIs are present (which may differ slightly among Smalltalk dialects):
 
{{works with|Smalltalk/X}}
To allocate a non-movable, non garbage collected block of memory, eg. to hand out a block of memory on which an external C-function keeps a reference. The memory must be eventually explicitly freed by the programmer:
<syntaxhighlight lang="smalltalk">handle := ExternalBytes new:size
...
handle free</syntaxhighlight>
To allocate a non-movable block of memory, which is garbage collected as soon as the reference is no longer reachable by Smalltalk (to hand out a block of memory to an external function which does NOT keep a reference on it):
<syntaxhighlight lang="smalltalk">handle := ExternalBytes unprotectedNew:size
...
handle := nil "or no longer reachable"
...
memory will be freed by the garbage collector eventually</syntaxhighlight>
 
Of course, both are to be used with great care, as memory leaks are possible. Thus, it is only used by core parts of the system, eg. for async I/O buffers, shared memory, mapped I/O devices etc. Normal programs would not use them.
 
=={{header|SNOBOL4}}==
In SNOBOL4, simple data values are just created and assigned to variables. Here, three separate strings are concatenated and stored as a newly allocated string variable:
 
<langsyntaxhighlight lang="snobol4"> newstring = "This is creating and saving" " a new single string " "starting with three separate strings."</langsyntaxhighlight>
 
Empty arrays are created by using the built-in function (the size is determined when it is created):
 
<langsyntaxhighlight lang="snobol4"> newarray = array(100)</langsyntaxhighlight>
 
Empty tables are similarly created using the built-in function (new entries can be simply added at any later time by just storing them into the table):
 
<langsyntaxhighlight lang="snobol4"> newtable = table()</langsyntaxhighlight>
 
User-defined datatypes (usually, multi-field structures) are defined using the data() built-in function (which creates the constructor and field access functions):
 
<langsyntaxhighlight lang="snobol4"> data("listnode(next,prev,datafield1,datafield2)")</langsyntaxhighlight>
 
Then you allocate a new example of the defined listnode data item by using the constructor the data() function created:
 
<langsyntaxhighlight lang="snobol4"> newnode = listnode(,,"string data value1",17)</langsyntaxhighlight>
 
The example thus created can be updated using the field access functions also created by the data() function:
 
<langsyntaxhighlight lang="snobol4"> datafield1(newnode) = "updated data value 1"</langsyntaxhighlight>
 
You don't need to explicitly de-allocate memory. When you leave a function which has declared local variables, data stored in those local variables is released upon return. You can also just store a null string into a variable, releasing the value that was stored in that variable previously:
 
<langsyntaxhighlight lang="snobol4"> newnode = </langsyntaxhighlight>
 
SNOBOL4 automatically garbage collects released data items on an as-needed basis, and moves allocated items to consolidate all released space (so memory fragmentation is never a problem). You can explicitly garbage collect if you really want to:
 
<syntaxhighlight lang ="snobol4"> collect()</langsyntaxhighlight>
 
=={{header|Tcl}}==
 
More commonly, a package written in [[C]] will be used to manage the memory on behalf of Tcl, with explicit memory management. Here is an example of such a package:
<langsyntaxhighlight lang="c">#include <tcl.h>
 
/* A data structure used to enforce data safety */
/* Register the package */
return Tcl_PkgProvide(interp, "memalloc", "1.0");
}</syntaxhighlight>
 
The package would then be used like this:
<langsyntaxhighlight lang="tcl">package require memalloc
 
set block [memalloc 1000]
Line 1,439 ⟶ 1,847:
someOtherCommand [memaddr $block]
puts "$block\[42\] is now [memget $block 42]"
memfree $block</syntaxhighlight>
 
Other methods of performing things like memory allocation are also possible.
* Using <code>string repeat</code> or <code>lrepeat</code> to make a group of entities that work like a block of memory (despite not being); these need marshalling code to bridge to foreign function interfaces (see the sketch after this list).
* Using [[:Category:SWIG|SWIG]] or [[:Category:critcl|critcl]] to write a bridge to a standard [[C]] allocator.
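
A minimal sketch of the <code>lrepeat</code> approach, treating a Tcl list as a pseudo-block of cells:
<syntaxhighlight lang="tcl"># create a 1000-element list of zeros to stand in for a block of "memory"
set block [lrepeat 1000 0]
# "write" to cell 42 and read it back
lset block 42 255
puts [lindex $block 42]</syntaxhighlight>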
 
=={{header|Wren}}==
In Wren memory for any object is allocated automatically by the VM and de-allocated automatically when there are no longer any references to it by the garbage collector.
 
The only type of memory allocation where the programmer has any control is the initial number of elements of a List, though even here additional elements can be allocated by simply adding them.
<syntaxhighlight lang="wren">// create a list with 10 elements all initialized to zero
var squares = List.filled(10, 0)
// give them different values and print them
for (i in 0..9) squares[i] = i * i
System.print(squares)
// add another element to the list dynamically and print it again
squares.add(10 * 10)
System.print(squares)
squares = null // make eligible for GC </syntaxhighlight>
 
{{out}}
<pre>
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
</pre>
 
=={{header|X86 Assembly}}==
This is a bare-bones implementation of a heap memory allocator for x86_64 Linux. We allocate memory a page at a time using brk and divide it up into chunks of the requested size using a linked-list-like block struct. Not optimized for speed or efficiency.
 
<syntaxhighlight lang="x86asm">
; linux x86_64
 
struc block
free: resb 1 ; whether or not this block is free
size: resb 2 ; size of the chunk of memory
next: resb 8 ; the next chunk after this one
mem:
endstruc
 
section .data
hStart: dq 0 ; the beginning of our heap space
break: dq 0 ; the current end of our heap space
 
 
section .text
 
Allocate:
 
push rdi ; save the size argument
 
cmp qword [break], 0 ; if breakpoint is zero this
je firstAlloc ; is the first call to allocate
 
mov rdi, qword [hStart] ; else address of heap start
 
findBlock: ; look for a suitable block of memory
 
cmp byte [rdi + free], 2
je newBlock ; end of heap reached, create new block
 
cmp byte [rdi + free], 0
je skipBlock ; this block taken
 
; this block is free, make
; sure it's big enough
mov bx, word [rdi + size]
mov rcx, qword [rsp] ; compare against our size arg
cmp cx, bx
jg skipBlock ; keep looking if not big enough
 
mov byte [rdi + free], 0 ; else mark as taken
add rdi, mem
add rsp, 8 ; discard size arg, we didn't need it
mov rax, rdi ; return pointer to this block
ret
 
skipBlock:
mov rsi, qword [rdi + next] ; load next
mov rdi, rsi ; block address
jmp findBlock
 
newBlock:
mov rax, rdi
add rdi, 1024
cmp rdi, qword [break]
jl initAndAllocate
push rax
mov rdi, qword [break] ; if we are getting low on
add rdi, 4096 ; heap space, we ask OS for
mov rax, 12 ; more memory with brk syscall
syscall
cmp rax, qword [break] ; if breakpoint has not increased,
je allocFail ; then memory could not be allocated
mov qword [break], rax
pop rax
jmp initAndAllocate
 
firstAlloc: ; extra work has to be done on first
mov rax, 12 ; call to this subroutine
mov rdi, 0
syscall
mov qword [hStart], rax ; init heap start
add rax, 4096
mov rdi, rax
mov rax, 12 ; get heap memory with sys brk
syscall
cmp rax, qword [hStart]
je allocFail
mov qword [break], rax
mov rax, qword [hStart]
 
initAndAllocate:
mov byte [rax + free], 0 ; mark block free
pop rdi ; pop size arg off stack
mov word [rax + size], di ; mark it's size
lea rsi, [rax + mem + rdi]
mov byte [rsi + free], 2 ; mark heap end block
mov qword [rax + next], rsi ; mark next block
add rax, mem ; return pointer to block's memory space
ret
 
allocFail: ; exit(1) when allocation fails
mov rax, 60
mov rdi, 1
syscall
ret
 
; free this block so it can be
; reused in a subsequent call to allocate
Release:
sub rdi, mem
mov byte [rdi + free], 1
ret
 
</syntaxhighlight>
 
=={{header|XPL0}}==
<langsyntaxhighlight XPL0lang="xpl0">int Array(10); \allocates 10 integers (40 bytes) of heap space
Array2:= Reserve(10*4); \another way to allocate 10 integers of heap space
Array3:= MAlloc(4); \allocate 4 paragraphs (64 bytes) of conventional memory
...
Release(Array3); \release this memory so it can be used elsewhere</syntaxhighlight>
 
Heap space (for 32-bit XPL0) resides in extended memory (on an
Line 1,475 ⟶ 2,014:
 
Extension libraries/DLLs (written in C) can add this capability (the FFI library does this).
<langsyntaxhighlight lang="zkl">Data(123); // this bit bucket expects hold 123 bytes
List.createLong(123); // this list expects to hold 123 elements</syntaxhighlight>
 
=={{header|ZX Spectrum Basic}}==
Line 1,482 ⟶ 2,021:
On the ZX Spectrum, memory above the ramtop is protected, so the usual way to reserve memory is to move the ramtop to a lower address using a clear command. The following example assumes that a 48k ZX Spectrum is being used:
 
<langsyntaxhighlight lang="zxbasic">10 REM This code is written for a 48k spectrum
20 CLEAR 65535 - 8192: REM reserve 8192 bytes of memory
30 CLEAR 65535: REM unreserve the memory, moving the ramtop back to the top of the ram</syntaxhighlight>
 
{{omit from|ACL2}}