Read entire file

From Rosetta Code
You are encouraged to solve this task according to the task description, using any language you may know.

Load the entire contents of some text file as a single string variable.

If applicable, discuss: encoding selection, the possibility of memory-mapping.

Of course, one should avoid reading an entire file at once if the file is large and the task can be accomplished incrementally instead (in which case check File IO); this is for those cases where having the entire file is actually what is wanted.


8th[edit]

The "slurp" word reads the entire contents of the file into memory, as-is, and gives a "buffer". The ">s" word converts that to a string, again as-is:

"somefile.txt" f:slurp >s



Ada[edit]

Works with: Ada 2005

Using Ada.Directories to first ask for the file size and then Ada.Direct_IO to read the whole file in one chunk:

with Ada.Directories,
     Ada.Direct_IO,
     Ada.Text_IO;

procedure Whole_File is
   File_Name : String  := "whole_file.adb";
   File_Size : Natural := Natural (Ada.Directories.Size (File_Name));

   subtype File_String    is String (1 .. File_Size);
   package File_String_IO is new Ada.Direct_IO (File_String);

   File     : File_String_IO.File_Type;
   Contents : File_String;
begin
   File_String_IO.Open (File, Mode => File_String_IO.In_File,
                              Name => File_Name);
   File_String_IO.Read (File, Item => Contents);
   File_String_IO.Close (File);

   Ada.Text_IO.Put (Contents);
end Whole_File;

This kind of solution is limited a bit by the fact that the GNAT implementation of Ada.Direct_IO first allocates a copy of the read object on the stack inside Ada.Direct_IO.Read. On Linux you can use the command "limit stacksize 1024M" to increase the available stack for your processes to 1Gb, which gives your program more freedom to use the stack for allocating objects.


Works with: POSIX
Works with: Ada 95

Mapping the whole file into the address space of your process and then overlaying the file with a String object.

with Ada.Text_IO,
     POSIX.IO,
     POSIX.Memory_Mapping,
     System.Storage_Elements;

procedure Read_Entire_File is
   use POSIX, POSIX.IO, POSIX.Memory_Mapping;
   use System.Storage_Elements;

   Text_File    : File_Descriptor;
   Text_Size    : System.Storage_Elements.Storage_Offset;
   Text_Address : System.Address;
begin
   Text_File := Open (Name => "read_entire_file.adb",
                      Mode => Read_Only);
   Text_Size := Storage_Offset (File_Size (Text_File));

   Text_Address := Map_Memory (Length     => Text_Size,
                               Protection => Allow_Read,
                               Mapping    => Map_Shared,
                               File       => Text_File,
                               Offset     => 0);

   declare
      Text : String (1 .. Natural (Text_Size));
      for Text'Address use Text_Address;
   begin
      Ada.Text_IO.Put (Text);
   end;

   Unmap_Memory (First  => Text_Address,
                 Length => Text_Size);
   Close (File => Text_File);
end Read_Entire_File;

Character encodings and their handling are not really specified in Ada. What Ada does specify is three different character types (and corresponding string types):

  • Character - containing the set of ISO-8859-1 characters.
  • Wide_Character - containing the set of ISO-10646 BMP characters.
  • Wide_Wide_Character - containing the full set of ISO-10646 characters.

The GNU Ada compiler (GNAT) seems to read in text files as bytes, completely ignoring any operating system information on character encoding. You can use -gnatW8 in Ada 2005 mode to use UTF-8 characters in identifier names.


AutoHotkey[edit]

fileread, varname, C:\filename.txt ; adding "MsgBox %varname%" (no quotes) on the next line will display the file contents.

This script works fine as-is provided C:\filename.txt exists.


AutoIt[edit]

$fileOpen = FileOpen("file.txt")
$fileRead = FileRead($fileOpen)
FileClose($fileOpen)

ALGOL 68[edit]

In official ALGOL 68 a file is composed of pages, lines and characters, however for ALGOL 68 Genie and ELLA ALGOL 68RS this concept is not supported as they adopt the Unix concept of files being "flat", and hence contain only characters.

A book can contain "new page" and "new line" characters; these belong to no particular character set and hence are system independent. The character set is selected by a call to make conv, e.g. make conv(tape, ebcdic conv); - c.f. Character_codes for more details.

In official/standard ALGOL 68 only:

MODE BOOK = FLEX[0]FLEX[0]FLEX[0]CHAR; ¢ pages of lines of characters ¢
BOOK book;
FILE book file;
INT errno = open(book file, "book.txt", stand in channel);
get(book file, book)

Once a "book" has been read into a book array it can still be associated with a virtual file and again be accessed with standard file routines (such as readf, printf, putf, getf, new line etc.). This means data can be directly manipulated from an array cached in "core" using transput (stdio) routines.

In official/standard ALGOL 68 only:

FILE cached book file;
associate(cached book file, book)


AppleScript[edit]

set pathToTextFile to ((path to desktop folder as string) & "testfile.txt")
-- short way: open, read and close in one step
set fileContent to read file pathToTextFile
-- long way: open a file reference, read content and close access
set fileRef to open for access pathToTextFile
set fileContent to read fileRef
close access fileRef


AWK[edit]

#!/usr/bin/awk -f
## Set an empty record separator, so one "record" (i.e. the whole
## file) is read into $0. (Note: an empty RS is "paragraph mode", so
## this reads the whole file only if it contains no blank lines.)
BEGIN { RS = "" }
## print record number and content of the record
{ print "=== line "NR,":",$0 }
## no further record is read; NR and $0 are unchanged
END { print "=== line "NR,":",$0 }
Works with: gawk
#!/usr/bin/awk -f
@include "readfile"
str = readfile("file.txt")
print str


BASIC[edit]

Whether or not various encodings are supported is implementation-specific.

Works with: QBasic

OPEN "file.txt" FOR BINARY AS 1
f$ = SPACE$(LOF(1))
GET #1, 1, f$
CLOSE 1


BBC BASIC[edit]

In BBC BASIC for Windows and Brandy BASIC the maximum string length is 65535 characters.

      file% = OPENIN("input.txt")
      strvar$ = ""
      WHILE NOT EOF#file%
        strvar$ += CHR$(BGET#file%)
      ENDWHILE
      CLOSE #file%

API version:

      file% = OPENIN("input.txt")
strvar$ = STRING$(EXT#file%, " ")
SYS "ReadFile", @hfile%(file%), !^strvar$, EXT#file%, ^temp%, 0
CLOSE #file%




Brainf***[edit]

While the language certainly doesn't support strings in the traditional sense, relaxing the definition to mean any contiguous sequence of null-terminated bytes permits a reasonable facsimile. This cat program eschews the simpler byte-by-byte approach (,[.,]) to demonstrate the technique.

>     Keep cell 0 at 0 as a sentinel value
,[>,] Read into successive cells until EOF
<[<] Go all the way back to the beginning
>[.>] Print successive cells while nonzero
$ curl -Ls | bf ">,[>,]<[<]>[.>]"
<!DOCTYPE html>
Tape: [0, 60, 33, 68, 79, 67, 84, 89, 80, 69, 32, 104, 116, 109, 108, 62, 10 ... 60, 47, 104, 116, 109, 108, 62, 10, 0]


include :file file_name


C[edit]

It is not possible to specify encodings: the file is read as binary data. (On some systems the b flag is ignored and there is no difference between "r" and "rb"; on others it changes how newlines are treated, but this should not affect fread.)

#include <stdio.h>
#include <stdlib.h>

int main()
{
   char *buffer;
   FILE *fh = fopen("readentirefile.c", "rb");
   if ( fh != NULL )
   {
      fseek(fh, 0L, SEEK_END);
      long s = ftell(fh);
      rewind(fh);
      buffer = malloc(s);
      if ( buffer != NULL )
      {
         fread(buffer, s, 1, fh);
         // we can now close the file
         fclose(fh); fh = NULL;

         // do something with the buffer, e.g.
         fwrite(buffer, s, 1, stdout);

         free(buffer);
      }
      if (fh != NULL) fclose(fh);
   }
   return EXIT_SUCCESS;
}
Works with: POSIX

We can memory-map the file.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
int main()
{
   char *buffer;
   struct stat s;

   int fd = open("readentirefile_mm.c", O_RDONLY);
   if (fd < 0) return EXIT_FAILURE;

   fstat(fd, &s);
   /* PROT_READ disallows writing to buffer: writing to it will segfault */
   buffer = mmap(0, s.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
   if ( buffer != MAP_FAILED )
   {
      /* do something with the buffer, e.g. */
      fwrite(buffer, s.st_size, 1, stdout);
      munmap(buffer, s.st_size);
   }
   close(fd);
   return EXIT_SUCCESS;
}


C++[edit]

#include <iostream>
#include <fstream>
#include <string>
#include <iterator>

int main( )
{
    std::ifstream infile("sample.txt");
    if (infile)
    {
        // construct string from iterator range; the extra parentheses
        // around the first argument avoid the "most vexing parse"
        std::string fileData((std::istreambuf_iterator<char>(infile)),
                             std::istreambuf_iterator<char>());
        std::cout << "File has " << fileData.size() << " chars\n";
        // no need to close the ifstream manually; it releases the file
        // when it goes out of scope
        return 0;
    }
    std::cout << "file not found!\n";
    return 1;
}


C#[edit]

Works with: C sharp version 3.0

using System.IO;

class Program
{
    static void Main(string[] args)
    {
        var fileContents = File.ReadAllText("c:\\autoexec.bat");
    }
}


Clojure[edit]

The core function slurp does the trick; you can specify an encoding as an optional second argument:

(slurp "myfile.txt")
(slurp "my-utf8-file.txt" "UTF-8")


CMake[edit]

Sets a variable named string.

file(READ /etc/passwd string)

This works with text files, but fails with binary files that contain NUL characters. CMake truncates the string at the first NUL character, and there is no way to detect this truncation.

The only way to read binary files is to use the HEX keyword to convert the entire file to a hexadecimal string.

file(READ /etc/pwd.db string HEX)

Common Lisp[edit]

(defun file-string (path)
  (with-open-file (stream path)
    (let ((data (make-string (file-length stream))))
      (read-sequence data stream)
      data)))


D[edit]

import std.file: read, readText;

void main() {
    // To read a whole file into a dynamic array of unsigned bytes:
    auto data = cast(ubyte[])read("unixdict.txt");

    // To read a whole file into a validated UTF-8 string:
    string txt = readText("unixdict.txt");
}


Delphi[edit]

Using TStringList:

program ReadAll;

{$APPTYPE CONSOLE}

uses Classes;

var
  i: Integer;
  lList: TStringList;
begin
  lList := TStringList.Create;
  try
    lList.LoadFromFile('C:\autoexec.bat');
    // Write everything at once
    Writeln(lList.Text);
    // Write one line at a time
    for i := 0 to lList.Count - 1 do
      Writeln(lList[i]);
  finally
    lList.Free;
  end;
end.

Works with: Delphi 2010 and above

program ReadAll;

{$APPTYPE CONSOLE}

uses SysUtils, IOUtils;

begin
  // with default encoding:
  Writeln(TFile.ReadAllText('C:\autoexec.bat'));
  // with encoding specified:
  Writeln(TFile.ReadAllText('C:\autoexec.bat', TEncoding.ASCII));
end.

Déjà Vu[edit]

To get a string from a file, you need to explicitly decode the binary blob that is read. Currently only UTF-8 is supported by vu.

local :filecontents !decode!utf-8 !read "file.txt"



The file is assumed to be in the default encoding.


Elixir[edit]

Two solutions in the FileReader module. returns a tuple: {:ok, body} if successful or {:error, reason} if unsuccessful. Errors can be turned into error strings via Erlang's :file.format_error function.

defmodule FileReader do
  # Read in the file
  def read(path) do
    case do
      {:ok, body} ->
        IO.inspect body
      {:error, reason} ->
        :file.format_error(reason)
    end
  end

  # Open the file path, then read in the file
  def bit_read(path) do
    case do
      {:ok, file} ->
        # :all can be replaced with :line, or with a positive integer
        # to specify the number of characters to read., :all)
        |> IO.inspect
      {:error, reason} ->
        :file.format_error(reason)
    end
  end
end

Emacs Lisp[edit]

insert-file-contents does all Emacs' usual character coding, magic file names, decompression, format decoding, etc. (insert-file-contents-literally can avoid that if unwanted.)

(setq my-variable (with-temp-buffer
                    (insert-file-contents "foo.txt")
                    (buffer-string)))

(If an existing buffer is visiting the file, perhaps yet unsaved, it may be helpful to take its contents instead of re-reading the file. find-buffer-visiting can locate such a buffer.)


Erlang[edit]

{ok, B} = file:read_file("myfile.txt").

This reads the entire file into a binary object.


Euphoria[edit]

Euphoria cannot natively handle multibyte character encodings; the openEuphoria team has been working on adding support.

function load_file(sequence filename)
integer fn,c
sequence data
fn = open(filename,"r") -- "r" for text files, "rb" for binary files
if (fn = -1) then return {} end if -- failed to open the file
data = {} -- init to empty sequence
c = getc(fn) -- prime the char buffer
while (c != -1) do -- while not EOF
data &= c -- append each character
c = getc(fn) -- next char
end while
return data
end function


F#[edit]

// read entire file into variable using default system encoding or with specified encoding
open System.IO
let data = File.ReadAllText(filename)
let utf8 = File.ReadAllText(filename, System.Text.Encoding.UTF8)


Factor[edit]

USING: io.encodings.ascii io.encodings.binary io.files ;
! to read entire file as binary
"foo.txt" binary file-contents
! to read entire file as lines of text
"foo.txt" ascii file-lines


Fantom[edit]

Provide the filename to read from as a command-line parameter.

class ReadString
{
  public static Void main (Str[] args)
  {
    Str contents := File(args[0].toUri).readAllStr
    echo ("contents: $contents")
  }
}


Forth[edit]

Works with: GNU Forth
s" foo.txt" slurp-file   ( str len )


Fortran[edit]

Suppose F is an integer holding an I/O unit number (say 10), and STUFF is a CHARACTER variable. The basic idea is simple:

      READ (F) STUFF

By opening the file as UNFORMATTED, the line separators (in ASCII, one of CR, CRLF, LFCR or LF) will not be acted upon and all is grist, all the way to the end of the file. But alas, there is no protocol for arranging that STUFF be the right size for the file, nor is there a standard means to ascertain just how long the file is, as by some aspect of an INQUIRE statement, and anyway there will likely be error reports from the I/O subsystem. In short, it just won't work.

The only way is to define some large data structure corresponding to the stuff in the file, and read the file line-by-line to the end. If simple text is expected, and no line exceeds ENUFF in length, and there are no more than MANY lines,
      INTEGER MANY,ENUFF	!Some sizes.
      PARAMETER (MANY = 12345,ENUFF = 666)	!Sufficient?
      CHARACTER*(ENUFF) STUFF(MANY)	!Lots of memory these days.
      INTEGER LS(MANY)	!Lengths of the lines as read.
      INTEGER F,N,L,I	!Assistants.
      F = 10	!Choose a unit number.
      OPEN (F,FILE="Whatever.txt",STATUS="OLD",ACTION="READ")	!Grab the file.
      N = 0	!No lines so far.
Chew through the file.
   10 READ (F,11,END = 20) L,STUFF(N + 1)(1:MIN(L,ENUFF))	!Cautious read.
   11 FORMAT (Q,A)	!The length of the record, then its text.
      N = N + 1	!Count it in.
      IF (L.GT.ENUFF) STOP "Record too long!"	!Not a very helpful message.
      IF (N.GT.MANY) STOP "Too many lines!"	!But it's better than crashing.
      LS(N) = MIN(L,ENUFF)	!A protected length.
      GO TO 10	!Try again.
   20 CLOSE (F)	!Finished.
      DO I = 1,N	!Proof of life.
        WRITE (6,21) STUFF(I)(1:LS(I))	!One line at a time.
   21   FORMAT (A)	!No character count attached.
      END DO	!On to the next line.
      END	!That was easy.

This will read lines of text (omitting whichever of CR, CRLF, etc. is in use) until the end of the file. Although "text" is spoken of, actually any bit pattern is grist for the input, except for the bit pattern corresponding to the record separator (the CR, etc.) which is a troublesome context violation if one is actually dealing with arbitrary bit patterns as with binary data from integer and floating-point variables. Should a component byte contain a CR (or whichever) pattern, there will be a line break!

Modern computers offer large memories, but also large files. If the STUFF can be processed without the necessity for all of the file's content to be on hand, this problem is eased.


Frink[edit]

The read[URL] function reads the entire contents of a URL. The encoding can be specified if necessary.

a = read["file:yourfile.txt"]
b = read["file:yourfile.txt", "UTF-8"]


GAP[edit]

s := ReadAll(f);; # two semicolons to hide the result, which may be long


Go[edit]

Go has good support for working with strings as UTF-8, but there is no requirement that strings be UTF-8 and in fact they can hold arbitrary data. ioutil.ReadFile returns the contents of the file unaltered as a byte slice. The conversion in the next line from byte slice to string also makes no changes to the data. In the example below sv will have an exact copy of the data in the file, without regard to encoding.

import "io/ioutil"
data, err := ioutil.ReadFile(filename)
sv := string(data)

Go also supports memory mapped files on OSes with a mmap syscall (e.g. Unix-like). The following prints the contents of "file". (The included "build constraint" prevents this from being compiled on architectures known to lack syscall.Mmap, another source file with the opposite build constraint could use ioutil.ReadFile as above).

// +build !windows,!plan9,!nacl // These lack syscall.Mmap

package main

import (
    "fmt"
    "os"
    "syscall"
)

func main() {
    f, err := os.Open("file")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer f.Close()
    fi, err := f.Stat()
    if err != nil {
        fmt.Println(err)
        return
    }
    data, err := syscall.Mmap(int(f.Fd()), 0, int(fi.Size()),
        syscall.PROT_READ, syscall.MAP_PRIVATE)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Print(string(data))
    syscall.Munmap(data)
}


Groovy[edit]

def fileContent = new File("c:\\file.txt").text




Haskell[edit]

In the IO monad:

do text <- readFile filepath
-- do stuff with text

Note that readFile is lazy. If you want to ensure the entire file is read in at once, before any other IO actions are run, try:

eagerReadFile :: FilePath -> IO String
eagerReadFile filepath = do
text <- readFile filepath
last text `seq` return text

Icon and Unicon[edit]

The first code snippet below reads from stdin directly into the string fs, preserving line separators (if any) and reading in large chunks.

every (fs := "") ||:= |reads(1000000)

The second code snippet below performs the same operation using an intermediate list fL and applying a function (e.g. FUNC) to each line. Use this form when you need to perform additional string functions such as 'trim' or 'map' on each line. This avoids unnecessary garbage collections which will occur with larger files. The list can be discarded when done. Line separators are mapped into newlines.

every put(fL := [],|FUNC(read()))
every (fs := "") ||:= !fL || "\n"
fL := &null

Inform 7[edit]

File access is sandboxed by the interpreter, so this solution essentially requires that the file have been previously written by an Inform program running from the same location under the same interpreter.

Home is a room.
The File of Testing is called "test".
When play begins:
say "[text of the File of Testing]";
end the story.


J[edit]

   require 'files'         NB. not needed for J7 & later
var=: freads 'foo.txt'

To memory map the file:

   require 'jmf'
JCHAR map_jmf_ 'var';'foo.txt'

Caution: updating the value of the memory mapped variable will update the file, and this characteristic remains when the variable's value is passed, unmodified, to a verb which modifies its own local variables.


Java[edit]

There is no single method to do this in Java 6 and below (probably because reading an entire file at once could quickly exhaust memory), so you can simply append the contents to a buffer as you read them.

import;
import;

public class ReadFile {
    public static void main(String[] args) throws IOException {
        String fileContents = readEntireFile("./foo.txt");
    }

    private static String readEntireFile(String filename) throws IOException {
        FileReader in = new FileReader(filename);
        StringBuilder contents = new StringBuilder();
        char[] buffer = new char[4096];
        int read = 0;
        do {
            contents.append(buffer, 0, read);
            read =;
        } while (read >= 0);
        in.close();
        return contents.toString();
    }
}

One can memory-map the file in Java, but there's little to gain if one is to create a String out of the file:

import;
import;
import;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel.MapMode;

public class MMapReadFile {
    public static void main(String[] args) throws IOException {
        MappedByteBuffer buff = getBufferFor(new File(args[0]));
        String results = new String(buff.asCharBuffer());
    }

    public static MappedByteBuffer getBufferFor(File f) throws IOException {
        RandomAccessFile file = new RandomAccessFile(f, "r");
        MappedByteBuffer buffer = file.getChannel().map(MapMode.READ_ONLY, 0, f.length());
        file.close();
        return buffer;
    }
}

or one can take a shortcut:

String content = new Scanner(new File("foo"), "UTF-8").useDelimiter("\\A").next();

This works because Scanner searches the input for the next occurrence of its delimiter and returns everything before it. \A anchors at the beginning of the input, which can never occur again after the first position, so next() returns the entire file.

Works with: Java version 7+

Java 7 added java.nio.file.Files which has two methods for accomplishing this task: Files.readAllLines and Files.readAllBytes:

import;
import java.util.List;
import java.nio.charset.Charset;
import java.nio.file.*;

public class ReadAll {
    public static List<String> readAllLines(String filename) throws IOException {
        Path file = Paths.get(filename);
        return Files.readAllLines(file, Charset.defaultCharset());
    }

    public static byte[] readAllBytes(String filename) throws IOException {
        Path file = Paths.get(filename);
        return Files.readAllBytes(file);
    }
}


JavaScript[edit]

This works in Internet Explorer or in a standalone .js file run under Windows Script Host. Note the similarity to the VBScript code.

var fso=new ActiveXObject("Scripting.FileSystemObject");
var f=fso.OpenTextFile("c:\\myfile.txt",1);
var s=f.ReadAll();

The following works in all browsers, including IE10.

var file = document.getElementById("fileInput").files.item(0); //a file input element
if (file) {
    var reader = new FileReader();
    reader.readAsText(file, "UTF-8");
    reader.onload = loadedFile;
    reader.onerror = errorHandler;
}

function loadedFile(event) {
    var fileString =;
    console.log(fileString);
}

function errorHandler(event) {
    console.error("error reading file: " +;
}


jq[edit]

The . filter will read in a file of raw text; e.g., if the file is named input.txt and we want to emit it as a single JSON string:

jq -R -s . input.txt

In practice, this is probably not very useful. It would be more typical to collect the raw lines into an array of JSON strings.

If it is known that the lines are delimited by a single "newline" character, then one could simply pipe from one jq command to another:

jq -R . input.txt | jq -s .

or do it in a single invocation:

jq -R -s 'split("\n")' input.txt

Other cases can be similarly handled.


Julia[edit]

The built-in function readall reads into a string (assuming UTF-8 encoding), or you can also read into an array of bytes:

readall("/devel/myfile.txt") # read file into a string
open(readbytes, "/devel/myfile.txt") # read file into an array of bytes

Alternatively, there are a variety of ways to memory-map the file, here as an array of bytes:

f = open("/devel/myfile.txt", "r")
A = mmap_array(Uint8, (filesize("/devel/myfile.txt"),), f)


Kotlin[edit]

import

fun readText() {
    val string = File("unixdict.txt").readText(charset = Charsets.UTF_8)
}


LabVIEW[edit]

This image is a VI Snippet, an executable image of LabVIEW code. The LabVIEW version is shown in the top-right corner. You can download it, then drag-and-drop it onto the LabVIEW block diagram from a file browser, and it will appear as runnable, editable code.
LabVIEW Read entire file.png


'foo.txt slurp


Lasso[edit]

By default, string objects, which are always Unicode, are created on the assumption that the file contains UTF-8 encoded data. This assumption can be changed by setting the file object's character encoding value. When reading the data as a bytes object, the unaltered file data is returned.

local(f) = file('foo.txt')


LFE[edit]

(set `#(ok ,data) (file:read_file "myfile.txt"))

Liberty BASIC[edit]

filedialog "Open a Text File","*.txt",file$
if file$<>"" then
open file$ for input as #1
entire$ = input$(#1, lof(#1))
close #1
print entire$
end if


LiveCode[edit]

LiveCode offers two ways:

Using URL

put URL "file:///usr/share/dict/words" into tVar
put the number of lines of tVar

Using file open + read + close

local tFile,tLinecount
put "/usr/share/dict/words" into tFile
open file tFile for text read
read from file tFile until EOF
put the number of lines of it -- file contents held in "it" variable
close file tFile


Lua[edit]

--If the file opens with no problems, will return a
--handle to the file with methods attached.
--If the file does not exist, will return nil and
--an error message.
--assert will return the handle to the file if present, or
--it will throw an error with the message returned second.
--Without wrapping in an assert, file would be nil,
--which would cause an 'attempt to index a nil value' error when
--calling file:read.
local file = assert("foo.txt"))

--file:read takes the number of bytes to read, or a string for
--special cases, such as "*a" to read the entire file.
local contents = file:read'*a'

--Close the handle explicitly; if the file handle were local to an
--expression (ie. "assert('a')"),
--the file would remain open until its handle was
--garbage collected.
file:close()


M4[edit]

An approximation to file reading can be had by include(), which reads a file as M4 input. If it's inside a define() then the input is captured as a definition. But this is extremely limited, since any macro names, parens, commas, quote characters etc. in the file will expand and upset the capture.



Make[edit]

Works with: GNU make
contents := $(shell cat foo.txt)

This is from the GNU Make manual. As noted there, newlines are converted to spaces in the $(contents) variable. This might be acceptable for files which are a list of words anyway.


Maple[edit]

First solution:

s1 := readbytes( "file1.txt", infinity, TEXT ):

Second solution:

s2 := FileTools:-Text:-ReadFile( "file2.txt" ):



MATLAB / Octave[edit]

  fid = fopen('filename','r');
[str,count] = fread(fid, [1,inf], 'uint8=>char'); % str will be a character array, count has the number of bytes
fclose(fid);


Mercury[edit]

:- module read_entire_file.
:- interface.

:- import_module io.

:- pred main(io::di, io::uo) is det.

:- implementation.

:- import_module string.

main(!IO) :-
    io.open_input("file.txt", OpenResult, !IO),
    (
        OpenResult = ok(File),
        io.read_file_as_string(File, ReadResult, !IO),
        (
            ReadResult = ok(FileContents),
            io.write_string(FileContents, !IO)
        ;
            ReadResult = error(_, IO_Error),
            io.stderr_stream(Stderr, !IO),
            io.write_string(Stderr, io.error_message(IO_Error) ++ "\n", !IO)
        )
    ;
        OpenResult = error(IO_Error),
        io.stderr_stream(Stderr, !IO),
        io.write_string(Stderr, io.error_message(IO_Error) ++ "\n", !IO)
    ).


NetRexx[edit]

/* NetRexx */
options replace format comments java crossref symbols nobinary

parse arg inFileName .
if inFileName = '' | inFileName = '.' then inFileName = './data/dwarfs.json'
fileContents = slurp(inFileName)
say fileContents
return

-- Slurp a file and return contents as a Rexx string
method slurp(inFileName) public static returns Rexx
  slurped = Rexx null
  do
    slurpStr = StringBuilder()
    ioBuffer = byte[1024]
    inBytes = int 0
    inFile = File(inFileName)
    inFileIS = BufferedInputStream(FileInputStream(inFile))
    loop label ioLoop until inBytes = -1
      slurpStr.append(String(ioBuffer, 0, inBytes))
      inBytes =
      end ioLoop
    inFileIS.close()
    slurped = Rexx(slurpStr.toString)
  catch exFNF = FileNotFoundException
    exFNF.printStackTrace
  catch exIO = IOException
    exIO.printStackTrace
  end
  return slurped


NewLISP[edit]

(read-file "filename")




Objeck[edit]

string := FileReader->ReadFile("in.txt");


Objective-C[edit]

/*** 0. PREPARATION */
// We need a text file to read; let's redirect a C string to a new file
// using the shell by way of the stdlib system() function.
system ("echo \"Hello, World!\" > ~/HelloRosetta");
/*** 1. THE TASK */
// Instantiate an NSString which describes the filesystem location of
// the file we will be reading.
NSString *filePath = [NSHomeDirectory() stringByAppendingPathComponent:@"HelloRosetta"];
// The selector we're going to use to complete this task,
// stringWithContentsOfFile:encoding:error, has an optional `error'
// parameter which can be used to return information about any
// errors it might run into. It's optional, but we'll create an NSError
// anyways to demonstrate best practice.
NSError *anError;
// And finally, the task: read and store the contents of a file as an
// NSString.
NSString *aString = [NSString stringWithContentsOfFile:filePath
                                              encoding:NSUTF8StringEncoding
                                                 error:&anError];
// If the file read was unsuccessful, display the error description.
// Otherwise, display the NSString.
if (!aString) {
    NSLog(@"%@", [anError localizedDescription]);
} else {
    NSLog(@"%@", aString);
}


OCaml[edit]

For most uses we can use this function:

let load_file f =
  let ic = open_in f in
  let n = in_channel_length ic in
  let s = String.create n in
  really_input ic s 0 n;
  close_in ic;
  (s)
There is no problem reading an entire file with really_input, because this function is implemented with an internal loop, but it can only load files whose size is at most the maximum length of an OCaml string. This maximum is available as Sys.max_string_length; on 32-bit machines it is about 16 MB.

To load bigger files several solutions exist, for example create a structure that contains several strings where the contents of the file can be split. Or another solution that is often used is to use a bigarray of chars instead of a string:

type big_string =
(char, Bigarray.int8_unsigned_elt, Bigarray.c_layout) Bigarray.Array1.t

The function below returns the contents of a file with this type big_string, and it does so with "memory-mapping":

let load_big_file filename =
  let fd = Unix.openfile filename [Unix.O_RDONLY] 0o640 in
  let len = Unix.lseek fd 0 Unix.SEEK_END in
  let _ = Unix.lseek fd 0 Unix.SEEK_SET in
  let shared = false in  (* modifications are done in memory only *)
  let bstr = Bigarray.Array1.map_file fd
               Bigarray.char Bigarray.c_layout shared len in
  Unix.close fd;
  (bstr)

Then the length of the data can be obtained with Bigarray.Array1.dim instead of String.length, and a given char can be accessed with the syntactic sugar bstr.{i} (instead of str.[i]), as shown in the small piece of code below (similar to the cat command):

let () =
  let bstr = load_big_file Sys.argv.(1) in
  let len = Bigarray.Array1.dim bstr in
  for i = 0 to pred len do
    let c = bstr.{i} in
    print_char c
  done


ooRexx[edit]

version 1[edit]

file = 'c:\test.txt'
myStream = .stream~new(file)
myString = myStream~charIn(,myStream~chars)

Streams are opened on demand and closed when the script finishes. It is possible if you wish to open and close the streams explicitly

file = 'c:\test.txt'
myStream = .stream~new(file)
if mystream~open('read') = 'READY:'
  then do
    myString = myStream~charIn(,myStream~chars)
    myStream~close
    end

version 2 EXECIO[edit]

One can also use EXECIO as it is known from VM/CMS and MVS/TSO:

address hostemu 'execio * diskr "./" (finis stem in.'
Say in.0 'lines in file'
Do i=1 To in.0
  Say i '>'in.i'<'
  End
say 'v='v
::requires "hostemu" LIBRARY
E:\>rexx ref
6 lines in file
1 >address hostemu 'execio * diskr "./" (finis stem in.'<
2 >Say in.0<
3 >Do i=1 To in.0<
4 >  Say i '>'in.i'<'<
5 >  End<
6 >::requires "hostemu" LIBRARY<
v=address hostemu 'execio * diskr "./" (finis stem in.'Say in.0Do i=1 To in
.0  Say i '>'in.i'<'  End::requires "hostemu" LIBRARY


Two Formats:

string s


s=GetFile "t.txt"


Getfile "t.txt",s


Oz[edit]

The interface for file operations is object-oriented.

FileHandle = {New Open.file init(name:"test.txt")}
FileContents = {FileHandle read(size:all list:$)}
{FileHandle close}
{System.printInfo FileContents}

FileContents is a list of bytes. The operation does not assume any particular encoding.


PARI/GP[edit]

The GP interpreter's ability to read files is extremely limited; reading an entire file is almost all that it can do. The C-code PARI library is not similarly limited.

readstr() returns a vector of strings which are the file lines, without newlines. They can be concatenated to make a single string.

str = concat(apply(s->concat(s,"\n"), readstr("file.txt")))

Since readstr() returns strings without newlines there's no way to tell whether the last line had a newline or not. This is fine for its intended use on text files, but not good for reading binary files.
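The ambiguity can be illustrated in a few lines of Python (used here only as neutral pseudocode for what the GP reconstruction above does):

```python
# Two inputs differing only in a trailing newline yield the same line
# list once newlines are stripped, so rejoining cannot distinguish
# them -- the situation described for readstr() above.
with_nl = "line 1\nline 2\n"
without_nl = "line 1\nline 2"

def rejoin(lines):
    # mimics concat(apply(s->concat(s,"\n"), ...)) from the GP snippet
    return "".join(line + "\n" for line in lines)

assert with_nl.splitlines() == without_nl.splitlines()
assert rejoin(without_nl.splitlines()) == with_nl
```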


It returns a unicode string of type 'text'.

file:readme.txt .text


Pascal[edit]

See the TStringList example under Delphi.


Perl[edit]

my $text = do { local( @ARGV, $/ ) = ( $filename ); <> };


open my $fh, $filename;
my $text; read $fh, $text, -s $filename;
close $fh;


use File::Slurp;
my $text = read_file($filename);


use Perl6::Slurp qw(slurp);
my $text = slurp($filename);

or the IO::All module provides several ways:

use IO::All;
$text = io($filename)->all;
$text = io($filename)->utf8->all;
@text = io($filename)->slurp;
$text < io($filename);
io($filename) > $text;

For a one-liner from shell, use the -0 switch. It normally specifies the octal char code of the record separator ($/), so for example perl -n -040 would read chunks of text ending at each space ($/ = ' '). However, -0777 has a special meaning: $/ = undef, so the whole file is read in at once (chr 0777 happens to be "ǿ", but Larry doesn't think one should use that as a record separator).

perl -n -0777 -e 'print "file len: ".length' stuff.txt


use File::Map 'map_file';
map_file(my $str, "foo.txt");
print $str;
use Sys::Mmap;
Sys::Mmap->new(my $str, 0, 'foo.txt')
or die "Cannot Sys::Mmap->new: $!";
print $str;

File::Map has the advantage of not requiring an explicit munmap(). Its tie is faster than the tie form of Sys::Mmap too.

Perl 6[edit]

Works with: Rakudo version 2010.07
my $string = slurp 'sample.txt';


Phix[edit]

constant fn = open(command_line()[2],"rb")
close(fn)
{} = wait_key()

Output, when the program is passed its own source file:

"constant fn = open(command_line()[2],\"rb\")\r\n?get_text(fn)\r\nclose(fn)\r\n{} = wait_key()\r\n"

The value returned by get_text is actually a string containing raw binary data (no \r\n -> \n substitution, even if the file is opened in text mode) and is not limited to text files.
There is no builtin method for handling different encodings, but demo\edita handles all such files with ease, including the nifty little encoding drop-down on the open/close dialog.




Using 'till' is the shortest way:

(in "file" (till NIL T))

To read the file into a list of characters:

(in "file" (till NIL))

or, more explicit:

(in "file" (make (while (char) (link @))))

Encoding is always assumed to be UTF-8.


string content=Stdio.File("foo.txt")->read();


get file (in) edit ((substr(s, i, 1) do i = 1 to 32767)) (a);


Get-Content foo.txt

This will only detect Unicode correctly with a BOM in place (even for UTF-8). With explicit selection of encoding:

Get-Content foo.txt -Encoding UTF8

However, both return an array of strings which is fine for pipeline use but if a single string is desired the array needs to be joined:

(Get-Content foo.txt) -join "`n"


A file can be read with any of the built in commands

Number.b = ReadByte(#File)
Length.i = ReadData(#File, *MemoryBuffer, LengthToRead)
Number.c = ReadCharacter(#File)
Number.d = ReadDouble(#File)
Number.f = ReadFloat(#File)
Number.i = ReadInteger(#File)
Number.l = ReadLong(#File)
Number.q = ReadQuad(#File)
Text$ = ReadString(#File [, Flags])
Number.w = ReadWord(#File)

If the file is a pure text file (no CR/LF etc.), this will work and will read each line until EOL is found.

If ReadFile(0, "RC.txt")       

Since PureBasic terminates strings with a #NULL, and ReadString() also splits the input when it encounters new-line characters, any file containing these must be treated as a data stream.

Title$   = "Select a file"
Pattern$ = "Text (.txt)|*.txt|All files (*.*)|*.*"
fileName$ = OpenFileRequester(Title$,"",Pattern$,0)
If fileName$
  If ReadFile(0, fileName$)
    length = Lof(0)
    *MemoryID = AllocateMemory(length)
    If *MemoryID
      bytes = ReadData(0, *MemoryID, length)
      MessageRequester("Info", Str(bytes)+" bytes were read")
    EndIf
    CloseFile(0)
  EndIf
EndIf



This returns a byte string and does not assume any particular encoding.

In Python 3 strings are in unicode, you can specify encoding when reading:

open(filename, encoding='utf-8').read()

Python docs recommend dealing with files using the with statement:

with open(filename) as f:
    data = f.read()
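The with-statement idiom also combines naturally with the task's two discussion points, encoding selection and memory-mapping, via the standard mmap module. A minimal sketch (the file name sample.txt is illustrative; the sample file is written first so the snippet is self-contained):

```python
import mmap

# Hypothetical file name; write a small sample file so the example runs.
with open("sample.txt", "w", encoding="utf-8") as f:
    f.write("hello\nworld\n")

# Explicit encoding: bytes are decoded as UTF-8 while reading.
with open("sample.txt", encoding="utf-8") as f:
    text = f.read()

# Memory-mapping: the OS pages the file in on demand instead of copying
# it up front; the map exposes raw bytes, so decoding is done manually.
with open("sample.txt", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        mapped_text = mm[:].decode("utf-8")

print(text == mapped_text)  # → True
```

Memory-mapping pays off mainly for large files accessed piecemeal; for a small file read once, a plain read() is simpler and just as fast.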


"First line of file"
"Second line of file"


fname <- "notes.txt"
contents <- readChar(fname, file.info(fname)$size)


(file->string "foo.txt")


'myfile.txt' read as $content_as_string


'file://r:/home/me/myfile.txt' open as $handle
$handle read as $content_as_string
$handle close


This function accepts a file (FolderItem object) and an optional TextEncoding class. If the TextEncoding is not defined, then REALbasic defaults to UTF-8. Since it is intended for cross-platform development, REALbasic has a number of built-in tools for working with different text encodings, line terminators, etc.

Function readFile(theFile As FolderItem, txtEncode As TextEncoding = Nil) As String
Dim fileContents As String
Dim tis As TextInputStream
tis = TextInputStream.Open(theFile)
fileContents = tis.ReadAll(txtEncode)
Return fileContents
Exception err As NilObjectException
MsgBox("File Not Found.")
End Function


read %my-file  ; read as text
read/binary %my-file ; preserve contents exactly


with files'
here "input.txt" slurp


using LINEIN

/*REXX program reads a file and stores it as a continuous character str.*/
iFID = 'a_file' /*name of the input file. */
aString = ''                          /*value of file's contents so far*/
/* [↓] read file line-by-line. */
do while lines(iFID) \== 0 /*read file's lines 'til finished*/
aString = aString || linein(iFID) /*append a (file) line to aString*/
end /*while*/
/*stick a fork in it, we're done.*/

using CHARIN

Note that CR/LF line terminators remain in the resulting string.

/*REXX program reads a file and stores it as a continuous character str.*/
Parse Version v
iFID = '' /*name of the input file. */
If left(v,11)='REXX-Regina' |,
   left(v,11)='REXX-ooRexx' Then Do
  len=chars(iFid)                     /*size of the file               */
  v=charin(iFid,,len)                 /*read entire file               */
  End
Else Do                               /* for other Rexx interpreters  */
  v=''
  Do while chars(iFid)>0              /* read the file chunk by chunk */
    v=v || charin(iFid,,1000)
    End
  End
say 'v='v
say 'length(v)='length(v)
Output:

E:\>rexx refc
v=line 1 of 3
line 2 of 3
line 3 of 3



# Read the file
cStr = read("myfile.txt")
# print the file content
See cStr

Also in one line we can read and print the file content.

cStr = read("myfile.txt") See cStr

We could skip the string variable, but the task requires one.

See read("myfile.txt")

Ruby

IO.read is for text files. It uses the default text encodings, and on Microsoft Windows, it also converts "\r\n" to "\n".

# Read entire text file.
str = IO.read("foobar.txt")
# It can also read a subprocess.
str = IO.read("| grep ftp /etc/services")

Caution! IO.read and IO.readlines take a portname. To open an arbitrary path (which might start with "|"), you must use File.open, then IO#read.

path = "|strange-name.txt"
str = File.open(path) {|f| f.read}

To read a binary file, open it in binary mode.

# Read entire binary file.
str = File.open("foobar.txt", "rb") {|f| f.read}

Ruby 1.9 can read text files in different encodings.

Works with: Ruby version 1.9
# Read EUC-JP text from file.
str = File.open("foobar.txt", "r:euc-jp") {|f| f.read}
# Read EUC-JP text from file; transcode text from EUC-JP to UTF-8.
str = File.open("foobar.txt", "r:euc-jp:utf-8") {|f| f.read}

Run BASIC

open DefaultDir$ + "/public/test.txt" for binary as #f
fileLen = LOF(#f)
a$ = input$(#f, fileLen)
print a$
close #f


use std::fs::File;
use std::io::Read;
fn main() {
let mut file = File::open("somefile.txt").unwrap();
let mut contents: Vec<u8> = Vec::new();
// Returns amount of bytes read and append the result to the buffer
let result = file.read_to_end(&mut contents).unwrap();
println!("Read {} bytes", result);
// To print the contents of the file
let filestr = String::from_utf8(contents).unwrap();
println!("{}", filestr);
}


Library: Scala
object TextFileSlurper extends App {
  val fileLines =
    try scala.io.Source.fromFile("my_file.txt", "UTF-8").mkString catch {
      case e: java.io.FileNotFoundException => e.getLocalizedMessage()
    }
}


Uses SRFI-13:

(with-input-from-file "foo.txt"
  (lambda ()
    (let loop ((char (read-char))
               (result '()))
      (if (eof-object? char)
          (list->string (reverse result))
          (loop (read-char) (cons char result))))))

Works with Chicken Scheme:

(with-input-from-file "foo.txt" read-string)


The library getf.s7i defines the function getf, which reads a whole file into a string:

$ include "seed7_05.s7i";
include "getf.s7i";
const proc: main is func
  local
    var string: fileContent is "";
  begin
    fileContent := getf("text.txt");
  end func;


Reading an entire file as a string can be achieved with the FileHandle.slurp() method, as illustrated below:

var file = File("foo.txt");
var content = file.open_r.slurp;
print content;


Works with: Pharo
(StandardFileStream oldFileNamed: 'foo.txt') contents
Works with: Smalltalk/X
'foo.txt' asFilename contentsAsString


In SNOBOL4, file I/O is done by associating a variable with the desired file, via the input() built-in function. After the association, each reference to the named variable provides as the variable's value the next block or line of data from the corresponding file. The exact format of the input() function parameters tends to vary based on the implementation in use. In this example, the code reads the file in blocks of 512k bytes (or less) until the entire file has been read into one long string in memory.

      input(.inbin,21,"filename.txt [-r524288]")     :f(end)
rdlp buf = inbin  :s(rdlp)
* now process the 'buf' containing the file


let contents = readfile("foo.txt");


import Foundation
let path = "~/input.txt".stringByExpandingTildeInPath
if let string = String(contentsOfFile: path, encoding: NSUTF8StringEncoding) {
println(string) // print contents of file
}


This reads the data in as text, applying the default encoding translations.

set f [open $filename]
set data [read $f]
close $f

To read the data in as uninterpreted bytes, either use fconfigure to put the handle into binary mode before reading, or (from Tcl 8.5 onwards) do this:

set f [open $filename "rb"]
set data [read $f]
close $f


ERROR/STOP OPEN ("rosetta.txt",READ,-std-)
var=FILE ("rosetta.txt")


@(next "foo.txt")
@(freeform)
@wholefile

The freeform directive in TXR causes the remaining lines of the text stream to be treated as one big line, catenated together. The default line terminator is the newline "\n". This lets the entire input be captured into a single variable as a whole-line match.

UNIX Shell

We start a 'cat' process to read the entire file, and use '$(...)' to grab the output of 'cat'. We use 'printf' which might be more portable than 'echo'. Because '$(...)' can chop off a newline at the end of the file, we tell 'printf' to add an extra newline.

f=`cat foo.txt`    # f will contain the entire contents of the file
printf '%s\n' "$f"
f=$(cat foo.txt)
printf '%s\n' "$f"

Some shells provide a shortcut to read a file without starting a 'cat' process.

Works with: bash
Works with: pdksh
f=$(<foo.txt)
echo -E "$f"
Works with: zsh
file=$(<foo.txt)
print $file


zmodload zsh/mapfile
print $mapfile[foo.txt]


string file_contents;
FileUtils.get_contents("foo.txt", out file_contents);


Read text file with default encoding into variable and display

dim s
s = createobject("scripting.filesystemobject").opentextfile("slurp.vbs",1).readall
wscript.echo s

Read text file with UTF-16 encoding into memory and display

wscript.echo createobject("scripting.filesystemobject").opentextfile("utf16encoded.txt",1,-1).readall

Vedit macro language

In Vedit Macro Language, a "string variable" can be either an edit buffer or a text register.
Text registers can hold only a limited amount of data (about 120 KB each in current version).
Edit buffers can handle files of unlimited size (even larger than the size of virtual memory). For large files, only a part of the file is kept in memory, but from users point of view there is no practical difference to having the whole file in memory.

Read file into edit buffer. The buffer is allocated automatically:

File_Open("example.txt")
Read file into text register 10:

Reg_Load(10, "example.txt")

Visual Basic .NET

Imports System.IO
Public Class Form1
' Read all of the lines of a file.
' Function assumes that the file exists.
Private Sub ReadLines(ByVal FileName As String)
    Dim oReader As New StreamReader(FileName)
    Dim sLine As String = oReader.ReadToEnd()
    oReader.Close()
End Sub
End Class


with infile "x"
with outstring
whilet line (read_line)
prn line


This example reads its own source code file and displays it as a string. The command line is: readfile <readfile.xpl

include c:\cxpl\codes;  \intrinsic 'code' declarations
string 0;               \use zero-terminated string convention
int  I;
char Str;
[Str:= GetHp;           \starting address of block of local "heap" memory
I:= 0;                  \ [does the exact same thing as Reserve(0)]
loop    [Str(I):= ChIn(1);
        if Str(I) = $1A \EOF\ then [Str(I):= 0;  quit];
        I:= I+1;
        ];
SetHp(Str+I+1);         \set heap pointer beyond Str (not really needed here)
Text(0, Str);           \show file as a string
]


This loads foo.txt into lines as an array of strings. Each array element is one line. Each line's trailing newline is removed.

lines = rdfile("foo.txt");

This loads foo.txt into content as a single scalar string, without losing newlines.

f = open("foo.txt", "rb");
raw = array(char, sizeof(f));
_read, f, 0, raw;
close, f;
content = strchar(raw);


data := File("foo.txt","r").read()

The file parameters are the same as C's fopen().