Anagrams: Difference between revisions

When two or more words are composed of the same characters, but in a different order, they are called [[wp:Anagram|anagrams]].
 
{{task heading}}
Using the word list at   http://wiki.puzzlers.org/pub/wordlists/unixdict.txt,
<br>find the sets of words that share the same characters and contain the most words in them.
 
{{task heading|Related tasks}}
 
{{Related tasks/Word plays}}
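Most of the solutions below follow the same scheme: map each word to a canonical key (its letters in sorted order), collect the words sharing a key, and report the largest groups. As a minimal language-neutral sketch (not part of any language section below, and run on a small hard-coded list rather than unixdict.txt), the idea looks like this in Python:

```python
from collections import defaultdict

def largest_anagram_sets(words):
    # Group the words by a canonical key: their letters in sorted order.
    groups = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)
    # Keep only the groups with the most members.
    best = max(len(g) for g in groups.values())
    return [g for g in groups.values() if len(g) == best]

# Tiny stand-in for unixdict.txt:
sample = ["abel", "able", "bale", "bela", "elba", "evil", "levi", "vile", "cat"]
print(largest_anagram_sets(sample))  # → [['abel', 'able', 'bale', 'bela', 'elba']]
```

Against the real unixdict.txt the same grouping yields the six five-word sets shown in the outputs below.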
 
=={{header|11l}}==
{{trans|Python}}
<syntaxhighlight lang="11l">DefaultDict[String, Array[String]] anagram
L(word) File(‘unixdict.txt’).read().split("\n")
anagram[sorted(word).join(‘’)].append(word)
 
V count = max(anagram.values().map(ana -> ana.len))
 
=={{header|8th}}==
<syntaxhighlight lang="8th">
\
\ anagrams.8th
=={{header|AArch64 Assembly}}==
{{works with|as|Raspberry Pi 3B version Buster 64 bits <br> or android 64 bits with application Termux }}
<syntaxhighlight lang="aarch64 assembly">
/* ARM assembly AARCH64 Raspberry PI 3B */
/* program anagram64.s */
</pre>
=={{header|ABAP}}==
<syntaxhighlight lang="abap">report zz_anagrams no standard page heading.
define update_progress.
call function 'SAPGUI_PROGRESS_INDICATOR'
 
=={{header|Ada}}==
<syntaxhighlight lang="ada">with Ada.Text_IO; use Ada.Text_IO;
 
with Ada.Containers.Indefinite_Ordered_Maps;
=={{header|ALGOL 68}}==
{{works with|ALGOL 68G|Any - tested with release 2.8.3.win32}} Uses the "read" PRAGMA of Algol 68 G to include the associative array code from the [[Associative_array/Iteration]] task.
<syntaxhighlight lang="algol68"># find longest list(s) of words that are anagrams in a list of words #
# use the associative array in the Associate array/iteration task #
PR read "aArray.a68" PR
alger|glare|lager|large|regal
caret|carte|cater|crate|trace
</pre>
 
=={{header|Amazing Hopper}}==
<syntaxhighlight lang="c">
#include <basico.h>
 
#define MAX_LINE 30
 
algoritmo
fd=0, filas=0
word={}, 2da columna={}
old_word="",new_word=""
dimensionar (1,2) matriz de cadenas 'result'
pos=0
token.separador'""'
 
abrir para leer("basica/unixdict.txt",fd)
 
iterar mientras ' no es fin de archivo (fd) '
usando 'MAX_LINE', leer línea desde(fd),
---copiar en 'old_word'---, separar para 'word '
word, ---retener--- ordenar esto,
encadenar en 'new_word'
 
matriz.buscar en tabla (1,new_word,result)
copiar en 'pos'
si ' es negativo? '
new_word,old_word, pegar fila en 'result'
sino
#( result[pos,2] = cat(result[pos,2],cat(",",old_word) ) )
fin si
 
reiterar
 
cerrar archivo(fd)
guardar 'filas de (result)' en 'filas'
#( 2da columna = result[2:filas, 2] )
fijar separador '","'
tomar '2da columna'
contar tokens en '2da columna' ---retener resultado,
obtener máximo valor,es mayor o igual?, replicar esto
compactar esto
 
fijar separador 'NL', luego imprime todo
terminar
</syntaxhighlight>
{{out}}
<pre>
abel,able,bale,bela,elba
alger,glare,lager,large,regal
angel,angle,galen,glean,lange
caret,carte,cater,crate,trace
elan,lane,lean,lena,neal
evil,levi,live,veil,vile
</pre>
 
This is a rough translation of the J version, intermediate values are kept and verb trains are not used for clarity of data flow.
 
<syntaxhighlight lang="apl">
anagrams←{
tie←⍵ ⎕NTIE 0
 
'''Example:'''
<syntaxhighlight lang="apl">
⎕SH'wget http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'
]display anagrams 'unixdict.txt'
 
=={{header|AppleScript}}==
<syntaxhighlight lang="applescript">use AppleScript version "2.3.1" -- OS X 10.9 (Mavericks) or later.
 
use sorter : script ¬
<syntaxhighlight lang=applescript>use AppleScript version "2.3.1" -- OS X 10.9 (Mavericks) or later — for these 'use' commands!
"Custom Iterative Ternary Merge Sort" -- <www.macscripter.net/t/timsort-and-nigsort/71383/3>
-- Uses the customisable AppleScript-coded sort shown at <https://macscripter.net/viewtopic.php?pid=194430#p194430>.
-- It's assumed scripters will know how and where to install it as a library.
use sorter : script "Custom Iterative Ternary Merge Sort"
use scripting additions
 
on join(lst, delim)
set astid to AppleScript's text item delimiters
set AppleScript's text item delimiters to delim
set txt to lst as text
set AppleScript's text item delimiters to astid
return txt
end join
 
on largestAnagramGroups(listOfWords)
script o
property wordList : listOfWords
property groupingTexts : wordList's items
property largestGroupSize : 0
property largestGroupRanges : {}
on judgeGroup(i, j)
set groupSize to j - i + 1
if (groupSize < largestGroupSize) then -- Most likely.
else if (groupSize = largestGroupSize) then -- Next most likely.
set end of largestGroupRanges to {i, j}
else -- Largest group so far.
set largestGroupRanges to {{i, j}}
set largestGroupSize to groupSize
end if
end judgeGroup
on isGreater(a, b)
return a's beginning > b's beginning
end isGreater
end script
set wordCount to (count o's wordList)
ignoring case
-- Replace the words in the groupingTexts list with sorted-character versions.
repeat with i from 1 to wordCount
set chrs to o's groupingTexts's item i's characters
tell sorter to sort(chrs, 1, -1, {})
set o's groupingTexts's item i to join(chrs, "")
end repeat
-- Sort the list to group its contents and echo the moves in the original word list.
tell sorter to sort(o's groupingTexts, 1, wordCount, {slave:{o's wordList}})
-- Find the list range(s) of the longest run(s) of equal grouping texts in the list.
set i to 1
set currentText to beginning of o's groupingTexts
repeat with j from 2 to wordCount
set thisText to o's groupingTexts's item j
if (thisText is not currentText) then
tell o to judgeGroup(i, j - 1)
set currentText to thisText
set i to j
end if
end repeat
if (j > i) then tell o to judgeGroup(i, j)
-- Extract the group(s) of words occupying the same range(s) in the original word list.
set output to {}
repeat with thisRange in o's largestGroupRanges
set {i, j} to thisRange
-- Add this group to the output.
set thisGroup to o's wordList's items i thru j
tell sorter to sort(thisGroup, 1, -1, {}) -- Not necessary with unixdict.txt. But hey.
set end of output to thisGroup
end repeat
-- As a final flourish, sort the list of groups on their first items.
tell sorter to sort(output, 1, -1, {comparer:o})
end ignoring
return output
end largestAnagramGroups
 
-- The closing values of AppleScript 'run handler' variables not explicitly declared local are
-- saved back to the script file afterwards — and "unixdict.txt" contains 25,104 words!
local wordFile, wordList
set wordFile to ((path to desktop as text) & "www.rosettacode.org:unixdict.txt") as «class furl»
-- The words in "unixdict.txt" are arranged one per line in alphabetical order.
-- Some contain punctuation characters, so they're best extracted as 'paragraphs' rather than as 'words'.
set wordFile to ((path to desktop as text) & "unixdict.txt") as «class furl»
set wordList to paragraphs of (read wordFile as «class utf8»)
return largestAnagramGroups(wordList)</syntaxhighlight>
 
{{output}}
<syntaxhighlight lang="applescript">{{"abel", "able", "bale", "bela", "elba"}, {"alger", "glare", "lager", "large", "regal"}, {"angel", "angle", "galen", "glean", "lange"}, {"caret", "carte", "cater", "crate", "trace"}, {"elan", "lane", "lean", "lena", "neal"}, {"evil", "levi", "live", "veil", "vile"}}</syntaxhighlight>
 
=={{header|ARM Assembly}}==
{{works with|as|Raspberry Pi <br> or android 32 bits with application Termux}}
<syntaxhighlight lang="arm assembly">
/* ARM assembly Raspberry PI */
/* program anagram.s */
=={{header|Arturo}}==
 
<syntaxhighlight lang="rebol">wordset: map read.lines relative "unixdict.txt" => strip
 
anagrams: #[]
=={{header|AutoHotkey}}==
Following code should work for AHK 1.0.* and 1.1* versions:
<syntaxhighlight lang="autohotkey">FileRead, Contents, unixdict.txt
Loop, Parse, Contents, % "`n", % "`r"
{ ; parsing each line of the file we just read
 
=={{header|AWK}}==
<syntaxhighlight lang="awk"># JUMBLEA.AWK - words with the most duplicate spellings
# syntax: GAWK -f JUMBLEA.AWK UNIXDICT.TXT
{ for (i=1; i<=NF; i++) {
Alternatively, non-POSIX version:
{{works with|gawk}}
<syntaxhighlight lang="awk">#!/bin/gawk -f
 
{ patsplit($0, chars, ".")
Line 1,491 ⟶ 1,550:
}</syntaxhighlight>
 
=={{header|BASIC}}==
==={{header|BaCon}}===
<syntaxhighlight lang="freebasic">OPTION COLLAPSE TRUE
 
DECLARE idx$ ASSOC STRING
</pre>
 
==={{header|BBC BASIC}}===
{{works with|BBC BASIC for Windows}}
<syntaxhighlight lang="bbcbasic"> INSTALL @lib$+"SORTLIB"
sort% = FN_sortinit(0,0)
=={{header|BQN}}==
 
<syntaxhighlight lang="bqn">words ← •FLines "unixdict.txt"
•Show¨{𝕩/˜(⊢=⌈´)≠¨𝕩} (⊐∧¨)⊸⊔ words</syntaxhighlight>
<syntaxhighlight lang="bqn">⟨ "abel" "able" "bale" "bela" "elba" ⟩
⟨ "alger" "glare" "lager" "large" "regal" ⟩
⟨ "angel" "angle" "galen" "glean" "lange" ⟩
This solution makes extensive use of Bracmat's computer algebra mechanisms. A trick is needed to handle words that are merely repetitions of a single letter, such as <code>iii</code>. That's why the variable <code>sum</code> isn't initialised with <code>0</code>, but with a non-number, in this case the empty string. Also, the correct handling of the characters 0-9 needs a trick so that they are not numerically added: they are prepended with a non-digit, an <code>N</code> in this case. After completely traversing the word list, the program writes a file <code>product.txt</code> that can be visually inspected.
The program is not fast. (Minutes rather than seconds.)
<syntaxhighlight lang="bracmat">( get$("unixdict.txt",STR):?list
& 1:?product
& whl
 
=={{header|C}}==
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <string.h>
</pre>
A much shorter version with no fancy data structures:
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <string.h>
 
=={{header|C sharp|C#}}==
<syntaxhighlight lang="csharp">using System;
using System.IO;
using System.Linq;
 
=={{header|C++}}==
<syntaxhighlight lang="cpp">#include <iostream>
#include <fstream>
#include <string>
=={{header|Clojure}}==
Assume ''wordfile'' is the path of the local file containing the words. This code makes a map (''groups'') whose keys are sorted letters and values are lists of the key's anagrams. It then determines the length of the longest list, and prints out all the lists of that length.
<syntaxhighlight lang="clojure">(require '[clojure.java.io :as io])
 
(def groups
(println wordlist))</syntaxhighlight>
 
<syntaxhighlight lang="clojure">
(->> (slurp "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
clojure.string/split-lines
 
=={{header|CLU}}==
<syntaxhighlight lang="clu">% Keep a list of anagrams
anagrams = cluster is new, add, largest_size, sets
anagram_set = struct[letters: string, words: array[string]]
Tested with GnuCOBOL 2.0. ALLWORDS output display trimmed for width.
 
<syntaxhighlight lang="cobol"> *> TECTONICS
*> wget http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
*> or visit https://sourceforge.net/projects/souptonuts/files
 
=={{header|CoffeeScript}}==
<syntaxhighlight lang="coffeescript">http = require 'http'
 
show_large_anagram_sets = (word_lst) ->
get_word_list show_large_anagram_sets</syntaxhighlight>
{{out}}
<syntaxhighlight lang="coffeescript">> coffee anagrams.coffee
[ 'abel', 'able', 'bale', 'bela', 'elba' ]
[ 'alger', 'glare', 'lager', 'large', 'regal' ]
=={{header|Common Lisp}}==
{{libheader|DRAKMA}} to retrieve the wordlist.
<syntaxhighlight lang="lisp">(defun anagrams (&optional (url "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"))
(let ((words (drakma:http-request url :want-stream t))
(wordsets (make-hash-table :test 'equalp)))
finally (return (values maxwordsets maxcount)))))</syntaxhighlight>
Evalutating
<syntaxhighlight lang="lisp">(multiple-value-bind (wordsets count) (anagrams)
(pprint wordsets)
(print count))</syntaxhighlight>
5</pre>
Another method, assuming file is local:
<syntaxhighlight lang="lisp">(defun read-words (file)
(with-open-file (stream file)
(loop with w = "" while w collect (setf w (read-line stream nil)))))
=={{header|Component Pascal}}==
BlackBox Component Builder
<syntaxhighlight lang="oberon2">
MODULE BbtAnagrams;
IMPORT StdLog,Files,Strings,Args;
=={{header|Crystal}}==
{{trans|Ruby}}
<syntaxhighlight lang="ruby">require "http/client"
 
response = HTTP::Client.get("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
=={{header|D}}==
===Short Functional Version===
<syntaxhighlight lang="d">import std.stdio, std.algorithm, std.string, std.exception, std.file;
 
void main() {
===Faster Version===
Less safe, same output.
<syntaxhighlight lang="d">void main() {
import std.stdio, std.algorithm, std.file, std.string;
 
{{libheader| System.Classes}}
{{libheader| System.Diagnostics}}
<syntaxhighlight lang="delphi">
program AnagramsTest;
 
 
=={{header|E}}==
<syntaxhighlight lang="e">println("Downloading...")
when (def wordText := <http://wiki.puzzlers.org/pub/wordlists/unixdict.txt> <- getText()) -> {
def words := wordText.split("\n")
=={{header|EchoLisp}}==
For a change, we will use the french dictionary - '''(lib 'dico.fr)''' - delivered within EchoLisp.
<syntaxhighlight lang="scheme">
(require 'struct)
(require 'hash)
</syntaxhighlight>
{{out}}
<syntaxhighlight lang="scheme">
(length mots-français)
→ 209315
 
=={{header|Eiffel}}==
<syntaxhighlight lang="eiffel">
class
ANAGRAMS
=={{header|Ela}}==
{{trans|Haskell}}
<syntaxhighlight lang="ela">open monad io list string
 
groupon f x y = f x == f y
 
=={{header|Elena}}==
ELENA 6.0x:
<syntaxhighlight lang="elena">import system'routines;
import system'calendar;
import system'io;
import extensions'routines;
import extensions'text;
import algorithms;
 
extension op
auto dictionary := new Map<string,object>();
 
File.assign("unixdict.txt").forEachLine::(word)
{
var key := word.normalized();
};
item.append(word)
};
 
dictionary.Values
.quickSort::(former,later => former.Item2.Length > later.Item2.Length )
.top(20)
.forEach::(pair){ console.printLine(pair.Item2) };
var end := now;
{{out}}
<pre>
abel,able,bale,bela,elba
alger,glare,lager,large,regal
evil,levi,live,veil,vile
elan,lane,lean,lena,neal
caret,carte,cater,crate,trace
angel,angle,galen,glean,lange
dare,dear,erda,read
ames,mesa,same,seam
emit,item,mite,time
amen,mane,mean,name
enol,leon,lone,noel
esprit,priest,sprite,stripe
hare,hear,hera,rhea
apt,pat,pta,tap
aden,dane,dean,edna
aires,aries,arise,raise
keats,skate,stake,steak
are,ear,era,rae
lament,mantel,mantle,mental
beard,bread,debar,debra
lascar,rascal,sacral,scalar
cereus,recuse,rescue,secure
latus,sault,talus,tulsa
diet,edit,tide,tied
leap,pale,peal,plea
resin,rinse,risen,siren
</pre>
 
=={{header|Elixir}}==
<syntaxhighlight lang="elixir">defmodule Anagrams do
def find(file) do
File.read!(file)
 
The same output, using <code>File.Stream!</code> to generate <code>tuples</code> containing the word and its sorted value as <code>strings</code>.
<syntaxhighlight lang="elixir">File.stream!("unixdict.txt")
|> Stream.map(&String.strip &1)
|> Enum.group_by(&String.codepoints(&1) |> Enum.sort)
=={{header|Erlang}}==
The function fetch/2 is used to solve [[Anagrams/Deranged_anagrams]]. Please keep backwards compatibility when editing. Or update the other module, too.
<syntaxhighlight lang="erlang">-module(anagrams).
-compile(export_all).
 
 
=={{header|Euphoria}}==
<syntaxhighlight lang="euphoria">include sort.e
 
function compare_keys(sequence a, sequence b)
=={{header|F Sharp|F#}}==
Read the lines in the dictionary, group by the sorted letters in each word, find the length of the longest sets of anagrams, extract the longest sequences of words sharing the same letters (i.e. anagrams):
<syntaxhighlight lang="fsharp">let xss = Seq.groupBy (Array.ofSeq >> Array.sort) (System.IO.File.ReadAllLines "unixdict.txt")
Seq.map snd xss |> Seq.filter (Seq.length >> ( = ) (Seq.map (snd >> Seq.length) xss |> Seq.max))</syntaxhighlight>
Note that it is necessary to convert the sorted letters in each word from sequences to arrays because the groupBy function uses the default comparison and sequences do not compare structurally (but arrays do in F#).
 
Takes 0.8s to return:
<syntaxhighlight lang="fsharp">val it : string seq seq =
seq
[seq ["abel"; "able"; "bale"; "bela"; "elba"];
 
=={{header|Fantom}}==
<syntaxhighlight lang="fantom">class Main
{
// take given word and return a string rearranging characters in order
=={{header|Fortran}}==
This program:
<syntaxhighlight lang="fortran">!***************************************************************************************
module anagram_routines
!***************************************************************************************
=={{header|FBSL}}==
'''A little bit of cheating: literatim re-implementation of C solution in FBSL's Dynamic C layer.'''
<syntaxhighlight lang="c">#APPTYPE CONSOLE
 
DIM gtc = GetTickCount()
 
=={{header|Factor}}==
<syntaxhighlight lang="factor"> "resource:unixdict.txt" utf8 file-lines
[ [ natural-sort >string ] keep ] { } map>assoc sort-keys
[ [ first ] compare +eq+ = ] monotonic-split
dup 0 [ length max ] reduce '[ length _ = ] filter [ values ] map .</syntaxhighlight>
<syntaxhighlight lang="factor">{
{ "abel" "able" "bale" "bela" "elba" }
{ "caret" "carte" "cater" "crate" "trace" }
 
=={{header|FreeBASIC}}==
<syntaxhighlight lang="freebasic">' FB 1.05.0 Win64
 
Type IndexedWord
 
=={{header|Frink}}==
<syntaxhighlight lang="frink">
d = new dict
for w = lines["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"]
 
=={{header|FutureBasic}}==
Applications in the latest versions of Macintosh OS X 10.x are sandboxed and require setting special permissions to link to internet files. For illustration purposes here, this code uses the internal Unix dictionary file available in all versions of OS X.
 
<syntaxhighlight lang="futurebasic">
This first example is a hybrid using FB's native dynamic global array combined with Core Foundation functions:
include "NSLog.incl"
<syntaxhighlight lang=futurebasic>
include "ConsoleWindow"
 
local fn Dictionary as CFArrayRef
def tab 9
CFURLRef url = fn URLFileURLWithPath( @"/usr/share/dict/words" )
CFStringRef string = fn StringWithContentsOfURL( url, NSUTF8StringEncoding, NULL )
end fn = fn StringComponentsSeparatedByString( string, @"\n" )
 
local fn IsAnagram( wrd1 as CFStringRef, wrd2 as CFStringRef ) as BOOL
NSUInteger i
BOOL result = NO

if ( len(wrd1) != len(wrd2) ) then exit fn
if ( fn StringCompare( wrd1, wrd2 ) == NSOrderedSame ) then exit fn
CFMutableArrayRef mutArr1 = fn MutableArrayWithCapacity(0) : CFMutableArrayRef mutArr2 = fn MutableArrayWithCapacity(0)
for i = 0 to len(wrd1) - 1
MutableArrayAddObject( mutArr1, fn StringWithFormat( @"%C", fn StringCharacterAtIndex( wrd1, i ) ) )
MutableArrayAddObject( mutArr2, fn StringWithFormat( @"%C", fn StringCharacterAtIndex( wrd2, i ) ) )
next
SortDescriptorRef sd = fn SortDescriptorWithKeyAndSelector( NULL, YES, @"caseInsensitiveCompare:" )
if ( fn ArrayIsEqual( fn ArraySortedArrayUsingDescriptors( mutArr1, @[sd] ), fn ArraySortedArrayUsingDescriptors( mutArr2, @[sd] ) ) ) then result = YES
end fn = result
 
void local fn FindAnagramsInDictionary( wd as CFStringRef, dict as CFArrayRef )
CFStringRef string, temp
CFMutableArrayRef words = fn MutableArrayWithCapacity(0)
for temp in dict
if ( fn IsAnagram( lcase( wd ), temp ) ) then MutableArrayAddObject( words, temp )
next
string = fn ArrayComponentsJoinedByString( words, @", " )
NSLogSetTextColor( fn ColorText ) : NSLog( @"Anagrams for %@:", lcase(wd) )
NSLogSetTextColor( fn ColorSystemBlue ) : NSLog(@"%@\n",string)
end fn
 
void local fn DoIt
CFArrayRef dictionary = fn Dictionary
dispatchglobal
CFStringRef string
CFArrayRef words = @[@"bade",@"abet",@"beast",@"tuba",@"mace",@"scare",@"marine",@"antler",@"spare",@"leading",@"alerted",@"allergy",@"research",@"hustle",@"oriental",@"creationism",@"resistance",@"mountaineer"]
for string in words
fn FindAnagramsInDictionary( string, dictionary )
next
dispatchend
end fn

fn DoIt

HandleEvents
</syntaxhighlight>
Output:
</pre>
 
This version fulfils the task description.

<syntaxhighlight lang="futurebasic">
include "NSLog.incl"

#plist NSAppTransportSecurity @{NSAllowsArbitraryLoads:YES}

local fn Dictionary as CFArrayRef
CFURLRef url = fn URLWithString( @"http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" )
CFStringRef string = fn StringWithContentsOfURL( url, NSUTF8StringEncoding, NULL )
end fn = fn StringComponentsSeparatedByCharactersInSet( string, fn CharacterSetNewlineSet )

local fn TestIndexes( array as CFArrayRef, obj as CFTypeRef, index as NSUInteger, stp as ^BOOL, userData as ptr ) as BOOL
end fn = fn StringIsEqual( obj, userData )

void local fn IndexSetEnumerator( set as IndexSetRef, index as NSUInteger, stp as ^BOOL, userData as ptr )
NSLog(@"\t%@\b",fn ArrayObjectAtIndex( userData, index ))
end fn

void local fn DoIt
CFArrayRef words
CFMutableArrayRef sortedWords, letters
CFStringRef string, sortedString
IndexSetRef indexes
long i, j, count, indexCount, maxCount = 0, length
CFMutableDictionaryRef anagrams
CFTimeInterval ti
ti = fn CACurrentMediaTime
NSLog(@"Searching...")
// create another word list with sorted letters
words = fn Dictionary
count = len(words)
sortedWords = fn MutableArrayWithCapacity(count)
for string in words
length = len(string)
letters = fn MutableArrayWithCapacity(length)
for i = 0 to length - 1
MutableArrayAddObject( letters, mid(string,i,1) )
next
MutableArraySortUsingSelector( letters, @"compare:" )
sortedString = fn ArrayComponentsJoinedByString( letters, @"" )
MutableArrayAddObject( sortedWords, sortedString )
next
// search for identical sorted words
anagrams = fn MutableDictionaryWithCapacity(0)
for i = 0 to count - 2
j = i + 1
indexes = fn ArrayIndexesOfObjectsAtIndexesPassingTest( sortedWords, fn IndexSetWithIndexesInRange( fn CFRangeMake(j,count-j) ), NSEnumerationConcurrent, @fn TestIndexes, (ptr)sortedWords[i] )
indexCount = len(indexes)
if ( indexCount > maxCount )
maxCount = indexCount
MutableDictionaryRemoveAllObjects( anagrams )
end if
if ( indexCount == maxCount )
MutableDictionarySetValueForKey( anagrams, indexes, words[i] )
end if
next
// show results
NSLogClear
for string in anagrams
NSLog(@"%@\b",string)
indexes = anagrams[string]
IndexSetEnumerateIndexes( indexes, @fn IndexSetEnumerator, (ptr)words )
NSLog(@"")
next
NSLog(@"\nCalculated in %0.6fs",fn CACurrentMediaTime - ti)
end fn

dispatchglobal
fn DoIt
dispatchend

HandleEvents
</syntaxhighlight>
{{out}}
<pre>
alger glare lager large regal
Anagrams for BADE:
caret carte cater crate trace
abed, bade, bead
elan lane lean lena neal
 
abel able bale bela elba
Anagrams for ABET:
evil levi live veil vile
abet, bate, beat, beta
angel angle galen glean lange
 
Anagrams for BEAST:
baste, beast, tabes
 
Anagrams for TUBA:
abut, tabu, tuba
 
Anagrams for MACE:
acme, came, mace
 
Anagrams for SCARE:
carse, caser, ceras, scare, scrae
 
Anagrams for MARINE:
marine, remain
 
Anagrams for ANTLER:
altern, antler, learnt, rental, ternal
 
Anagrams for SPARE:
asper, parse, prase, spaer, spare, spear
 
Anagrams for LEADING:
adeling, dealing, leading
 
Anagrams for ALERTED:
delater, related, treadle
 
Anagrams for ALLERGY:
allergy, gallery, largely, regally
 
Anagrams for RESEARCH:
rechaser, research, searcher
 
Anagrams for HUSTLE:
hustle, sleuth
 
Anagrams for ORIENTAL:
oriental, relation
 
Anagrams for CREATIONISM:
anisometric, creationism, miscreation, ramisection, reactionism
 
Anagrams for RESISTANCE:
resistance, senatrices
 
Anagrams for MOUNTAINEER:
enumeration, mountaineer
 
Calculated in 2.409008s
</pre>
 
=={{header|GAP}}==
<syntaxhighlight lang="gap">Anagrams := function(name)
local f, p, L, line, word, words, swords, res, cur, r;
words := [ ];
Line 4,350 ⟶ 4,280:
 
=={{header|Go}}==
<syntaxhighlight lang="go">package main
 
import (
Line 4,409 ⟶ 4,339:
=={{header|Groovy}}==
This program:
<syntaxhighlight lang="groovy">def words = new URL('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').text.readLines()
def groups = words.groupBy{ it.toList().sort() }
def bigGroupSize = groups.collect{ it.value.size() }.max()
Line 4,425 ⟶ 4,355:
 
=={{header|Haskell}}==
<syntaxhighlight lang="haskell">import Data.List
 
groupon f x y = f x == f y
Line 4,436 ⟶ 4,366:
mapM_ (print . map snd) . filter ((==mxl).length) $ wix</syntaxhighlight>
{{out}}
<syntaxhighlight lang="haskell">*Main> main
["abel","able","bale","bela","elba"]
["caret","carte","cater","crate","trace"]
Line 4,446 ⟶ 4,376:
and we can noticeably speed up the second stage sorting and grouping by packing the String lists of Chars to the Text type:
 
<syntaxhighlight lang="haskell">import Data.List (groupBy, maximumBy, sort)
import Data.Ord (comparing)
import Data.Function (on)
Line 4,467 ⟶ 4,397:
 
=={{header|Icon}} and {{header|Unicon}}==
<syntaxhighlight lang="icon">procedure main(args)
every writeSet(!getLongestAnagramSets())
end
Line 4,513 ⟶ 4,443:
=={{header|J}}==
If the unixdict file has been retrieved and saved in the current directory (for example, using wget):
<syntaxhighlight lang="j"> (#~ a: ~: {:"1) (]/.~ /:~&>) <;._2 ] 1!:1 <'unixdict.txt'
+-----+-----+-----+-----+-----+
|abel |able |bale |bela |elba |
Line 4,528 ⟶ 4,458:
+-----+-----+-----+-----+-----+</syntaxhighlight>
Explanation:
<syntaxhighlight lang="j">   (#~ a: ~: {:"1) (]/.~ /:~&>) <;._2 ] 1!:1 <'unixdict.txt'
This reads in the dictionary and produces a list of boxes. Each box contains one line (one word) from the dictionary.
<syntaxhighlight lang="j">   (]/.~ /:~&>)</syntaxhighlight>
This groups the words into rows where anagram equivalents appear in the same row. In other words, it creates a copy of the original list in which the characters within each box have been sorted, then organizes the contents of the original list into rows, with each row keyed by the values in the new list.
<syntaxhighlight lang="j">   (#~ a: ~: {:"1)</syntaxhighlight>
This selects rows whose last element is not an empty box.<br>
(In the previous step we created an array of rows of boxes. The short rows were automatically padded with empty boxes so that all rows would be the same length.)
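The letter-sorting key that these J verbs implement is language-agnostic; as a cross-check, the same grouping can be sketched in a few lines of Python (an illustrative aside using a tiny made-up word list, not part of the J entry):

```python
from collections import defaultdict

def largest_anagram_sets(words):
    # Key each word by its sorted letters; anagrams share a key.
    groups = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)
    # Keep only the groups of maximal size.
    best = max(len(g) for g in groups.values())
    return [g for g in groups.values() if len(g) == best]

print(largest_anagram_sets(["abel", "able", "bale", "tuba", "abut"]))
```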
Line 4,539 ⟶ 4,469:
The key to this algorithm is the sorting of the characters in each word from the dictionary. The line <tt>Arrays.sort(chars);</tt> sorts all of the letters in the word in ascending order using a built-in [[quicksort]], so all of the words in the first group in the result end up under the key "aegln" in the anagrams map.
{{works with|Java|1.5+}}
<syntaxhighlight lang="java5">import java.net.*;
import java.io.*;
import java.util.*;
Line 4,570 ⟶ 4,500:
}</syntaxhighlight>
{{works with|Java|1.8+}}
<syntaxhighlight lang="java5">import java.net.*;
import java.io.*;
import java.util.*;
Line 4,634 ⟶ 4,564:
===ES5===
{{Works with|Node.js}}
<syntaxhighlight lang="javascript">var fs = require('fs');
var words = fs.readFileSync('unixdict.txt', 'UTF-8').split('\n');
 
Line 4,667 ⟶ 4,597:
 
Alternative using reduce:
<syntaxhighlight lang="javascript">var fs = require('fs');
var dictionary = fs.readFileSync('unixdict.txt', 'UTF-8').split('\n');
 
Line 4,696 ⟶ 4,626:
Using JavaScript for Automation
(A JavaScriptCore interpreter on macOS with an Automation library).
<syntaxhighlight lang="javascript">(() => {
'use strict';
 
Line 4,911 ⟶ 4,841:
 
=={{header|jq}}==
<syntaxhighlight lang="jq">def anagrams:
(reduce .[] as $word (
{table: {}, max: 0}; # state
Line 4,924 ⟶ 4,854:
</syntaxhighlight>
{{Out}}
<syntaxhighlight lang="sh">
$ jq -M -s -c -R -f anagrams.jq unixdict.txt
["abel","able","bale","bela","elba"]
Line 4,936 ⟶ 4,866:
=={{header|Jsish}}==
From Javascript, nodejs entry.
<syntaxhighlight lang="javascript">/* Anagrams, in Jsish */
var datafile = 'unixdict.txt';
if (console.args[0] == '-more' && Interp.conf('maxArrayList') > 500000)
Line 4,984 ⟶ 4,914:
=={{header|Julia}}==
{{works with|Julia|1.6}}
<syntaxhighlight lang="julia">url = "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"
wordlist = open(readlines, download(url))
 
Line 5,010 ⟶ 4,940:
 
=={{header|K}}==
<syntaxhighlight lang="k">{x@&a=|/a:#:'x}{x g@&1<#:'g:={x@<x}'x}0::`unixdict.txt</syntaxhighlight>
 
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang="scala">import java.io.BufferedReader
import java.io.InputStreamReader
import java.net.URL
Line 5,052 ⟶ 4,982:
 
=={{header|Lasso}}==
<syntaxhighlight lang="lasso">local(
anagrams = map,
words = include_url('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt')->split('\n'),
Line 5,089 ⟶ 5,019:
 
=={{header|Liberty BASIC}}==
<syntaxhighlight lang="lb">' count the word list
open "unixdict.txt" for input as #1
while not(eof(#1))
Line 5,164 ⟶ 5,094:
LiveCode could definitely use a sort-characters command. As it is, this code converts the letters into items and then sorts those. I wrote a merge sort for characters, but the convert-to-items, built-in sort, convert-back-to-string approach is about 10% faster, and certainly easier to write.
 
<syntaxhighlight lang="livecode">on mouseUp
put mostCommonAnagrams(url "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
end mouseUp
Line 5,213 ⟶ 5,143:
=={{header|Lua}}==
Lua's core library is very small and does not include built-in network functionality. If a networking library were imported, the local file in the following script could be replaced with the remote dictionary file.
<syntaxhighlight lang="lua">function sort(word)
local bytes = {word:byte(1, -1)}
table.sort(bytes)
Line 5,246 ⟶ 5,176:
 
=={{header|M4}}==
<syntaxhighlight lang="m4">divert(-1)
changequote(`[',`]')
define([for],
Line 5,309 ⟶ 5,239:
The convert call discards the hashes, which have done their job, and leaves us with a list L of anagram sets.
Finally, we just note the size of the largest sets of anagrams, and pick those off.
<syntaxhighlight lang="maple">
words := HTTP:-Get( "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" )[2]: # ignore errors
use StringTools, ListTools in
Line 5,319 ⟶ 5,249:
</syntaxhighlight>
The result of running this code is
<syntaxhighlight lang="maple">
A := [{"abel", "able", "bale", "bela", "elba"}, {"angel", "angle", "galen",
"glean", "lange"}, {"alger", "glare", "lager", "large", "regal"}, {"evil",
Line 5,328 ⟶ 5,258:
=={{header|Mathematica}}/{{header|Wolfram Language}}==
Download the dictionary, split the lines, split the word in characters and sort them. Now sort by those words, and find sequences of equal 'letter-hashes'. Return the longest sequences:
<syntaxhighlight lang="mathematica">list=Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt","Lines"];
text={#,StringJoin@@Sort[Characters[#]]}&/@list;
text=SortBy[text,#[[2]]&];
Line 5,335 ⟶ 5,265:
Select[splits,Length[#]==maxlen&]</syntaxhighlight>
gives back:
<syntaxhighlight lang="mathematica">{{abel,able,bale,bela,elba},{caret,carte,cater,crate,trace},{angel,angle,galen,glean,lange},{alger,glare,lager,large,regal},{elan,lane,lean,lena,neal},{evil,levi,live,veil,vile}}</syntaxhighlight>
An alternative is faster, but requires version 7 (for <code>Gather</code>):
<syntaxhighlight lang="mathematica">splits = Gather[list, Sort[Characters[#]] == Sort[Characters[#2]] &];
maxlen = Max[Length /@ splits];
Select[splits, Length[#] == maxlen &]</syntaxhighlight>
 
Or, using built-in functions for sorting and gathering list elements, it can be implemented as:
<syntaxhighlight lang="mathematica">anagramGroups = GatherBy[SortBy[GatherBy[list,Sort[Characters[#]] &],Length],Length];
anagramGroups[[-1]]</syntaxhighlight>
Also, Mathematica's own word list is available; replacing the list definition with <code>list = WordData[];</code> and forcing <code>maxlen</code> to 5 yields instead this result:
Line 5,365 ⟶ 5,295:
 
Also, using Mathematica 10, it gets really concise:
<syntaxhighlight lang="mathematica">list=Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt","Lines"];
MaximalBy[GatherBy[list, Sort@*Characters], Length]</syntaxhighlight>
 
=={{header|Maxima}}==
<syntaxhighlight lang="maxima">read_file(name) := block([file, s, L], file: openr(name), L: [],
while stringp(s: readline(file)) do L: cons(s, L), close(file), L)$
 
Line 5,408 ⟶ 5,338:
["caret", "carte", "cater", "crate", "trace"],
["abel", "able", "bale", "bela", "elba"]] */</syntaxhighlight>
 
=={{header|MiniScript}}==
This implementation is for use with the [http://miniscript.org/MiniMicro Mini Micro] version of MiniScript. The command-line version does not include an HTTP library. The script can be modified to use the file class to read a local copy of the word list.
<syntaxhighlight lang="miniscript">
wordList = http.get("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt").split(char(10))
 
makeKey = function(word)
return word.split("").sort.join("")
end function
 
wordSets = {}
for word in wordList
k = makeKey(word)
if not wordSets.hasIndex(k) then
wordSets[k] = [word]
else
wordSets[k].push(word)
end if
end for
 
counts = []
 
for wordSet in wordSets.values
counts.push([wordSet.len, wordSet])
end for
counts.sort(0, false)
 
maxCount = counts[0][0]
for count in counts
if count[0] == maxCount then print count[1]
end for
</syntaxhighlight>
{{out}}
<pre>
["abel", "able", "bale", "bela", "elba"]
["alger", "glare", "lager", "large", "regal"]
["angel", "angle", "galen", "glean", "lange"]
["caret", "carte", "cater", "crate", "trace"]
["elan", "lane", "lean", "lena", "neal"]
["evil", "levi", "live", "veil", "vile"]</pre>
 
=={{header|MUMPS}}==
<syntaxhighlight lang="mumps">Anagrams New ii,file,longest,most,sorted,word
Set file="unixdict.txt"
Open file:"r" Use file
Line 5,460 ⟶ 5,431:
===Java&ndash;Like===
{{trans|Java}}
<syntaxhighlight lang="netrexx">/* NetRexx */
options replace format comments java crossref symbols nobinary
 
Line 5,534 ⟶ 5,505:
===Rexx&ndash;Like===
Implemented with more NetRexx idioms such as indexed strings, <tt>PARSE</tt>, and the NetRexx &quot;built-in functions&quot;.
<syntaxhighlight lang="netrexx">/* NetRexx */
options replace format comments java crossref symbols nobinary
 
Line 5,604 ⟶ 5,575:
 
=={{header|NewLisp}}==
<syntaxhighlight lang="newlisp">
;;; Get the words as a list, splitting at newline
(setq data
Line 5,644 ⟶ 5,615:
 
=={{header|Nim}}==
<syntaxhighlight lang="nim">
import tables, strutils, algorithm
 
Line 5,676 ⟶ 5,647:
=={{header|Oberon-2}}==
Oxford Oberon-2
<syntaxhighlight lang="oberon2">
MODULE Anagrams;
IMPORT Files,Out,In,Strings;
Line 5,845 ⟶ 5,816:
 
=={{header|Objeck}}==
<syntaxhighlight lang="objeck">use HTTP;
use Collection;
 
Line 5,895 ⟶ 5,866:
 
=={{header|OCaml}}==
<syntaxhighlight lang="ocaml">let explode str =
let l = ref [] in
let n = String.length str in
Line 5,931 ⟶ 5,902:
=={{header|Oforth}}==
 
<syntaxhighlight lang="oforth">import: mapping
import: collect
import: quicksort
Line 5,957 ⟶ 5,928:
Two versions of this, using different collection classes.
===Version 1: Directory of arrays===
<syntaxhighlight lang="oorexx">
-- This assumes you've already downloaded the following file and placed it
-- in the current directory: http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
Line 5,996 ⟶ 5,967:
===Version 2: Using the relation class===
This version appears to be the fastest.
<syntaxhighlight lang="oorexx">
-- This assumes you've already downloaded the following file and placed it
-- in the current directory: http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
Line 6,069 ⟶ 6,040:
 
=={{header|Oz}}==
<syntaxhighlight lang="oz">declare
%% Helper function
fun {ReadLines Filename}
Line 6,100 ⟶ 6,071:
 
=={{header|Pascal}}==
<syntaxhighlight lang="pascal">Program Anagrams;
 
// assumes a local file
Line 6,200 ⟶ 6,171:
 
=={{header|Perl}}==
<syntaxhighlight lang="perl">use List::Util 'max';
 
my @words = split "\n", do { local( @ARGV, $/ ) = ( 'unixdict.txt' ); <> };
Line 6,213 ⟶ 6,184:
}</syntaxhighlight>
If we calculate <code>$max</code>, then we don't need the CPAN module:
<syntaxhighlight lang="perl">push @{$anagram{ join '' => sort split '' }}, $_ for @words;
$max > @$_ or $max = @$_ for values %anagram;
@$_ == $max and print "@$_\n" for values %anagram;</syntaxhighlight>
Line 6,226 ⟶ 6,197:
=={{header|Phix}}==
copied from Euphoria and cleaned up slightly
<!--<syntaxhighlight lang="phix">-->
<span style="color: #004080;">integer</span> <span style="color: #000000;">fn</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">open</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"demo/unixdict.txt"</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"r"</span><span style="color: #0000FF;">)</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">words</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{},</span> <span style="color: #000000;">anagrams</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{},</span> <span style="color: #000000;">last</span><span style="color: #0000FF;">=</span><span style="color: #008000;">""</span><span style="color: #0000FF;">,</span> <span style="color: #000000;">letters</span>
Line 6,276 ⟶ 6,247:
 
=={{header|Phixmonti}}==
<syntaxhighlight lang="phixmonti">include ..\Utilitys.pmt
 
"unixdict.txt" "r" fopen var f
Line 6,320 ⟶ 6,291:
Other solution
 
<syntaxhighlight lang="phixmonti">include ..\Utilitys.pmt
 
( )
Line 6,367 ⟶ 6,338:
 
=={{header|PHP}}==
<syntaxhighlight lang="php"><?php
$words = explode("\n", file_get_contents('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'));
foreach ($words as $word) {
Line 6,383 ⟶ 6,354:
=={{header|Picat}}==
Using foreach loop:
<syntaxhighlight lang="picat">go =>
Dict = new_map(),
foreach(Line in read_file_lines("unixdict.txt"))
Line 6,406 ⟶ 6,377:
 
Same idea, but shorter version by (mis)using list comprehensions.
<syntaxhighlight lang="picat">go2 =>
M = new_map(),
_ = [_:W in read_file_lines("unixdict.txt"),S=sort(W),M.put(S,M.get(S,"")++[W])],
Line 6,419 ⟶ 6,390:
=={{header|PicoLisp}}==
A straight-forward implementation using 'group' takes 48 seconds on a 1.7 GHz Pentium:
<syntaxhighlight lang="picolisp">(flip
(by length sort
(by '((L) (sort (copy L))) group
(in "unixdict.txt" (make (while (line) (link @)))) ) ) )</syntaxhighlight>
Using a binary tree with the 'idx' function, it takes only 0.42 seconds on the same machine, a factor of 100 faster:
<syntaxhighlight lang="picolisp">(let Words NIL
(in "unixdict.txt"
(while (line)
Line 6,439 ⟶ 6,410:
 
=={{header|PL/I}}==
<syntaxhighlight lang="pl/i">/* Search a list of words, finding those having the same letters. */
 
word_test: proc options (main);
Line 6,514 ⟶ 6,485:
 
=={{header|Pointless}}==
<syntaxhighlight lang="pointless">output =
readFileLines("unixdict.txt")
|> reduce(logWord, {})
Line 6,539 ⟶ 6,510:
=={{header|PowerShell}}==
{{works with|PowerShell|2}}
<syntaxhighlight lang="powershell">$c = New-Object Net.WebClient
$words = -split ($c.DownloadString('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'))
$top_anagrams = $words `
Line 6,560 ⟶ 6,531:
evil, levi, live, veil, vile</pre>
Another way with more .Net methods is quite a different style, but drops the runtime from 2 minutes to 1.5 seconds:
<syntaxhighlight lang="powershell">$Timer = [System.Diagnostics.Stopwatch]::StartNew()
 
$uri = 'http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'
Line 6,605 ⟶ 6,576:
 
=={{header|Processing}}==
<syntaxhighlight lang="processing">import java.util.Map;
 
void setup() {
Line 6,642 ⟶ 6,613:
=={{header|Prolog}}==
{{works with|SWI-Prolog|5.10.0}}
<syntaxhighlight lang="prolog">:- use_module(library( http/http_open )).
 
anagrams:-
Line 6,702 ⟶ 6,673:
=={{header|PureBasic}}==
{{works with|PureBasic|4.4}}
<syntaxhighlight lang="purebasic">InitNetwork() ;
OpenConsole()
Line 6,789 ⟶ 6,760:
===Python 3.X Using defaultdict===
Python 3.2 shell input (IDLE)
<syntaxhighlight lang="python">>>> import urllib.request
>>> from collections import defaultdict
>>> words = urllib.request.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
Line 6,804 ⟶ 6,775:
===Python 2.7 version===
Python 2.7 shell input (IDLE)
<syntaxhighlight lang="python">>>> import urllib
>>> from collections import defaultdict
>>> words = urllib.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
Line 6,833 ⟶ 6,804:
{{trans|Haskell}}
{{works with|Python|2.6}} sort and then group using groupby()
<syntaxhighlight lang="python">>>> import urllib, itertools
>>> words = urllib.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
>>> len(words)
Line 6,859 ⟶ 6,830:
Or, disaggregating, speeding up a bit by avoiding the slightly expensive use of ''sorted'' as a key, updating for Python 3, and using a local ''unixdict.txt'':
{{Works with|Python|3.7}}
<syntaxhighlight lang="python">'''Largest anagram groups found in list of words.'''
 
from os.path import expanduser
Line 6,976 ⟶ 6,947:
 
=={{header|QB64}}==
<syntaxhighlight lang="qb64">
$CHECKING:OFF
' Warning: Keep the above line commented out until you know your newly edited code works.
Line 7,129 ⟶ 7,100:
 
'''2nd solution (by Steve McNeill):'''
<syntaxhighlight lang="qb64">
$CHECKING:OFF
SCREEN _NEWIMAGE(640, 480, 32)
Line 7,278 ⟶ 7,249:
 
'''Output:'''
<syntaxhighlight lang="qb64">
LOOPER: 7134 executions from start to finish, in one second.
Note, this is including disk access for new data each time.
Line 7,294 ⟶ 7,265:
=={{header|Quackery}}==
 
<syntaxhighlight lang="quackery">  $ "rosetta/unixdict.txt" sharefile drop nest$
[] swap witheach
[ dup sort
Line 7,328 ⟶ 7,299:
 
=={{header|R}}==
<syntaxhighlight lang="r">words <- readLines("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
word_group <- sapply(
strsplit(words, split=""), # this will split all words to single letters...
Line 7,352 ⟶ 7,323:
 
=={{header|Racket}}==
<syntaxhighlight lang="racket">
#lang racket
 
Line 7,389 ⟶ 7,360:
 
{{works with|Rakudo|2016.08}}
<syntaxhighlight lang="raku" line>my @anagrams = 'unixdict.txt'.IO.words.classify(*.comb.sort.join).values;
my $max = @anagrams».elems.max;
Line 7,406 ⟶ 7,377:
 
{{works with|Rakudo|2016.08}}
<syntaxhighlight lang="raku" line>.put for # print each element of the array made this way:
'unixdict.txt'.IO.words # load words from file
.classify(*.comb.sort.join) # group by common anagram
Line 7,414 ⟶ 7,385:
 
=={{header|RapidQ}}==
<syntaxhighlight lang="vb">
dim x as integer, y as integer
dim SortX as integer
Line 7,491 ⟶ 7,462:
 
=={{header|Rascal}}==
<syntaxhighlight lang="rascal">import Prelude;
 
list[str] OrderedRep(str word){
Line 7,503 ⟶ 7,474:
}</syntaxhighlight>
Returns:
<syntaxhighlight lang="rascal">value: [
{"glean","galen","lange","angle","angel"},
{"glare","lager","regal","large","alger"},
Line 7,513 ⟶ 7,484:
 
=={{header|Red}}==
<syntaxhighlight lang="red">Red []
 
m: make map! [] 25000
Line 7,544 ⟶ 7,515:
This version doesn't assume that the dictionary is in alphabetical order, &nbsp; nor does it assume the
<br>words are in any specific case &nbsp; (lower/upper/mixed).
<syntaxhighlight lang="rexx">/*REXX program finds words with the largest set of anagrams (of the same size). */
iFID= 'unixdict.txt' /*the dictionary input File IDentifier.*/
$=; !.=; ww=0; uw=0; most=0 /*initialize a bunch of REXX variables.*/
Line 7,592 ⟶ 7,563:
===version 1.2, optimized===
This optimized version eliminates the &nbsp; '''sortA''' &nbsp; subroutine and puts that subroutine's code in-line.
<syntaxhighlight lang="rexx">/*REXX program finds words with the largest set of anagrams (of the same size). */
iFID= 'unixdict.txt' /*the dictionary input File IDentifier.*/
$=; !.=; ww=0; uw=0; most=0 /*initialize a bunch of REXX variables.*/
Line 7,629 ⟶ 7,600:
===annotated version using &nbsp; PARSE===
(This algorithm actually utilizes a &nbsp; ''bin'' &nbsp; sort, &nbsp; one bin for each Latin letter.)
<syntaxhighlight lang="rexx">u= 'Halloween' /*word to be sorted by (Latin) letter.*/
upper u /*fast method to uppercase a variable. */
/*another: u = translate(u) */
Line 7,659 ⟶ 7,630:
 
===annotated version using a &nbsp; DO &nbsp; loop===
<syntaxhighlight lang="rexx">u= 'Halloween' /*word to be sorted by (Latin) letter.*/
upper u /*fast method to uppercase a variable. */
L=length(u) /*get the length of the word (in bytes)*/
Line 7,685 ⟶ 7,656:
 
===version 2===
<syntaxhighlight lang="rexx">/*REXX program finds words with the largest set of anagrams (same size)
* 07.08.2013 Walter Pachl
* sorta for word compression courtesy Gerard Schildberger,
Line 7,763 ⟶ 7,734:
 
=={{header|Ring}}==
<syntaxhighlight lang="ring">
# Project : Anagrams
 
Line 7,867 ⟶ 7,838:
 
=={{header|Ruby}}==
<syntaxhighlight lang="ruby">require 'open-uri'
 
anagram = Hash.new {|hash, key| hash[key] = []} # map sorted chars to anagrams
Line 7,895 ⟶ 7,866:
 
Short version (with lexical ordered result).
<syntaxhighlight lang="ruby">require 'open-uri'
 
anagrams = open('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'){|f| f.read.split.group_by{|w| w.each_char.sort} }
Line 7,911 ⟶ 7,882:
 
=={{header|Run BASIC}}==
<syntaxhighlight lang="runbasic">sqliteconnect #mem, ":memory:"
mem$ = "CREATE TABLE anti(gram,ordr);
CREATE INDEX ord ON anti(ordr)"
Line 7,987 ⟶ 7,958:
Unicode is hard, so the solution depends on what you consider to be an anagram: two strings that have the same bytes, the same codepoints, or the same graphemes. The first two are easily accomplished in Rust proper, but the latter requires an external library. Graphemes are probably the most correct way, but they are also the least efficient, since graphemes are variable-size and thus require a heap allocation per grapheme.
 
<syntaxhighlight lang="rust">use std::collections::HashMap;
use std::fs::File;
use std::io::{BufRead,BufReader};
Line 8,029 ⟶ 8,000:
If we assume an ASCII string, we can map each character to a prime number and multiply these together to create a number which uniquely maps to each anagram.
 
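The correctness of this trick rests on the commutativity of multiplication: permuting a word's letters only permutes the factors, so anagrams produce the same product. A few lines of Python make the point (the letter-to-prime table here is a made-up fragment for illustration, not the mapping the Rust code below uses):

```python
# Hypothetical letter -> prime table; a real one covers all 26 letters.
PRIMES = {"a": 2, "c": 3, "t": 5, "b": 7}

def key(word):
    # Product of one prime per letter; any reordering of the
    # same letters multiplies the same factors.
    n = 1
    for ch in word:
        n *= PRIMES[ch]
    return n

assert key("cat") == key("act") == 30  # 3*2*5 == 2*3*5
assert key("tab") != key("cat")        # 5*2*7 == 70
```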
<syntaxhighlight lang="rust">use std::collections::HashMap;
use std::path::Path;
use std::io::{self, BufRead, BufReader};
Line 8,073 ⟶ 8,044:
 
=={{header|Scala}}==
<syntaxhighlight lang="scala">val src = io.Source fromURL "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"
val vls = src.getLines.toList.groupBy(_.sorted).values
val max = vls.map(_.size).max
Line 8,088 ⟶ 8,059:
----
Another take:
<syntaxhighlight lang="scala">Source
.fromURL("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt").getLines.toList
.groupBy(_.sorted).values
Line 8,108 ⟶ 8,079:
Uses two SRFI libraries: SRFI 125 for hash tables and SRFI 132 for sorting.
 
<syntaxhighlight lang="scheme">
(import (scheme base)
(scheme char)
Line 8,163 ⟶ 8,134:
 
=={{header|Seed7}}==
<syntaxhighlight lang="seed7">$ include "seed7_05.s7i";
include "gethttp.s7i";
include "strifile.s7i";
Line 8,198 ⟶ 8,169:
var integer: maxLength is 0;
begin
dictFile := openStriFile(getHttp("wiki.puzzlers.org/pub/wordlists/unixdict.txt"));
while hasNext(dictFile) do
readln(dictFile, word);
Line 8,231 ⟶ 8,202:
 
=={{header|SETL}}==
<syntaxhighlight lang="setl">h := open('unixdict.txt', "r");
anagrams := {};
while not eof(h) loop
Line 8,280 ⟶ 8,251:
 
=={{header|Sidef}}==
<syntaxhighlight lang="ruby">func main(file) {
file.open_r(\var fh, \var err) ->
|| die "Can't open file `#{file}' for reading: #{err}\n";
Line 8,299 ⟶ 8,270:
 
=={{header|Simula}}==
<syntaxhighlight lang="simula">COMMENT COMPILE WITH
$ cim -m64 anagrams-hashmap.sim
;
Line 8,583 ⟶ 8,554:
 
=={{header|Smalltalk}}==
<syntaxhighlight lang="smalltalk">list:= (FillInTheBlank request: 'myMessageBoxTitle') subStrings: String crlf.
dict:= Dictionary new.
list do: [:val|
Line 8,607 ⟶ 8,578:
{{works with|Smalltalk/X}}
Instead of asking for the strings, read the file:
<syntaxhighlight lang="smalltalk">d := Dictionary new.
'unixdict.txt' asFilename
readingLinesDo:[:eachWord |
Line 8,628 ⟶ 8,599:
...</pre>
Not sure if getting the dictionary via HTTP is part of the task; if so, replace the file-reading with:
<syntaxhighlight lang="smalltalk">'http://wiki.puzzlers.org/pub/wordlists/unixdict.txt' asURI contents asCollectionOfLines do:[:eachWord | ...</syntaxhighlight>
 
=={{header|SNOBOL4}}==
{{works with|Macro Spitbol}}
Note: unixdict.txt is passed in locally via STDIN. Newlines must be converted for the Win/DOS environment.
<syntaxhighlight lang="snobol4">* # Sort letters of word
define('sortw(str)a,i,j') :(sortw_end)
sortw a = array(size(str))
Line 8,665 ⟶ 8,636:
 
=={{header|Stata}}==
<syntaxhighlight lang="stata">import delimited http://wiki.puzzlers.org/pub/wordlists/unixdict.txt, clear
mata
a=st_sdata(.,.)
Line 8,693 ⟶ 8,664:
 
=={{header|SuperCollider}}==
<syntaxhighlight lang="supercollider">(
var text, words, sorted, dict = IdentityDictionary.new, findMax;
File.use("unixdict.txt".resolveRelative, "r", { |f| text = f.readAllString });
Line 8,715 ⟶ 8,686:
 
Answers:
<syntaxhighlight lang="supercollider">[ [ angel, angle, galen, glean, lange ], [ caret, carte, cater, crate, trace ], [ elan, lane, lean, lena, neal ], [ evil, levi, live, veil, vile ], [ alger, glare, lager, large, regal ] ]</syntaxhighlight>
 
=={{header|Swift}}==
{{works with|Swift 2.0}}
 
<syntaxhighlight lang="swift">import Foundation
 
let wordsURL = NSURL(string: "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")!
Line 8,776 ⟶ 8,747:
 
=={{header|Tcl}}==
<syntaxhighlight lang="tcl">package require Tcl 8.5
package require http
 
Line 8,809 ⟶ 8,780:
 
=={{header|Transd}}==
Works with Transd v0.43.
 
<syntaxhighlight lang="scheme">#lang transd
 
MainModule: {
_start: (λ
(with fs FileStream() words String()
(open-r fs "/mnt/proj/tmp/unixdict.txt")
(textin fs words)
( -|
Line 8,838 ⟶ 8,808:
 
=={{header|TUSCRIPT}}==
<syntaxhighlight lang="tuscript">$$ MODE TUSCRIPT,{}
requestdata = REQUEST ("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
 
Line 8,886 ⟶ 8,856:
Process substitutions eliminate the need for command pipelines.
 
<syntaxhighlight lang="bash">http_get_body() {
local host=$1
local uri=$2
Line 8,935 ⟶ 8,905:
The algorithm is to group the words together that are made from the same unordered lists of letters, then collect the groups together that have the same number of words in
them, and then show the collection associated with the highest number.
<syntaxhighlight lang="ursala">#import std
 
#show+
Line 8,950 ⟶ 8,920:
 
=={{header|VBA}}==
<syntaxhighlight lang="vb">
Option Explicit
 
Line 9,128 ⟶ 9,098:
=={{header|VBScript}}==
A little convoluted, uses a dictionary and a recordset...
<syntaxhighlight lang="vb">
Const adInteger = 3
Const adVarChar = 200
Line 9,216 ⟶ 9,186:
 
The word list is expected to be in the same directory as the script.
<syntaxhighlight lang="vedit">File_Open("|(PATH_ONLY)\unixdict.txt")
 
Repeat(ALL) {
Line 9,291 ⟶ 9,261:
evil levi live veil vile
</pre>
 
=={{header|Visual Basic .NET}}==
<syntaxhighlight lang="vbnet">Imports System.IO
Imports System.Collections.ObjectModel
 
Line 9,368 ⟶ 9,337:
</PRE>
 
=={{header|V (Vlang)}}==
{{trans|Wren}}
<syntaxhighlight lang="v (vlang)">import os
 
fn main(){
Line 9,409 ⟶ 9,378:
=={{header|Wren}}==
{{libheader|Wren-sort}}
<syntaxhighlight lang="wren">import "io" for File
import "./sort" for Sort
 
var words = File.read("unixdict.txt").split("\n").map { |w| w.trim() }
Line 9,440 ⟶ 9,409:
 
=={{header|Yabasic}}==
<syntaxhighlight lang="yabasic">filename$ = "unixdict.txt"
maxw = 0 : c = 0 : dimens(c)
i = 0
Line 9,523 ⟶ 9,492:
 
=={{header|zkl}}==
<syntaxhighlight lang="zkl">File("unixdict.txt").read(*) // dictionary file to blob, copied from web
// blob to dictionary: key is word "fuzzed", values are anagram words
.pump(Void,T(fcn(w,d){
Line 9,551 ⟶ 9,520:
</pre>
In the case where it is desirable to get the dictionary from the web, use this code:
<syntaxhighlight lang="zkl">URL:="http://wiki.puzzlers.org/pub/wordlists/unixdict.txt";
var ZC=Import("zklCurl");
unixdict:=ZC().get(URL); //--> T(Data,bytes of header, bytes of trailer)
Line 9,559 ⟶ 9,528:
{{omit from|6502 Assembly|unixdict.txt is much larger than the CPU's address space.}}
{{omit from|8080 Assembly|See 6502 Assembly.}}
{{omit from|PARI/GP|No real capacity for string manipulation}}
{{omit from|Z80 Assembly|See 6502 Assembly.}}