Anagrams: Difference between revisions

7,332 bytes added, 1 month ago
m (Replace deprecated function)
(25 intermediate revisions by 14 users not shown)
When two or more words are composed of the same characters, but in a different order, they are called [[wp:Anagram|anagrams]].
 
;Task
Using the word list at   http://wiki.puzzlers.org/pub/wordlists/unixdict.txt,
<br>find the sets of words that share the same characters and that contain the most words in them.
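Nearly every solution below uses the same idea: reduce each word to a canonical key (its letters in sorted order), group the words that share a key, and keep the largest groups. A minimal sketch of that shared approach in Python, assuming a local copy of <code>unixdict.txt</code>:

```python
# Group words by their sorted letters: anagrams share the same key.
from collections import defaultdict

def largest_anagram_sets(words):
    groups = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)  # canonical key, e.g. "elba" -> "abel"
    biggest = max(len(g) for g in groups.values())
    return [g for g in groups.values() if len(g) == biggest]

if __name__ == "__main__":
    import os
    if os.path.exists("unixdict.txt"):  # assumed to have been downloaded already
        with open("unixdict.txt") as f:
            for group in largest_anagram_sets(f.read().split()):
                print(" ".join(group))
```

With unixdict.txt this prints six sets of five words each (abel/able/bale/bela/elba and so on), matching the outputs shown throughout this page.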
 
;Related tasks
 
{{Related tasks/Word plays}}
 
=={{header|11l}}==
{{trans|Python}}
<syntaxhighlight lang="11l">DefaultDict[String, Array[String]] anagram
L(word) File(‘unixdict.txt’).read().split("\n")
anagram[sorted(word).join(‘’)].append(word)
 
V count = max(anagram.values().map(ana -> ana.len))
L(ana) anagram.values()
I ana.len == count
print(ana)</syntaxhighlight>
{{out}}
<pre>
 
=={{header|8th}}==
<syntaxhighlight lang="8th">
\
\ anagrams.8th
bye
;
</syntaxhighlight>
=={{header|AArch64 Assembly}}==
{{works with|as|Raspberry Pi 3B version Buster 64 bits <br> or android 64 bits with application Termux }}
<syntaxhighlight lang="aarch64 assembly">
/* ARM assembly AARCH64 Raspberry PI 3B */
/* program anagram64.s */
/* for this file see task include a file in language AArch64 assembly */
.include "../includeARM64.inc"
</syntaxhighlight>
<pre>
~/.../rosetta/asm1 $ anagram64
</pre>
=={{header|ABAP}}==
<syntaxhighlight lang="abap">report zz_anagrams no standard page heading.
define update_progress.
call function 'SAPGUI_PROGRESS_INDICATOR'
return.
endif.
endform.</syntaxhighlight>
{{out}}
<pre>[ angel , angle , galen , glean , lange ]
 
=={{header|Ada}}==
<syntaxhighlight lang="ada">with Ada.Text_IO; use Ada.Text_IO;
 
with Ada.Containers.Indefinite_Ordered_Maps;
Iterate (Result, Put'Access);
Close (File);
end Words_Of_Equal_Characters;</syntaxhighlight>
{{out}}
<pre>
=={{header|ALGOL 68}}==
{{works with|ALGOL 68G|Any - tested with release 2.8.3.win32}} Uses the "read" PRAGMA of Algol 68 G to include the associative array code from the [[Associative_array/Iteration]] task.
<syntaxhighlight lang="algol68"># find longest list(s) of words that are anagrams in a list of words #
# use the associative array in the Associate array/iteration task #
PR read "aArray.a68" PR
e := NEXT words
OD
FI</syntaxhighlight>
{{out}}
<pre>
Line 865 ⟶ 866:
alger|glare|lager|large|regal
caret|carte|cater|crate|trace
</pre>
 
=={{header|Amazing Hopper}}==
<syntaxhighlight lang="c">
#include <basico.h>
 
#define MAX_LINE 30
 
algoritmo
fd=0, filas=0
word={}, 2da columna={}
old_word="",new_word=""
dimensionar (1,2) matriz de cadenas 'result'
pos=0
token.separador'""'
 
abrir para leer("basica/unixdict.txt",fd)
 
iterar mientras ' no es fin de archivo (fd) '
usando 'MAX_LINE', leer línea desde(fd),
---copiar en 'old_word'---, separar para 'word '
word, ---retener--- ordenar esto,
encadenar en 'new_word'
 
matriz.buscar en tabla (1,new_word,result)
copiar en 'pos'
si ' es negativo? '
new_word,old_word, pegar fila en 'result'
sino
#( result[pos,2] = cat(result[pos,2],cat(",",old_word) ) )
fin si
 
reiterar
 
cerrar archivo(fd)
guardar 'filas de (result)' en 'filas'
#( 2da columna = result[2:filas, 2] )
fijar separador '","'
tomar '2da columna'
contar tokens en '2da columna' ---retener resultado,
obtener máximo valor,es mayor o igual?, replicar esto
compactar esto
 
fijar separador 'NL', luego imprime todo
terminar
</syntaxhighlight>
{{out}}
<pre>
abel,able,bale,bela,elba
alger,glare,lager,large,regal
angel,angle,galen,glean,lange
caret,carte,cater,crate,trace
elan,lane,lean,lena,neal
evil,levi,live,veil,vile
</pre>
 
This is a rough translation of the J version; intermediate values are kept and verb trains are not used, for clarity of data flow.
 
<syntaxhighlight lang="apl">
anagrams←{
tie←⍵ ⎕NTIE 0
({~' '∊¨(⊃/¯1↑[2]⍵)}ana)⌿ana ⋄ ⎕NUNTIE
}
</syntaxhighlight>
On a Unix system we can assume wget exists and can use it from Dyalog to download the file.
 
 
'''Example:'''
<syntaxhighlight lang="apl">
⎕SH'wget http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'
]display anagrams 'unixdict.txt'
</syntaxhighlight>
'''Output:'''
<pre>
 
=={{header|AppleScript}}==
<syntaxhighlight lang="applescript">use AppleScript version "2.3.1" -- OS X 10.9 (Mavericks) or later.

use sorter : script ¬
	"Custom Iterative Ternary Merge Sort" -- <www.macscripter.net/t/timsort-and-nigsort/71383/3>
-- It's assumed scripters will know how and where to install it as a library.
use scripting additions

on join(lst, delim)
	set astid to AppleScript's text item delimiters
	set AppleScript's text item delimiters to delim
	set txt to lst as text
	set AppleScript's text item delimiters to astid
	return txt
end join

on largestAnagramGroups(listOfWords)
	script o
		property wordList : listOfWords
		property groupingTexts : wordList's items
		property largestGroupSize : 0
		property largestGroupRanges : {}
		
		on judgeGroup(i, j)
			set groupSize to j - i + 1
			if (groupSize < largestGroupSize) then -- Most likely.
			else if (groupSize = largestGroupSize) then -- Next most likely.
				set end of largestGroupRanges to {i, j}
			else -- Largest group so far.
				set largestGroupRanges to {{i, j}}
				set largestGroupSize to groupSize
			end if
		end judgeGroup
		
		on isGreater(a, b)
			return a's beginning > b's beginning
		end isGreater
	end script
	
	set wordCount to (count o's wordList)
	ignoring case
		-- Replace the words in the groupingTexts list with sorted-character versions.
		repeat with i from 1 to wordCount
			set chrs to o's groupingTexts's item i's characters
			tell sorter to sort(chrs, 1, -1, {})
			set o's groupingTexts's item i to join(chrs, "")
		end repeat
		-- Sort the list to group its contents and echo the moves in the original word list.
		tell sorter to sort(o's groupingTexts, 1, wordCount, {slave:{o's wordList}})
		-- Find the list range(s) of the largest group(s) of equal grouping texts.
		set i to 1
		set currentText to beginning of o's groupingTexts
		repeat with j from 2 to wordCount
			set thisText to o's groupingTexts's item j
			if (thisText is not currentText) then
				tell o to judgeGroup(i, j - 1)
				set currentText to thisText
				set i to j
			end if
		end repeat
		if (j > i) then tell o to judgeGroup(i, j)
		-- Extract the group(s) of words occupying the same range(s) in the original word list.
		set output to {}
		repeat with thisRange in o's largestGroupRanges
			set {i, j} to thisRange
			-- Add this group to the output.
			set thisGroup to o's wordList's items i thru j
			tell sorter to sort(thisGroup, 1, -1, {}) -- Not necessary with unixdict.txt. But hey.
			set end of output to thisGroup
		end repeat
		-- As a final flourish, sort the list of groups on their first items.
		tell sorter to sort(output, 1, -1, {comparer:o})
	end ignoring
	
	return output
end largestAnagramGroups

-- The closing values of AppleScript 'run handler' variables not explicitly declared local are
-- saved back to the script file afterwards — and "unixdict.txt" contains 25,104 words!
local wordFile, wordList
-- The words in "unixdict.txt" are arranged one per line in alphabetical order.
-- Some contain punctuation characters, so they're best extracted as 'paragraphs' rather than as 'words'.
set wordFile to ((path to desktop as text) & "unixdict.txt") as «class furl»
set wordList to paragraphs of (read wordFile as «class utf8»)
return largestAnagramGroups(wordList)</syntaxhighlight>
 
{{output}}
<syntaxhighlight lang="applescript">{{"abel", "able", "bale", "bela", "elba"}, {"alger", "glare", "lager", "large", "regal"}, {"angel", "angle", "galen", "glean", "lange"}, {"caret", "carte", "cater", "crate", "trace"}, {"elan", "lane", "lean", "lena", "neal"}, {"evil", "levi", "live", "veil", "vile"}}</syntaxhighlight>
 
=={{header|ARM Assembly}}==
{{works with|as|Raspberry Pi <br> or android 32 bits with application Termux}}
<syntaxhighlight lang="arm assembly">
/* ARM assembly Raspberry PI */
/* program anagram.s */
/***************************************************/
.include "../affichage.inc"
</syntaxhighlight>
<pre>
bale able bela abel elba
=={{header|Arturo}}==
 
<syntaxhighlight lang="rebol">wordset: map read.lines relative "unixdict.txt" => strip
 
anagrams: #[]
 
loop select values anagrams 'x [5 =< size x] 'words ->
print join.with:", " words</syntaxhighlight>
 
{{out}}
=={{header|AutoHotkey}}==
Following code should work for AHK 1.0.* and 1.1* versions:
<syntaxhighlight lang="autohotkey">FileRead, Contents, unixdict.txt
Loop, Parse, Contents, % "`n", % "`r"
{ ; parsing each line of the file we just read
Else ; output only those sets of letters that scored the maximum amount of common words
Break
MsgBox, % ClipBoard := SubStr(var_Output,2) ; the result is also copied to the clipboard</syntaxhighlight>
{{out}}
<pre>
 
=={{header|AWK}}==
<syntaxhighlight lang="awk"># JUMBLEA.AWK - words with the most duplicate spellings
# syntax: GAWK -f JUMBLEA.AWK UNIXDICT.TXT
{ for (i=1; i<=NF; i++) {
}
return(str)
}</syntaxhighlight>
{{out}}
<pre>
Alternatively, non-POSIX version:
{{works with|gawk}}
<syntaxhighlight lang="awk">#!/bin/gawk -f
 
{ patsplit($0, chars, ".")
if (count[i] == countMax)
print substr(accum[i], 2)
}</syntaxhighlight>
 
=={{header|BASIC}}==
==={{header|BaCon}}===
<syntaxhighlight lang="freebasic">OPTION COLLAPSE TRUE
 
DECLARE idx$ ASSOC STRING
Line 1,511 ⟶ 1,571:
FOR y = 0 TO x-1
IF MaxCount = AMOUNT(idx$(n$[y])) THEN PRINT n$[y], ": ", idx$(n$[y])
NEXT</syntaxhighlight>
{{out}}
<pre>
Line 1,524 ⟶ 1,584:
</pre>
 
==={{header|BBC BASIC}}===
{{works with|BBC BASIC for Windows}}
<syntaxhighlight lang="bbcbasic"> INSTALL @lib$+"SORTLIB"
sort% = FN_sortinit(0,0)
C% = LEN(word$)
CALL sort%, char&(0)
= $$^char&(0)</syntaxhighlight>
{{out}}
<pre>
=={{header|BQN}}==
 
<syntaxhighlight lang="bqn">words ← •FLines "unixdict.txt"
•Show¨{𝕩/˜(⊢=⌈´)≠¨𝕩} (⊐∧¨)⊸⊔ words</syntaxhighlight>
<syntaxhighlight lang="bqn">⟨ "abel" "able" "bale" "bela" "elba" ⟩
⟨ "alger" "glare" "lager" "large" "regal" ⟩
⟨ "angel" "angle" "galen" "glean" "lange" ⟩
⟨ "caret" "carte" "cater" "crate" "trace" ⟩
⟨ "elan" "lane" "lean" "lena" "neal" ⟩
⟨ "evil" "levi" "live" "veil" "vile" ⟩</syntaxhighlight>
 
Assumes that <code>unixdict.txt</code> is in the same folder. The [[mlochbaum/BQN|JS implementation]] must be run in Node.js to have access to the filesystem.
This solution makes extensive use of Bracmat's computer algebra mechanisms. A trick is needed to handle words that are merely repetitions of a single letter, such as <code>iii</code>. That's why the variable <code>sum</code> isn't initialised with <code>0</code>, but with a non-number, in this case the empty string. Also, the correct handling of the characters 0-9 needs a trick so that they are not numerically added: they are prepended with a non-digit, an <code>N</code> in this case. After completely traversing the word list, the program writes a file <code>product.txt</code> that can be visually inspected.
The program is not fast. (Minutes rather than seconds.)
<syntaxhighlight lang="bracmat">( get$("unixdict.txt",STR):?list
& 1:?product
& whl
| out$!group
)
);</syntaxhighlight>
{{out}}
<pre> abel+able+bale+bela+elba
 
=={{header|C}}==
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <string.h>
fclose(f1);
return 0;
}</syntaxhighlight>
{{out}} (less than 1 second on old P500)
<pre>5:vile, veil, live, levi, evil,
</pre>
A much shorter version with no fancy data structures:
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <string.h>
close(fd);
return 0;
}</syntaxhighlight>
{{out}}
<pre>
 
=={{header|C sharp|C#}}==
<syntaxhighlight lang="csharp">using System;
using System.IO;
using System.Linq;
}
}
}</syntaxhighlight>
{{out}}
<pre>
 
=={{header|C++}}==
<syntaxhighlight lang="cpp">#include <iostream>
#include <fstream>
#include <string>
}
return 0;
}</syntaxhighlight>
{{out}}
abel, able, bale, bela, elba,
=={{header|Clojure}}==
Assume ''wordfile'' is the path of the local file containing the words. This code makes a map (''groups'') whose keys are sorted letters and values are lists of the key's anagrams. It then determines the length of the longest list, and prints out all the lists of that length.
<syntaxhighlight lang="clojure">(require '[clojure.java.io :as io])
 
(def groups
maxlength (count (first wordlists))]
(doseq [wordlist (take-while #(= (count %) maxlength) wordlists)]
(println wordlist))</syntaxhighlight>
 
<syntaxhighlight lang="clojure">
(->> (slurp "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
clojure.string/split-lines
;; ["evil" "levi" "live" "veil" "vile"]
;; ["abel" "able" "bale" "bela" "elba"])
</syntaxhighlight>
 
=={{header|CLU}}==
<syntaxhighlight lang="clu">% Keep a list of anagrams
anagrams = cluster is new, add, largest_size, sets
anagram_set = struct[letters: string, words: array[string]]
stream$putl(po, "")
end
end start_up</syntaxhighlight>
{{out}}
<pre>Largest amount of anagrams per set: 5
Tested with GnuCOBOL 2.0. ALLWORDS output display trimmed for width.
 
<syntaxhighlight lang="cobol"> *> TECTONICS
*> wget http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
*> or visit https://sourceforge.net/projects/souptonuts/files
.
 
end program anagrams.</syntaxhighlight>
 
{{out}}
 
=={{header|CoffeeScript}}==
<syntaxhighlight lang="coffeescript">http = require 'http'
 
show_large_anagram_sets = (word_lst) ->
req.end()
get_word_list show_large_anagram_sets</syntaxhighlight>
{{out}}
<syntaxhighlight lang="coffeescript">> coffee anagrams.coffee
[ 'abel', 'able', 'bale', 'bela', 'elba' ]
[ 'alger', 'glare', 'lager', 'large', 'regal' ]
Line 2,460 ⟶ 2,520:
[ 'caret', 'carte', 'cater', 'crate', 'trace' ]
[ 'elan', 'lane', 'lean', 'lena', 'neal' ]
[ 'evil', 'levi', 'live', 'veil', 'vile' ]</syntaxhighlight>
 
=={{header|Common Lisp}}==
{{libheader|DRAKMA}} to retrieve the wordlist.
<syntaxhighlight lang="lisp">(defun anagrams (&optional (url "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"))
(let ((words (drakma:http-request url :want-stream t))
(wordsets (make-hash-table :test 'equalp)))
else if (eql (car pair) maxcount)
do (push (cdr pair) maxwordsets)
finally (return (values maxwordsets maxcount)))))</syntaxhighlight>
Evaluating
<syntaxhighlight lang="lisp">(multiple-value-bind (wordsets count) (anagrams)
(pprint wordsets)
(print count))</syntaxhighlight>
{{out}}
<pre>(("vile" "veil" "live" "levi" "evil")
5</pre>
Another method, assuming file is local:
<syntaxhighlight lang="lisp">(defun read-words (file)
(with-open-file (stream file)
(loop with w = "" while w collect (setf w (read-line stream nil)))))
longest))
 
(format t "~{~{~a ~}~^~%~}" (anagram "unixdict.txt"))</syntaxhighlight>
{{out}}
<pre>elba bela bale able abel
=={{header|Component Pascal}}==
BlackBox Component Builder
<syntaxhighlight lang="oberon2">
MODULE BbtAnagrams;
IMPORT StdLog,Files,Strings,Args;
END BbtAnagrams.
</syntaxhighlight>
Execute:^Q BbtAnagrams.DoProcess unixdict.txt~<br/>
{{out}}
=={{header|Crystal}}==
{{trans|Ruby}}
<syntaxhighlight lang="ruby">require "http/client"
 
response = HTTP::Client.get("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
anagram.each_value { |ana| puts ana if ana.size >= count }
end
</syntaxhighlight>
</lang>
 
{{out}}
=={{header|D}}==
===Short Functional Version===
<syntaxhighlight lang="d">import std.stdio, std.algorithm, std.string, std.exception, std.file;
 
void main() {
immutable m = an.byValue.map!q{ a.length }.reduce!max;
writefln("%(%s\n%)", an.byValue.filter!(ws => ws.length == m));
}</syntaxhighlight>
{{out}}
<pre>["caret", "carte", "cater", "crate", "trace"]
===Faster Version===
Less safe, same output.
<syntaxhighlight lang="d">void main() {
import std.stdio, std.algorithm, std.file, std.string;
 
immutable m = anags.byValue.map!q{ a.length }.reduce!max;
writefln("%(%-(%s %)\n%)", anags.byValue.filter!(ws => ws.length == m));
}</syntaxhighlight>
Runtime: about 0.06 seconds.
 
{{libheader| System.Classes}}
{{libheader| System.Diagnostics}}
<syntaxhighlight lang="delphi">
program AnagramsTest;
 
end.
 
</syntaxhighlight>
 
{{out}}
 
=={{header|E}}==
<syntaxhighlight lang="e">println("Downloading...")
when (def wordText := <http://wiki.puzzlers.org/pub/wordlists/unixdict.txt> <- getText()) -> {
def words := wordText.split("\n")
println(anagramGroup.snapshot())
}
}</syntaxhighlight>
 
=={{header|EchoLisp}}==
For a change, we will use the french dictionary - '''(lib 'dico.fr)''' - delivered within EchoLisp.
<syntaxhighlight lang="scheme">
(require 'struct)
(require 'hash)
(cdr h))
))
</syntaxhighlight>
{{out}}
<syntaxhighlight lang="scheme">
(length mots-français)
→ 209315
→ { alisen enlias enlisa ensila islaen islean laines lianes salien saline selina }
 
</syntaxhighlight>
 
=={{header|Eiffel}}==
<syntaxhighlight lang="eiffel">
class
ANAGRAMS
 
end
</syntaxhighlight>
{{out}}
<pre>
=={{header|Ela}}==
{{trans|Haskell}}
<syntaxhighlight lang="ela">open monad io list string
 
groupon f x y = f x == f y
let wix = groupBy (groupon fst) << sort $ zip (map sort words) words
let mxl = maximum $ map length wix
mapM_ (putLn << map snd) << filter ((==mxl) << length) $ wix</syntaxhighlight>
 
{{out}}<pre>["vile","veil","live","levi","evil"]
 
=={{header|Elena}}==
ELENA 6.x:
<syntaxhighlight lang="elena">import system'routines;
import system'calendar;
import system'io;
import extensions'routines;
import extensions'text;
import algorithms;
 
extension op
auto dictionary := new Map<string,object>();
 
File.assign("unixdict.txt").forEachLine::(word)
{
var key := word.normalized();
Line 3,176 ⟶ 3,237:
};
item.append:(word)
};
 
dictionary.Values
.quickSort::(former,later => former.Item2.Length > later.Item2.Length )
.top:(20)
.forEach::(pair){ console.printLine(pair.Item2) };
var end := now;
console.readChar()
}</syntaxhighlight>
{{out}}
<pre>
alger,glare,lager,large,regal
angel,angle,galen,glean,lange
abel,able,bale,bela,elba
caret,carte,cater,crate,trace
evil,levi,live,veil,vile
elan,lane,lean,lena,neal
are,ear,era,rae
dare,dear,erda,read
diet,edit,tide,tied
cereus,recuse,rescue,secure
ames,mesa,same,seam
emit,item,mite,time
amen,mane,mean,name
enol,leon,lone,noel
esprit,priest,sprite,stripe
beard,bread,debar,debra
hare,hear,hera,rhea
apt,pat,pta,tap
aden,dane,dean,edna
aires,aries,arise,raise
keats,skate,stake,steak
lament,mantel,mantle,mental
lascar,rascal,sacral,scalar
latus,sault,talus,tulsa
leap,pale,peal,plea
resin,rinse,risen,siren
</pre>
 
=={{header|Elixir}}==
<syntaxhighlight lang="elixir">defmodule Anagrams do
def find(file) do
File.read!(file)
end
 
Anagrams.find("unixdict.txt")</syntaxhighlight>
 
{{out}}
 
The same output, using <code>File.stream!</code> to generate <code>tuples</code> containing the word and its sorted value as <code>strings</code>.
<syntaxhighlight lang="elixir">File.stream!("unixdict.txt")
|> Stream.map(&String.strip &1)
|> Enum.group_by(&String.codepoints(&1) |> Enum.sort)
|> Enum.max
|> elem(1)
|> Enum.each(fn n -> Enum.sort(n) |> Enum.join(" ") |> IO.puts end)</syntaxhighlight>
 
{{out}}
=={{header|Erlang}}==
The function fetch/2 is used to solve [[Anagrams/Deranged_anagrams]]. Please keep backwards compatibility when editing. Or update the other module, too.
<syntaxhighlight lang="erlang">-module(anagrams).
-compile(export_all).
 
get_value([], _, _, L) ->
L.
</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Euphoria}}==
<syntaxhighlight lang="euphoria">include sort.e
 
function compare_keys(sequence a, sequence b)
puts(1,"\n")
end if
end for</syntaxhighlight>
{{out}}
<pre>abel bela bale elba able
=={{header|F Sharp|F#}}==
Read the lines in the dictionary, group by the sorted letters in each word, find the length of the longest sets of anagrams, extract the longest sequences of words sharing the same letters (i.e. anagrams):
<syntaxhighlight lang="fsharp">let xss = Seq.groupBy (Array.ofSeq >> Array.sort) (System.IO.File.ReadAllLines "unixdict.txt")
Seq.map snd xss |> Seq.filter (Seq.length >> ( = ) (Seq.map (snd >> Seq.length) xss |> Seq.max))</syntaxhighlight>
Note that it is necessary to convert the sorted letters in each word from sequences to arrays because the groupBy function uses the default comparison and sequences do not compare structurally (but arrays do in F#).
 
Takes 0.8s to return:
<syntaxhighlight lang="fsharp">val it : string seq seq =
seq
[seq ["abel"; "able"; "bale"; "bela"; "elba"];
seq ["caret"; "carte"; "cater"; "crate"; "trace"];
seq ["elan"; "lane"; "lean"; "lena"; "neal"];
seq ["evil"; "levi"; "live"; "veil"; "vile"]]</syntaxhighlight>
 
=={{header|Fantom}}==
<syntaxhighlight lang="fantom">class Main
{
// take given word and return a string rearranging characters in order
}
}
}</syntaxhighlight>
 
{{out}}
=={{header|Fortran}}==
This program:
<syntaxhighlight lang="fortran">!***************************************************************************************
module anagram_routines
!***************************************************************************************
!***************************************************************************************
end program main
!***************************************************************************************</syntaxhighlight>
 
{{out}}
=={{header|FBSL}}==
'''A little bit of cheating: literatim re-implementation of C solution in FBSL's Dynamic C layer.'''
<syntaxhighlight lang="c">#APPTYPE CONSOLE
 
DIM gtc = GetTickCount()
fclose(f1);
}
END DYNC</syntaxhighlight>
{{out}} (2.2GHz Intel Core2 Duo)
<pre>25104 words in dictionary max ana=5
 
=={{header|Factor}}==
<syntaxhighlight lang="factor"> "resource:unixdict.txt" utf8 file-lines
[ [ natural-sort >string ] keep ] { } map>assoc sort-keys
[ [ first ] compare +eq+ = ] monotonic-split
dup 0 [ length max ] reduce '[ length _ = ] filter [ values ] map .</syntaxhighlight>
<syntaxhighlight lang="factor">{
{ "abel" "able" "bale" "bela" "elba" }
{ "caret" "carte" "cater" "crate" "trace" }
{ "elan" "lane" "lean" "lena" "neal" }
{ "evil" "levi" "live" "veil" "vile" }
}</syntaxhighlight>
 
=={{header|FreeBASIC}}==
<syntaxhighlight lang="freebasic">' FB 1.05.0 Win64
 
Type IndexedWord
Print
Print "Press any key to quit"
Sleep</syntaxhighlight>
 
{{out}}
 
=={{header|Frink}}==
<syntaxhighlight lang="frink">
d = new dict
for w = lines["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"]
i = i + 1
}
</syntaxhighlight>
 
=={{header|FutureBasic}}==
Applications in the latest versions of Macintosh OS X 10.x are sandboxed and require setting special permissions to link to internet files. For illustration purposes here, this code uses the internal Unix dictionary file available in all versions of OS X.

<syntaxhighlight lang="futurebasic">
include "NSLog.incl"

local fn Dictionary as CFArrayRef
  CFURLRef url = fn URLFileURLWithPath( @"/usr/share/dict/words" )
  CFStringRef string = fn StringWithContentsOfURL( url, NSUTF8StringEncoding, NULL )
end fn = fn StringComponentsSeparatedByString( string, @"\n" )

local fn IsAnagram( wrd1 as CFStringRef, wrd2 as CFStringRef ) as BOOL
  NSUInteger i
  BOOL result = NO

  if ( len(wrd1) != len(wrd2) ) then exit fn
  if ( fn StringCompare( wrd1, wrd2 ) == NSOrderedSame ) then exit fn
  CFMutableArrayRef mutArr1 = fn MutableArrayWithCapacity(0) : CFMutableArrayRef mutArr2 = fn MutableArrayWithCapacity(0)
  for i = 0 to len(wrd1) - 1
    MutableArrayAddObject( mutArr1, fn StringWithFormat( @"%C", fn StringCharacterAtIndex( wrd1, i ) ) )
    MutableArrayAddObject( mutArr2, fn StringWithFormat( @"%C", fn StringCharacterAtIndex( wrd2, i ) ) )
  next
  SortDescriptorRef sd = fn SortDescriptorWithKeyAndSelector( NULL, YES, @"caseInsensitiveCompare:" )
  if ( fn ArrayIsEqual( fn ArraySortedArrayUsingDescriptors( mutArr1, @[sd] ), fn ArraySortedArrayUsingDescriptors( mutArr2, @[sd] ) ) ) then result = YES
end fn = result

void local fn FindAnagramsInDictionary( wd as CFStringRef, dict as CFArrayRef )
  CFStringRef string, temp
  CFMutableArrayRef words = fn MutableArrayWithCapacity(0)
  for temp in dict
    if ( fn IsAnagram( lcase( wd ), temp ) ) then MutableArrayAddObject( words, temp )
  next
  string = fn ArrayComponentsJoinedByString( words, @", " )
  NSLogSetTextColor( fn ColorText ) : NSLog( @"Anagrams for %@:", lcase(wd) )
  NSLogSetTextColor( fn ColorSystemBlue ) : NSLog(@"%@\n",string)
end fn

void local fn DoIt
  CFArrayRef dictionary = fn Dictionary
  dispatchglobal
    CFStringRef string
    CFArrayRef words = @[@"bade",@"abet",@"beast",@"tuba",@"mace",@"scare",@"marine",@"antler",@"spare",@"leading",@"alerted",@"allergy",@"research",@"hustle",@"oriental",@"creationism",@"resistance",@"mountaineer"]
    for string in words
      fn FindAnagramsInDictionary( string, dictionary )
    next
  dispatchend
end fn

fn DoIt

HandleEvents
</syntaxhighlight>
Output:
<pre>
</pre>
 
This version fulfils the task description.

<syntaxhighlight lang="futurebasic">
include "NSLog.incl"

#plist NSAppTransportSecurity @{NSAllowsArbitraryLoads:YES}

local fn Dictionary as CFArrayRef
  CFURLRef url = fn URLWithString( @"http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" )
  CFStringRef string = fn StringWithContentsOfURL( url, NSUTF8StringEncoding, NULL )
end fn = fn StringComponentsSeparatedByCharactersInSet( string, fn CharacterSetNewlineSet )

local fn TestIndexes( array as CFArrayRef, obj as CFTypeRef, index as NSUInteger, stp as ^BOOL, userData as ptr ) as BOOL
end fn = fn StringIsEqual( obj, userData )

void local fn IndexSetEnumerator( set as IndexSetRef, index as NSUInteger, stp as ^BOOL, userData as ptr )
  NSLog(@"\t%@\b",fn ArrayObjectAtIndex( userData, index ))
end fn
CFRelease( bag2 )
end if
end fn = result
 
void local fn FindAnagrams( wd as CFStringRef )DoIt
CFArrayRef words
'~'1
dim as CFMutableArrayRef    words sortedWords, letters
CFStringRef string, sortedString
dim as CFMutableStringRef   wdUC
IndexSetRef indexes
dim as CFLocaleRef          locale
long i, j, count, indexCount, maxCount = 0, length
dim as CFStringRef          string
CFMutableDictionaryRef anagrams
dim as CFIndex              count, index
CFTimeInterval ti
dim as CFArrayRef           dict
ti = fn CACurrentMediaTime
NSLog(@"Searching...")
// create another word list with sorted letters
words = fn Dictionary
count = len(words)
sortedWords = fn MutableArrayWithCapacity(count)
for string in words
length = len(string)
letters = fn MutableArrayWithCapacity(length)
for i = 0 to length - 1
MutableArrayAddObject( letters, mid(string,i,1) )
next
MutableArraySortUsingSelector( letters, @"compare:" )
sortedString = fn ArrayComponentsJoinedByString( letters, @"" )
MutableArrayAddObject( sortedWords, sortedString )
next
// search for identical sorted words
anagrams = fn MutableDictionaryWithCapacity(0)
for i = 0 to count - 2
j = i + 1
indexes = fn ArrayIndexesOfObjectsAtIndexesPassingTest( sortedWords, fn IndexSetWithIndexesInRange( fn CFRangeMake(j,count-j) ), NSEnumerationConcurrent, @fn TestIndexes, (ptr)sortedWords[i] )
indexCount = len(indexes)
if ( indexCount > maxCount )
maxCount = indexCount
MutableDictionaryRemoveAllObjects( anagrams )
end if
if ( indexCount == maxCount )
MutableDictionarySetValueForKey( anagrams, indexes, words[i] )
end if
next
// show results
NSLogClear
for string in anagrams
NSLog(@"%@\b",string)
indexes = anagrams[string]
IndexSetEnumerateIndexes( indexes, @fn IndexSetEnumerator, (ptr)words )
NSLog(@"")
next
NSLog(@"\nCalculated in %0.6fs",fn CACurrentMediaTime - ti)
end fn
 
dispatchglobal
words = fn CFArrayCreateMutable( _kCFAllocatorDefault, 0, @kCFTypeArrayCallBacks )
fn DoIt
dispatchend
 
HandleEvents
wdUC = fn CFStringCreateMutableCopy( _kCFAllocatorDefault, 0, wd )
</syntaxhighlight>
locale = fn CFLocaleCopyCurrent()
CFStringUppercase( wdUC, locale )
CFRelease( locale )
 
{{out}}
string = fn CFStringCreateWithFormat( _kCFAllocatorDefault, NULL, @"Anagrams for %@:", wdUC )
CFRelease( wdUC )
fn ConsolePrintCFString( string )
CFRelease( string )
 
dict = fn Dictionary()
count = fn CFArrayGetCount( dict )
for index = 0 to count - 1
string = fn CFArrayGetValueAtIndex( dict, index )
if ( fn IsAnagram( wd, string ) )
CFArrayAppendValue( words, string )
end if
next
 
string = fn CFStringCreateByCombiningStrings( _kCFAllocatorDefault, words, @", " )
CFRelease( words )
fn ConsolePrintCFString( string )
CFRelease( string )
 
fn ConsolePrintCFString( @"" )
end fn
 
fn FindAnagrams( @"bade" )
fn FindAnagrams( @"abet" )
fn FindAnagrams( @"beast" )
fn FindAnagrams( @"tuba" )
fn FindAnagrams( @"mace" )
fn FindAnagrams( @"scare" )
fn FindAnagrams( @"marine" )
fn FindAnagrams( @"antler")
fn FindAnagrams( @"spare" )
fn FindAnagrams( @"leading" )
fn FindAnagrams( @"alerted" )
fn FindAnagrams( @"allergy" )
fn FindAnagrams( @"research")
fn FindAnagrams( @"hustle" )
fn FindAnagrams( @"oriental")
fn FindAnagrams( @"creationism" )
fn FindAnagrams( @"resistance" )
fn FindAnagrams( @"mountaineer" )
</pre>
Output:
<pre>
alger glare lager large regal
Anagrams for BADE:
caret carte cater crate trace
abed, bade, bead
elan lane lean lena neal
 
abel able bale bela elba
Anagrams for ABET:
evil levi live veil vile
abet, bate, beat, beta
angel angle galen glean lange
 
Anagrams for BEAST:
baste, beast, tabes
 
Anagrams for TUBA:
abut, tabu, tuba
 
Anagrams for MACE:
acme, came, mace
 
Anagrams for SCARE:
carse, caser, ceras, scare, scrae
 
Anagrams for MARINE:
marine, remain
 
Anagrams for ANTLER:
altern, antler, learnt, rental, ternal
 
Anagrams for SPARE:
asper, parse, prase, spaer, spare, spear
 
Anagrams for LEADING:
adeling, dealing, leading
 
Anagrams for ALERTED:
delater, related, treadle
 
Anagrams for ALLERGY:
allergy, gallery, largely, regally
 
Anagrams for RESEARCH:
rechaser, research, searcher
 
Anagrams for HUSTLE:
hustle, sleuth
 
Anagrams for ORIENTAL:
oriental, relation
 
Anagrams for CREATIONISM:
anisometric, creationism, miscreation, ramisection, reactionism
 
Anagrams for RESISTANCE:
resistance, senatrices
 
Calculated in 2.409008s
Anagrams for MOUNTAINEER:
enumeration, mountaineer
</pre>
 
=={{header|GAP}}==
<syntaxhighlight lang="gap">Anagrams := function(name)
local f, p, L, line, word, words, swords, res, cur, r;
words := [ ];
Line 4,347 ⟶ 4,277:
# [ "alger", "glare", "lager", "large", "regal" ],
# [ "elan", "lane", "lean", "lena", "neal" ],
# [ "evil", "levi", "live", "veil", "vile" ] ]</langsyntaxhighlight>
 
=={{header|Go}}==
<syntaxhighlight lang="go">package main
 
import (
Line 4,396 ⟶ 4,326:
func (b byteSlice) Len() int { return len(b) }
func (b byteSlice) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
func (b byteSlice) Less(i, j int) bool { return b[i] < b[j] }</syntaxhighlight>
{{out}}
<pre>
Line 4,409 ⟶ 4,339:
=={{header|Groovy}}==
This program:
<syntaxhighlight lang="groovy">def words = new URL('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').text.readLines()
def groups = words.groupBy{ it.toList().sort() }
def bigGroupSize = groups.collect{ it.value.size() }.max()
def isBigAnagram = { it.value.size() == bigGroupSize }
println groups.findAll(isBigAnagram).collect{ it.value }.collect{ it.join(' ') }.join('\n')</syntaxhighlight>
{{out}}
<pre>
Line 4,425 ⟶ 4,355:
 
=={{header|Haskell}}==
<syntaxhighlight lang="haskell">import Data.List
 
groupon f x y = f x == f y
Line 4,434 ⟶ 4,364:
wix = groupBy (groupon fst) . sort $ zip (map sort words) words
mxl = maximum $ map length wix
mapM_ (print . map snd) . filter ((==mxl).length) $ wix</syntaxhighlight>
{{out}}
<syntaxhighlight lang="haskell">*Main> main
["abel","able","bale","bela","elba"]
["caret","carte","cater","crate","trace"]
Line 4,442 ⟶ 4,372:
["alger","glare","lager","large","regal"]
["elan","lane","lean","lena","neal"]
["evil","levi","live","veil","vile"]</syntaxhighlight>
 
We can noticeably speed up the second-stage sorting and grouping by packing the String lists of Chars into the Text type:
 
<syntaxhighlight lang="haskell">import Data.List (groupBy, maximumBy, sort)
import Data.Ord (comparing)
import Data.Function (on)
Line 4,457 ⟶ 4,387:
mapM_
(print . fmap snd)
(filter ((length (maximumBy (comparing length) ws) ==) . length) ws)</syntaxhighlight>
{{Out}}
<pre>["abel","able","bale","bela","elba"]
Line 4,467 ⟶ 4,397:
 
=={{header|Icon}} and {{header|Unicon}}==
<syntaxhighlight lang="icon">procedure main(args)
every writeSet(!getLongestAnagramSets())
end
Line 4,500 ⟶ 4,430:
every (s := "") ||:= (find(c := !cset(w),w),c)
return s
end</syntaxhighlight>
Sample run:
<pre>->an <unixdict.txt
Line 4,513 ⟶ 4,443:
=={{header|J}}==
If the unixdict file has been retrieved and saved in the current directory (for example, using wget):
<syntaxhighlight lang="j"> (#~ a: ~: {:"1) (]/.~ /:~&>) <;._2 ] 1!:1 <'unixdict.txt'
+-----+-----+-----+-----+-----+
|abel |able |bale |bela |elba |
Line 4,526 ⟶ 4,456:
+-----+-----+-----+-----+-----+
|evil |levi |live |veil |vile |
+-----+-----+-----+-----+-----+</syntaxhighlight>
Explanation:
<syntaxhighlight lang="j"> <;._2 ] 1!:1 <'unixdict.txt'</syntaxhighlight>
This reads in the dictionary and produces a list of boxes. Each box contains one line (one word) from the dictionary.
<syntaxhighlight lang="j"> (]/.~ /:~&>)</syntaxhighlight>
This groups the words into rows in which anagram equivalents appear together. In other words, it creates a copy of the original list in which the characters in each box have been sorted, then organizes the contents of the original list into rows, with each row keyed by the corresponding value in the sorted list.
<syntaxhighlight lang="j"> (#~ a: ~: {:"1)</syntaxhighlight>
This selects rows whose last element is not an empty box.<br>
(In the previous step we created an array of rows of boxes. The short rows were automatically padded with empty boxes so that all rows would be the same length.)
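The grouping idea the J verbs express — bucket the words under their sorted-letter signature, then keep the fullest buckets — can be sketched language-neutrally. This is an illustrative Python sketch, not part of the J entry:

```python
from collections import defaultdict

def anagram_groups(words):
    """Bucket words under their sorted-letter signature."""
    groups = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)
    return list(groups.values())

sample = ["abel", "able", "bale", "tuba", "abut"]
groups = anagram_groups(sample)
print(max(groups, key=len))  # ['abel', 'able', 'bale']
```

The padding-with-empty-boxes step has no analogue here: a dict of lists is naturally ragged.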
Line 4,539 ⟶ 4,469:
The key to this algorithm is the sorting of the characters in each word from the dictionary. The line <tt>Arrays.sort(chars);</tt> sorts all of the letters in the word in ascending order using a built-in [[quicksort]], so all of the words in the first group in the result end up under the key "aegln" in the anagrams map.
{{works with|Java|1.5+}}
<syntaxhighlight lang="java5">import java.net.*;
import java.io.*;
import java.util.*;
Line 4,568 ⟶ 4,498:
System.out.println(ana);
}
}</syntaxhighlight>
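As a quick cross-check of the key construction described above — sorting a word's letters so that, for example, "angel" maps to "aegln" — here is a minimal Python sketch (illustrative only, not part of the Java entry):

```python
def signature(word):
    """Anagram key: the word's letters in ascending order."""
    return "".join(sorted(word))

# All five words of the largest group share one key.
assert signature("angel") == signature("glean") == "aegln"
print(signature("angle"))  # aegln
```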
{{works with|Java|1.8+}}
<syntaxhighlight lang="java5">import java.net.*;
import java.io.*;
import java.util.*;
Line 4,622 ⟶ 4,552:
;
}
}</syntaxhighlight>
{{out}}
[angel, angle, galen, glean, lange]
Line 4,634 ⟶ 4,564:
===ES5===
{{Works with|Node.js}}
<syntaxhighlight lang="javascript">var fs = require('fs');
var words = fs.readFileSync('unixdict.txt', 'UTF-8').split('\n');
 
Line 4,656 ⟶ 4,586:
}
}
}</syntaxhighlight>
 
{{Out}}
Line 4,667 ⟶ 4,597:
 
Alternative using reduce:
<syntaxhighlight lang="javascript">var fs = require('fs');
var dictionary = fs.readFileSync('unixdict.txt', 'UTF-8').split('\n');
 
Line 4,688 ⟶ 4,618:
keysSortedByFrequency.slice(0, 10).forEach(function (key) {
console.log(sortedDict[key].join(' '));
});</syntaxhighlight>
 
 
Line 4,696 ⟶ 4,626:
Using JavaScript for Automation
(A JavaScriptCore interpreter on macOS with an Automation library).
<syntaxhighlight lang="javascript">(() => {
'use strict';
 
Line 4,863 ⟶ 4,793:
// MAIN ---
return main();
})();</syntaxhighlight>
{{Out}}
<pre>[
Line 4,911 ⟶ 4,841:
 
=={{header|jq}}==
<syntaxhighlight lang="jq">def anagrams:
(reduce .[] as $word (
{table: {}, max: 0}; # state
Line 4,922 ⟶ 4,852:
# The task:
split("\n") | anagrams
</syntaxhighlight>
{{Out}}
<syntaxhighlight lang="sh">
$ jq -M -s -c -R -f anagrams.jq unixdict.txt
["abel","able","bale","bela","elba"]
Line 4,932 ⟶ 4,862:
["elan","lane","lean","lena","neal"]
["evil","levi","live","veil","vile"]
</syntaxhighlight>
 
=={{header|Jsish}}==
Adapted from the JavaScript (Node.js) entry.
<syntaxhighlight lang="javascript">/* Anagrams, in Jsish */
var datafile = 'unixdict.txt';
if (console.args[0] == '-more' && Interp.conf('maxArrayList') > 500000)
Line 4,974 ⟶ 4,904:
evil levi live veil vile
=!EXPECTEND!=
*/</syntaxhighlight>
 
{{out}}
Line 4,984 ⟶ 4,914:
=={{header|Julia}}==
{{works with|Julia|1.6}}
<syntaxhighlight lang="julia">url = "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"
wordlist = open(readlines, download(url))
 
Line 4,999 ⟶ 4,929:
end
 
println.(anagram(wordlist))</syntaxhighlight>
 
{{out}}
Line 5,010 ⟶ 4,940:
 
=={{header|K}}==
<syntaxhighlight lang="k">{x@&a=|/a:#:'x}{x g@&1<#:'g:={x@<x}'x}0::`unixdict.txt</syntaxhighlight>
 
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang="scala">import java.io.BufferedReader
import java.io.InputStreamReader
import java.net.URL
Line 5,039 ⟶ 4,969:
.filter { it.size == count }
.forEach { println(it) }
}</syntaxhighlight>
 
{{out}}
Line 5,052 ⟶ 4,982:
 
=={{header|Lasso}}==
<syntaxhighlight lang="lasso">local(
anagrams = map,
words = include_url('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt')->split('\n'),
Line 5,079 ⟶ 5,009:
 
#findings -> join('<br />\n')
</syntaxhighlight>
{{out}}
<pre>abel, able, bale, bela, elba
Line 5,089 ⟶ 5,019:
 
=={{header|Liberty BASIC}}==
<syntaxhighlight lang="lb">' count the word list
open "unixdict.txt" for input as #1
while not(eof(#1))
Line 5,159 ⟶ 5,089:
sorted$=sorted$+chrSort$(chr)
next
end function</syntaxhighlight>
 
=={{header|LiveCode}}==
LiveCode could definitely use a sort characters command. As it is this code converts the letters into items and then sorts that. I wrote a merge sort for characters, but the conversion to items, built-in-sort, conversion back to string is about 10% faster, and certainly easier to write.
 
<syntaxhighlight lang="livecode">on mouseUp
put mostCommonAnagrams(url "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
end mouseUp
Line 5,202 ⟶ 5,132:
replace comma with empty in X
return X
end itemsToChars</syntaxhighlight>
{{out}}
<pre>abel,able,bale,bela,elba
Line 5,213 ⟶ 5,143:
=={{header|Lua}}==
Lua's core library is very small and does not include built-in network functionality. If a networking library were imported, the local file in the following script could be replaced with the remote dictionary file.
<syntaxhighlight lang="lua">function sort(word)
local bytes = {word:byte(1, -1)}
table.sort(bytes)
Line 5,236 ⟶ 5,166:
print('') -- Finish with a newline.
end
end</syntaxhighlight>
{{out}}
<pre>abel able bale bela elba
Line 5,246 ⟶ 5,176:
 
=={{header|M4}}==
<syntaxhighlight lang="m4">divert(-1)
changequote(`[',`]')
define([for],
Line 5,291 ⟶ 5,221:
_max
for([x],1,_n,[ifelse(_get([count],x),_max,[_get([list],x)
])])</syntaxhighlight>
 
Memory limitations keep this program from working on the full-sized dictionary.
Line 5,309 ⟶ 5,239:
The convert call discards the hashes, which have done their job, and leaves us with a list L of anagram sets.
Finally, we just note the size of the largest sets of anagrams, and pick those off.
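The final two Maple steps — find the size of the largest set, then pick off every set of that size — correspond to this hedged Python sketch (the names are illustrative, not from the Maple code):

```python
def maximal_sets(sets):
    """Return every anagram set whose size equals the maximum (ties included)."""
    m = max(len(s) for s in sets)
    return [s for s in sets if len(s) == m]

sets = [{"marine", "remain"}, {"abel", "able", "bale"}, {"evil", "live", "vile"}]
print(maximal_sets(sets))  # both three-word sets survive
```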
<syntaxhighlight lang="maple">
words := HTTP:-Get( "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" )[2]: # ignore errors
use StringTools, ListTools in
Line 5,317 ⟶ 5,247:
m := max( map( nops, L ) ); # what is the largest set?
A := select( s -> evalb( nops( s ) = m ), L ); # get the maximal sets of anagrams
</syntaxhighlight>
The result of running this code is
<syntaxhighlight lang="maple">
A := [{"abel", "able", "bale", "bela", "elba"}, {"angel", "angle", "galen",
"glean", "lange"}, {"alger", "glare", "lager", "large", "regal"}, {"evil",
"levi", "live", "veil", "vile"}, {"caret", "carte", "cater", "crate", "trace"}
, {"elan", "lane", "lean", "lena", "neal"}];
</syntaxhighlight>
 
=={{header|Mathematica}}/{{header|Wolfram Language}}==
Download the dictionary, split the lines, split the word in characters and sort them. Now sort by those words, and find sequences of equal 'letter-hashes'. Return the longest sequences:
<syntaxhighlight lang="mathematica">list=Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt","Lines"];
text={#,StringJoin@@Sort[Characters[#]]}&/@list;
text=SortBy[text,#[[2]]&];
splits=Split[text,#1[[2]]==#2[[2]]&][[All,All,1]];
maxlen=Max[Length/@splits];
Select[splits,Length[#]==maxlen&]</syntaxhighlight>
gives back:
<syntaxhighlight lang="mathematica">{{abel,able,bale,bela,elba},{caret,carte,cater,crate,trace},{angel,angle,galen,glean,lange},{alger,glare,lager,large,regal},{elan,lane,lean,lena,neal},{evil,levi,live,veil,vile}}</syntaxhighlight>
An alternative is faster, but requires version 7 (for <code>Gather</code>):
<syntaxhighlight lang="mathematica">splits = Gather[list, Sort[Characters[#]] == Sort[Characters[#2]] &];
maxlen = Max[Length /@ splits];
Select[splits, Length[#] == maxlen &]</syntaxhighlight>
 
Or using build-in functions for sorting and gathering elements in lists it can be implimented as:
<syntaxhighlight lang="mathematica">anagramGroups = GatherBy[SortBy[GatherBy[list,Sort[Characters[#]] &],Length],Length];
anagramGroups[[-1]]</syntaxhighlight>
Also, Mathematica's own word list is available; replacing the list definition with <code>list = WordData[];</code> and forcing <code>maxlen</code> to 5 yields instead this result:
 
Line 5,365 ⟶ 5,295:
 
Also if using Mathematica 10 it gets really concise:
<syntaxhighlight lang="mathematica">list=Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt","Lines"];
MaximalBy[GatherBy[list, Sort@*Characters], Length]</syntaxhighlight>
 
=={{header|Maxima}}==
<syntaxhighlight lang="maxima">read_file(name) := block([file, s, L], file: openr(name), L: [],
while stringp(s: readline(file)) do L: cons(s, L), close(file), L)$
 
Line 5,407 ⟶ 5,337:
["angel", "angle", "galen", "glean", "lange"],
["caret", "carte", "cater", "crate", "trace"],
["abel", "able", "bale", "bela", "elba"]] */</syntaxhighlight>
 
=={{header|MiniScript}}==
This implementation is for use with the [http://miniscript.org/MiniMicro Mini Micro] version of MiniScript. The command-line version does not include an HTTP library. The script can be modified to use the file class to read a local copy of the word list.
<syntaxhighlight lang="miniscript">
wordList = http.get("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt").split(char(10))
 
makeKey = function(word)
return word.split("").sort.join("")
end function
 
wordSets = {}
for word in wordList
k = makeKey(word)
if not wordSets.hasIndex(k) then
wordSets[k] = [word]
else
wordSets[k].push(word)
end if
end for
 
counts = []
 
for wordSet in wordSets.values
counts.push([wordSet.len, wordSet])
end for
counts.sort(0, false)
 
maxCount = counts[0][0]
for count in counts
if count[0] == maxCount then print count[1]
end for
</syntaxhighlight>
{{out}}
<pre>
["abel", "able", "bale", "bela", "elba"]
["alger", "glare", "lager", "large", "regal"]
["angel", "angle", "galen", "glean", "lange"]
["caret", "carte", "cater", "crate", "trace"]
["elan", "lane", "lean", "lena", "neal"]
["evil", "levi", "live", "veil", "vile"]</pre>
 
=={{header|MUMPS}}==
<syntaxhighlight lang="mumps">Anagrams New ii,file,longest,most,sorted,word
Set file="unixdict.txt"
Open file:"r" Use file
Line 5,444 ⟶ 5,415:
Quit
 
Do Anagrams</syntaxhighlight>
<pre>
The anagrams with the most variations:
Line 5,460 ⟶ 5,431:
===Java&ndash;Like===
{{trans|Java}}
<syntaxhighlight lang="netrexx">/* NetRexx */
options replace format comments java crossref symbols nobinary
 
Line 5,520 ⟶ 5,491:
 
return
</syntaxhighlight>
{{out}}
<pre>
Line 5,534 ⟶ 5,505:
===Rexx&ndash;Like===
Implemented with more NetRexx idioms such as indexed strings, <tt>PARSE</tt> and the NetRexx &quot;built&ndash;in functions&quot;.
<syntaxhighlight lang="netrexx">/* NetRexx */
options replace format comments java crossref symbols nobinary
 
Line 5,591 ⟶ 5,562:
 
Return
</syntaxhighlight>
{{out}}
<pre>
Line 5,604 ⟶ 5,575:
 
=={{header|NewLisp}}==
<syntaxhighlight lang="newlisp">
;;; Get the words as a list, splitting at newline
(setq data
Line 5,632 ⟶ 5,603:
;;; Print out only groups of more than 4 words
(map println (filter (fn(x) (> (length x) 4)) (group-by-key)))
</syntaxhighlight>
{{out}}
<pre>
Line 5,644 ⟶ 5,615:
 
=={{header|Nim}}==
<syntaxhighlight lang="nim">
import tables, strutils, algorithm
 
Line 5,663 ⟶ 5,634:
 
main()
</syntaxhighlight>
{{out}}
<pre>
Line 5,676 ⟶ 5,647:
=={{header|Oberon-2}}==
Oxford Oberon-2
<syntaxhighlight lang="oberon2">
MODULE Anagrams;
IMPORT Files,Out,In,Strings;
Line 5,833 ⟶ 5,804:
DoProcess("unixdict.txt");
END Anagrams.
</syntaxhighlight>
{{out}}
<pre>
Line 5,845 ⟶ 5,816:
 
=={{header|Objeck}}==
<syntaxhighlight lang="objeck">use HTTP;
use Collection;
 
Line 5,885 ⟶ 5,856:
}
}
</syntaxhighlight>
{{out}}
<pre>[abel,able,bale,bela,elba]
Line 5,895 ⟶ 5,866:
 
=={{header|OCaml}}==
<syntaxhighlight lang="ocaml">let explode str =
let l = ref [] in
let n = String.length str in
Line 5,927 ⟶ 5,898:
( List.iter (Printf.printf " %s") lw;
print_newline () )
) h</syntaxhighlight>
 
=={{header|Oforth}}==
 
<syntaxhighlight lang="oforth">import: mapping
import: collect
import: quicksort
Line 5,941 ⟶ 5,912:
filter( #[ second size m == ] )
apply ( #[ second .cr ] )
;</syntaxhighlight>
 
{{out}}
Line 5,957 ⟶ 5,928:
Two versions of this, using different collection classes.
===Version 1: Directory of arrays===
<syntaxhighlight lang="oorexx">
-- This assumes you've already downloaded the following file and placed it
-- in the current directory: http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
Line 5,993 ⟶ 5,964:
say letters":" list~makestring("l", ", ")
end
</syntaxhighlight>
===Version 2: Using the relation class===
This version appears to be the fastest.
<syntaxhighlight lang="oorexx">
-- This assumes you've already downloaded the following file and placed it
-- in the current directory: http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
Line 6,043 ⟶ 6,014:
say letters":" words~makestring("l", ", ")
end
</syntaxhighlight>
Timings taken on my laptop:
<pre>
Line 6,069 ⟶ 6,040:
 
=={{header|Oz}}==
<syntaxhighlight lang="oz">declare
%% Helper function
fun {ReadLines Filename}
Line 6,097 ⟶ 6,068:
%% Display result (make sure strings are shown as string, not as number lists)
{Inspector.object configureEntry(widgetShowStrings true)}
{Inspect LargestSets}</syntaxhighlight>
 
=={{header|Pascal}}==
<syntaxhighlight lang="pascal">Program Anagrams;
 
// assumes a local file
Line 6,187 ⟶ 6,158:
AnagramList[i].Destroy;
 
end.</syntaxhighlight>
{{out}}
<pre>
Line 6,200 ⟶ 6,171:
 
=={{header|Perl}}==
<syntaxhighlight lang="perl">use List::Util 'max';
 
my @words = split "\n", do { local( @ARGV, $/ ) = ( 'unixdict.txt' ); <> };
Line 6,211 ⟶ 6,182:
for my $ana (values %anagram) {
print "@$ana\n" if @$ana == $count;
}</syntaxhighlight>
If we calculate <code>$max</code>, then we don't need the CPAN module:
<langsyntaxhighlight lang="perl">push @{$anagram{ join '' => sort split '' }}, $_ for @words;
$max > @$_ or $max = @$_ for values %anagram;
@$_ == $max and print "@$_\n" for values %anagram;</syntaxhighlight>
{{out}}
alger glare lager large regal
Line 6,226 ⟶ 6,197:
=={{header|Phix}}==
copied from Euphoria and cleaned up slightly
<!--<syntaxhighlight lang="phix">-->
<span style="color: #004080;">integer</span> <span style="color: #000000;">fn</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">open</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"demo/unixdict.txt"</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"r"</span><span style="color: #0000FF;">)</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">words</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{},</span> <span style="color: #000000;">anagrams</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{},</span> <span style="color: #000000;">last</span><span style="color: #0000FF;">=</span><span style="color: #008000;">""</span><span style="color: #0000FF;">,</span> <span style="color: #000000;">letters</span>
Line 6,263 ⟶ 6,234:
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">for</span>
<!--</syntaxhighlight>-->
{{out}}
<pre>
Line 6,276 ⟶ 6,247:
 
=={{header|Phixmonti}}==
<syntaxhighlight lang="phixmonti">include ..\Utilitys.pmt
 
"unixdict.txt" "r" fopen var f
Line 6,316 ⟶ 6,287:
len for
get len maxlen == if ? else drop endif
endfor</syntaxhighlight>
 
Other solution
 
<syntaxhighlight lang="phixmonti">include ..\Utilitys.pmt
 
( )
Line 6,355 ⟶ 6,326:
len for
get len maxlen == if ? else drop endif
endfor</syntaxhighlight>
 
{{out}}<pre>["abel", "able", "bale", "bela", "elba"]
Line 6,367 ⟶ 6,338:
 
=={{header|PHP}}==
<syntaxhighlight lang="php"><?php
$words = explode("\n", file_get_contents('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'));
foreach ($words as $word) {
Line 6,379 ⟶ 6,350:
if (count($ana) == $best)
print_r($ana);
?></syntaxhighlight>
 
=={{header|Picat}}==
Using a foreach loop:
<syntaxhighlight lang="picat">go =>
Dict = new_map(),
foreach(Line in read_file_lines("unixdict.txt"))
Line 6,394 ⟶ 6,365:
println(Value)
end,
nl.</syntaxhighlight>
 
{{out}}
<pre>maxLen = 5
[alger,glare,lager,large,regal]
[evil,levi,live,veil,vile]
Line 6,407 ⟶ 6,377:
 
The same idea, but a shorter version that (mis)uses list comprehensions.
<syntaxhighlight lang="picat">go2 =>
M = new_map(),
_ = [_:W in read_file_lines("unixdict.txt"),S=sort(W),M.put(S,M.get(S,"")++[W])],
X = max([V.len : _K=V in M]),
println(maxLen=X),
[V : _=V in M, V.len=X].println.</syntaxhighlight>
 
{{out}}
<pre>maxLen = 5
[[evil,levi,live,veil,vile],[abel,able,bale,bela,elba],[caret,carte,cater,crate,trace],[angel,angle,galen,glean,lange],[elan,lane,lean,lena,neal],[alger,glare,lager,large,regal]]</pre>
 
=={{header|PicoLisp}}==
A straight-forward implementation using 'group' takes 48 seconds on a 1.7 GHz Pentium:
<syntaxhighlight lang="picolisp">(flip
(by length sort
(by '((L) (sort (copy L))) group
(in "unixdict.txt" (make (while (line) (link @)))) ) ) )</syntaxhighlight>
Using a binary tree with the 'idx' function, it takes only 0.42 seconds on the same machine, a factor of 100 faster:
<syntaxhighlight lang="picolisp">(let Words NIL
(in "unixdict.txt"
(while (line)
Line 6,434 ⟶ 6,402:
(push (car @) Word)
(set Key (list Word)) ) ) ) )
(flip (by length sort (mapcar val (idx 'Words)))) )</syntaxhighlight>
{{out}}
<pre>-> (("vile" "veil" "live" "levi" "evil") ("trace" "crate" "cater" "carte" "caret
Line 6,442 ⟶ 6,410:
 
=={{header|PL/I}}==
<syntaxhighlight lang="pl/i">/* Search a list of words, finding those having the same letters. */
 
word_test: proc options (main);
Line 6,508 ⟶ 6,476:
end is_anagram;
 
end word_test;</syntaxhighlight>
{{out}}
<pre>
Line 6,517 ⟶ 6,485:
 
=={{header|Pointless}}==
<syntaxhighlight lang="pointless">output =
readFileLines("unixdict.txt")
|> reduce(logWord, {})
Line 6,530 ⟶ 6,498:
getMax(groups) =
groups |> filter(g => length(g) == maxLength)
where maxLength = groups |> map(length) |> maximum</syntaxhighlight>
 
{{out}}
Line 6,542 ⟶ 6,510:
=={{header|PowerShell}}==
{{works with|PowerShell|2}}
<syntaxhighlight lang="powershell">$c = New-Object Net.WebClient
$words = -split ($c.DownloadString('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'))
$top_anagrams = $words `
Line 6,554 ⟶ 6,522:
| Select-Object -First 1
 
$top_anagrams.Group | ForEach-Object { $_.Group -join ', ' }</syntaxhighlight>
{{out}}
<pre>abel, able, bale, bela, elba
Line 6,563 ⟶ 6,531:
evil, levi, live, veil, vile</pre>
Another way with more .Net methods is quite a different style, but drops the runtime from 2 minutes to 1.5 seconds:
<syntaxhighlight lang="powershell">$Timer = [System.Diagnostics.Stopwatch]::StartNew()
 
$uri = 'http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'
Line 6,605 ⟶ 6,573:
[string]::join('', $entry.Value)
}
}</syntaxhighlight>
 
=={{header|Processing}}==
<syntaxhighlight lang="processing">import java.util.Map;
 
void setup() {
Line 6,633 ⟶ 6,601:
}
}
}</syntaxhighlight>
 
{{out}}
Line 6,645 ⟶ 6,613:
=={{header|Prolog}}==
{{works with|SWI-Prolog|5.10.0}}
<syntaxhighlight lang="prolog">:- use_module(library( http/http_open )).
 
anagrams:-
Line 6,693 ⟶ 6,661:
length(V1, L1),
length(V2, L2),
( L1 < L2 -> R = >; L1 > L2 -> R = <; compare(R, K1, K2)).</syntaxhighlight>
The result is
<pre>[abel,able,bale,bela,elba]
Line 6,705 ⟶ 6,673:
=={{header|PureBasic}}==
{{works with|PureBasic|4.4}}
<syntaxhighlight lang="purebasic">InitNetwork() ;
OpenConsole()
Line 6,780 ⟶ 6,748:
PrintN("Press any key"): Repeat: Until Inkey() <> ""
EndIf
EndIf</syntaxhighlight>
{{out}}
<pre>evil, levi, live, veil, vile
Line 6,792 ⟶ 6,760:
===Python 3.X Using defaultdict===
Python 3.2 shell input (IDLE)
<syntaxhighlight lang="python">>>> import urllib.request
>>> from collections import defaultdict
>>> words = urllib.request.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
Line 6,803 ⟶ 6,771:
>>> for ana in anagram.values():
if len(ana) >= count:
print ([x.decode() for x in ana])</syntaxhighlight>
 
===Python 2.7 version===
Python 2.7 shell input (IDLE)
<syntaxhighlight lang="python">>>> import urllib
>>> from collections import defaultdict
>>> words = urllib.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
Line 6,831 ⟶ 6,799:
>>> count
5
>>></syntaxhighlight>
 
===Python: Using groupby===
{{trans|Haskell}}
{{works with|Python|2.6}} sort and then group using groupby()
<syntaxhighlight lang="python">>>> import urllib, itertools
>>> words = urllib.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
>>> len(words)
Line 6,857 ⟶ 6,825:
>>> count
5
>>></syntaxhighlight>
 
 
Or, disaggregating, speeding up a bit by avoiding the slightly expensive use of ''sorted'' as a key, updating for Python 3, and using a local ''unixdict.txt'':
{{Works with|Python|3.7}}
<syntaxhighlight lang="python">'''Largest anagram groups found in list of words.'''
 
from os.path import expanduser
Line 6,972 ⟶ 6,940:
# MAIN ---
if __name__ == '__main__':
    main()</syntaxhighlight>
{{Out}}
<pre>caret carte cater crate creat creta react recta trace
Line 6,979 ⟶ 6,947:
 
=={{header|QB64}}==
<syntaxhighlight lang="qb64">
$CHECKING:OFF
' Warning: Keep the above line commented out until you know your newly edited code works.
Line 7,129 ⟶ 7,097:
IF i < Finish THEN QSort i, Finish
END SUB
</syntaxhighlight>
 
'''2nd solution (by Steve McNeill):'''
<syntaxhighlight lang="qb64">
$CHECKING:OFF
SCREEN _NEWIMAGE(640, 480, 32)
Line 7,278 ⟶ 7,246:
LOOP UNTIL gap = 1 AND swapped = 0
END SUB
</syntaxhighlight>
 
'''Output:'''
<syntaxhighlight lang="qb64">
LOOPER: 7134 executions from start to finish, in one second.
Note, this is including disk access for new data each time.
Line 7,293 ⟶ 7,261:
caret, trace, crate, carte, cater
bale, abel, able, elba, bela
</syntaxhighlight>
 
=={{header|Quackery}}==
 
<syntaxhighlight lang="quackery"> $ "rosetta/unixdict.txt" sharefile drop nest$
[] swap witheach
[ dup sort
else drop ]
drop cr ]
drop</syntaxhighlight>
 
{{out}}
 
=={{header|R}}==
<syntaxhighlight lang="r">words <- readLines("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
word_group <- sapply(
strsplit(words, split=""), # this will split all words to single letters...
"angel, angle, galen, glean, lange" "alger, glare, lager, large, regal"
aeln eilv
"elan, lane, lean, lena, neal" "evil, levi, live, veil, vile" </syntaxhighlight>
 
=={{header|Racket}}==
<syntaxhighlight lang="racket">
#lang racket
 
 
(get-maxes (hash-words (get-lines "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")))
</syntaxhighlight>
{{out}}
<pre>
 
{{works with|Rakudo|2016.08}}
<syntaxhighlight lang="raku" line>my @anagrams = 'unixdict.txt'.IO.words.classify(*.comb.sort.join).values;
my $max = @anagrams».elems.max;
 
.put for @anagrams.grep(*.elems == $max);</syntaxhighlight>
 
{{out}}
 
{{works with|Rakudo|2016.08}}
<syntaxhighlight lang="raku" line>.put for # print each element of the array made this way:
'unixdict.txt'.IO.words # load words from file
.classify(*.comb.sort.join) # group by common anagram
.classify(*.value.elems) # group by number of anagrams in a group
.max(*.key).value # get the group with highest number of anagrams
.map(*.value) # get all groups of anagrams in the group just selected</syntaxhighlight>
 
=={{header|RapidQ}}==
<syntaxhighlight lang="vb">
dim x as integer, y as integer
dim SortX as integer
End
 
</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Rascal}}==
<syntaxhighlight lang="rascal">import Prelude;
 
list[str] OrderedRep(str word){
longest = max([size(group) | group <- range(AnagramMap)]);
return [AnagramMap[rep]| rep <- AnagramMap, size(AnagramMap[rep]) == longest];
}</syntaxhighlight>
Returns:
<syntaxhighlight lang="rascal">value: [
{"glean","galen","lange","angle","angel"},
{"glare","lager","regal","large","alger"},
{"able","bale","abel","bela","elba"},
{"levi","live","vile","evil","veil"}
]</syntaxhighlight>
 
=={{header|Red}}==
<syntaxhighlight lang="red">Red []
 
m: make map! [] 25000
]
foreach v values-of m [ if maxx = length? v [print v] ]
</syntaxhighlight>
{{out}}
<pre>abel able bale bela elba
This version doesn't assume that the dictionary is in alphabetical order, &nbsp; nor does it assume the
<br>words are in any specific case &nbsp; (lower/upper/mixed).
<syntaxhighlight lang="rexx">/*REXX program finds words with the largest set of anagrams (of the same size). */
iFID= 'unixdict.txt' /*the dictionary input File IDentifier.*/
$=; !.=; ww=0; uw=0; most=0 /*initialize a bunch of REXX variables.*/
/*reassemble word with sorted letters. */
return @.a || @.b || @.c || @.d || @.e || @.f||@.g||@.h||@.i||@.j||@.k||@.l||@.m||,
       @.n || @.o || @.p || @.q || @.r || @.s||@.t||@.u||@.v||@.w||@.x||@.y||@.z</syntaxhighlight>
Programming note: &nbsp; the long (wide) assignment for &nbsp; &nbsp; '''return @.a||'''... &nbsp; &nbsp; could've been coded as an elegant &nbsp; '''do''' &nbsp; loop instead of hardcoding 26 letters,<br>but since the dictionary (word list) is rather large, the spelled-out (unrolled) method was used for speed.
 
===version 1.2, optimized===
This optimized version eliminates the &nbsp; '''sortA''' &nbsp; subroutine and puts that subroutine's code in-line.
<syntaxhighlight lang="rexx">/*REXX program finds words with the largest set of anagrams (of the same size). */
iFID= 'unixdict.txt' /*the dictionary input File IDentifier.*/
$=; !.=; ww=0; uw=0; most=0 /*initialize a bunch of REXX variables.*/
/*reassemble word with sorted letters. */
return @.a || @.b || @.c || @.d || @.e || @.f||@.g||@.h||@.i||@.j||@.k||@.l||@.m||,
       @.n || @.o || @.p || @.q || @.r || @.s||@.t||@.u||@.v||@.w||@.x||@.y||@.z</syntaxhighlight>
{{out|output|text=&nbsp; is the same as REXX version 1.1}}
 
===annotated version using &nbsp; PARSE===
(This algorithm actually utilizes a &nbsp; ''bin'' &nbsp; sort, &nbsp; one bin for each Latin letter.)
<syntaxhighlight lang="rexx">u= 'Halloween' /*word to be sorted by (Latin) letter.*/
upper u /*fast method to uppercase a variable. */
/*another: u = translate(u) */
/*Note: the ? is prefixed to the letter to avoid */
/*collisions with other REXX one-character variables.*/
say 'z=' z</syntaxhighlight>
{{out|output|:}}
<pre>
 
===annotated version using a &nbsp; DO &nbsp; loop===
<syntaxhighlight lang="rexx">u= 'Halloween' /*word to be sorted by (Latin) letter.*/
upper u /*fast method to uppercase a variable. */
L=length(u) /*get the length of the word (in bytes)*/
_.?n||_.?o||_.?p||_.?q||_.?r||_.?s||_.?t||_.?u||_.?v||_.?w||_.?x||_.?y||_.?z
 
say 'z=' z</syntaxhighlight>
{{out|output|:}}
<pre>
 
===version 2===
<syntaxhighlight lang="rexx">/*REXX program finds words with the largest set of anagrams (same size)
* 07.08.2013 Walter Pachl
* sorta for word compression courtesy Gerard Schildberger,
End
Return c.a||c.b||c.c||c.d||c.e||c.f||c.g||c.h||c.i||c.j||c.k||c.l||,
       c.m||c.n||c.o||c.p||c.q||c.r||c.s||c.t||c.u||c.v||c.w||c.x||c.y||c.z</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Ring}}==
<syntaxhighlight lang="ring">
# Project : Anagrams
 
end
return cnt
</syntaxhighlight>
Output:
<pre>
 
=={{header|Ruby}}==
<syntaxhighlight lang="ruby">require 'open-uri'
 
anagram = Hash.new {|hash, key| hash[key] = []} # map sorted chars to anagrams
p ana
end
end</syntaxhighlight>
{{out}}
<pre>
 
Short version (with lexical ordered result).
<syntaxhighlight lang="ruby">require 'open-uri'
 
anagrams = open('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'){|f| f.read.split.group_by{|w| w.each_char.sort} }
anagrams.values.group_by(&:size).max.last.each{|group| puts group.join(", ") }
</syntaxhighlight>
{{Out}}
<pre>
 
=={{header|Run BASIC}}==
<syntaxhighlight lang="runbasic">sqliteconnect #mem, ":memory:"
mem$ = "CREATE TABLE anti(gram,ordr);
CREATE INDEX ord ON anti(ordr)"
print
next i
end</syntaxhighlight>
<pre>
abel able bale bela elba
Unicode is hard, so the solution depends on what you consider an anagram: two strings that have the same bytes, the same codepoints, or the same graphemes. The first two are easily accomplished in Rust proper, but the last requires an external library. Graphemes are probably the most correct choice, but they are also the least efficient, since graphemes are variable-size and thus require a heap allocation per grapheme.
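The distinction can be seen with a small illustrative sketch (shown in Python purely for brevity; the example strings are assumptions, not taken from the solution below): a precomposed accented letter and its combining-character spelling differ at the codepoint level until they are normalized, even though they are the same grapheme.

```python
import unicodedata

s1 = "caf\u00e9"    # 'é' as a single precomposed codepoint
s2 = "cafe\u0301"   # 'e' followed by U+0301 COMBINING ACUTE ACCENT

# Not anagrams at the codepoint (or byte) level: the multisets differ.
assert sorted(s1) != sorted(s2)

# After NFC normalization the combining pair composes into 'é',
# so a sorted-codepoint anagram key now treats them as equal.
nfc = lambda s: unicodedata.normalize("NFC", s)
assert sorted(nfc(s1)) == sorted(nfc(s2))
```

For unixdict.txt, which is plain ASCII, all three definitions coincide.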
 
<syntaxhighlight lang="rust">use std::collections::HashMap;
use std::fs::File;
use std::io::{BufRead,BufReader};
}
}
}</syntaxhighlight>
{{out}}
<pre>
If we assume an ASCII string, we can map each character to a prime number and multiply these together to create a number which uniquely maps to each anagram.
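The idea can be sketched briefly (an illustrative Python rendering, not part of the Rust solution; Python's arbitrary-precision integers sidestep the overflow question that a fixed-width product raises):

```python
# First 26 primes, one per letter 'a'..'z'.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
          43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]

def anagram_key(word):
    """Product of letter primes: by unique factorization, two words get
    the same key exactly when their letter multisets are equal."""
    key = 1
    for ch in word:
        key *= PRIMES[ord(ch) - ord('a')]
    return key

assert anagram_key("trace") == anagram_key("crate")   # anagrams share a key
assert anagram_key("trace") != anagram_key("tracer")  # extra letter, new key
```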
 
<syntaxhighlight lang="rust">use std::collections::HashMap;
use std::path::Path;
use std::io::{self, BufRead, BufReader};
}
Ok(map.into_iter().map(|(_, entry)| entry).collect())
}</syntaxhighlight>
 
=={{header|Scala}}==
<syntaxhighlight lang="scala">val src = io.Source fromURL "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"
val vls = src.getLines.toList.groupBy(_.sorted).values
val max = vls.map(_.size).max
vls filter (_.size == max) map (_ mkString " ") mkString "\n"</syntaxhighlight>
{{out}}
<pre>
----
Another take:
<syntaxhighlight lang="scala">Source
.fromURL("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt").getLines.toList
.groupBy(_.sorted).values
.groupBy(_.size).maxBy(_._1)._2
.map(_.mkString("\t"))
.foreach(println)</syntaxhighlight>
{{out}}
<pre>
Uses two SRFI libraries: SRFI 125 for hash tables and SRFI 132 for sorting.
 
<syntaxhighlight lang="scheme">
(import (scheme base)
(scheme char)
(map (lambda (grp) (list-sort string<? grp))
(largest-groups (read-groups)))))
</syntaxhighlight>
 
{{out}}
 
=={{header|Seed7}}==
<syntaxhighlight lang="seed7">$ include "seed7_05.s7i";
include "gethttp.s7i";
include "strifile.s7i";
var integer: maxLength is 0;
begin
    dictFile := openStriFile(getHttp("wiki.puzzlers.org/pub/wordlists/unixdict.txt"));
while hasNext(dictFile) do
readln(dictFile, word);
end if;
end for;
end func;</syntaxhighlight>
 
{{out}}
 
=={{header|SETL}}==
<syntaxhighlight lang="setl">h := open('unixdict.txt', "r");
anagrams := {};
while not eof(h) loop
end loop;
return A;
end procedure;</syntaxhighlight>
{{out}}
<pre>{abel able bale bela elba}
 
=={{header|Sidef}}==
<syntaxhighlight lang="ruby">func main(file) {
file.open_r(\var fh, \var err) ->
|| die "Can't open file `#{file}' for reading: #{err}\n";
}
 
main(%f'/tmp/unixdict.txt');</syntaxhighlight>
{{out}}
<pre>alger glare lager large regal
 
=={{header|Simula}}==
<syntaxhighlight lang="simula">COMMENT COMPILE WITH
$ cim -m64 anagrams-hashmap.sim
;
 
END
</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Smalltalk}}==
<syntaxhighlight lang="smalltalk">list:= (FillInTheBlank request: 'myMessageBoxTitle') subStrings: String crlf.
dict:= Dictionary new.
list do: [:val|
add: val.
].
sorted:=dict asSortedCollection: [:a :b| a size > b size].</syntaxhighlight>
Documentation:
<pre>
{{works with|Smalltalk/X}}
instead of asking for the strings, read the file:
<syntaxhighlight lang="smalltalk">d := Dictionary new.
'unixdict.txt' asFilename
readingLinesDo:[:eachWord |
sortBySelector:#size)
reverse
do:[:s | s printCR]</syntaxhighlight>
{{out}}
<pre>
...</pre>
not sure if getting the dictionary via http is part of the task; if so, replace the file-reading with:
<syntaxhighlight lang="smalltalk">'http://wiki.puzzlers.org/pub/wordlists/unixdict.txt' asURI contents asCollectionOfLines do:[:eachWord | ...</syntaxhighlight>
 
=={{header|SNOBOL4}}==
{{works with|Macro Spitbol}}
Note: unixdict.txt is passed in locally via STDIN. Newlines must be converted for Win/DOS environment.
<syntaxhighlight lang="snobol4">* # Sort letters of word
define('sortw(str)a,i,j') :(sortw_end)
sortw a = array(size(str))
L3 j = j + 1; key = kv<j,1>; val = kv<j,2> :f(end)
output = eq(countw(val),max) key ': ' val :(L3)
end</syntaxhighlight>
{{out}}
<pre>abel: abel able bale bela elba
 
=={{header|Stata}}==
<syntaxhighlight lang="stata">import delimited http://wiki.puzzlers.org/pub/wordlists/unixdict.txt, clear
mata
a=st_sdata(.,.)
reshape wide v1, i(k) j(group) string
drop k
list, noobs noheader</syntaxhighlight>
 
'''Output'''
 
=={{header|SuperCollider}}==
<syntaxhighlight lang="supercollider">(
var text, words, sorted, dict = IdentityDictionary.new, findMax;
File.use("unixdict.txt".resolveRelative, "r", { |f| text = f.readAllString });
};
findMax.(dict)
)</syntaxhighlight>
 
Answers:
<syntaxhighlight lang="supercollider">[ [ angel, angle, galen, glean, lange ], [ caret, carte, cater, crate, trace ], [ elan, lane, lean, lena, neal ], [ evil, levi, live, veil, vile ], [ alger, glare, lager, large, regal ] ]</syntaxhighlight>
 
=={{header|Swift}}==
{{works with|Swift 2.0}}
 
<syntaxhighlight lang="swift">import Foundation
 
let wordsURL = NSURL(string: "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")!
print("set \(i): \(thislist.sort())")
}
</syntaxhighlight>
 
{{out}}
 
=={{header|Tcl}}==
<syntaxhighlight lang="tcl">package require Tcl 8.5
package require http
 
puts $anagrams($key)
}
}</syntaxhighlight>
{{out}}
<pre>evil levi live veil vile
 
=={{header|Transd}}==
Works with Transd v0.43.
 
<syntaxhighlight lang="scheme">#lang transd
 
MainModule: {
_start: (λ
(with fs FileStream() words String()
(open-r fs "/mnt/proj/tmp/unixdict.txt")
(textin fs words)
( -|
)
))
}</syntaxhighlight>{{out}}
<pre>
[[abel, able, bale, bela, elba],
 
=={{header|TUSCRIPT}}==
<syntaxhighlight lang="tuscript">$$ MODE TUSCRIPT,{}
requestdata = REQUEST ("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
 
PRINT cs," ",f,": ",a
ENDLOOP
ENDCOMPILE</syntaxhighlight>
{{out}}
<pre>
Process substitutions eliminate the need for command pipelines.
 
<syntaxhighlight lang="bash">http_get_body() {
local host=$1
local uri=$2
done
 
printf "%s\n" "${maxwords[@]}"</syntaxhighlight>
 
{{output}}
The algorithm is to group the words together that are made from the same unordered lists of letters, then collect the groups together that have the same number of words in
them, and then show the collection associated with the highest number.
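Those three steps can be rendered as an illustrative Python sketch (not Ursala; the toy word list is an assumption for demonstration):

```python
from collections import defaultdict

def largest_anagram_sets(words):
    # 1. Group words by their sorted (unordered) letters.
    groups = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)
    # 2. Collect the groups by how many words each contains.
    by_size = defaultdict(list)
    for g in groups.values():
        by_size[len(g)].append(g)
    # 3. Return the collection with the highest word count.
    return by_size[max(by_size)]

sets = largest_anagram_sets(["abel", "able", "bale", "on", "no", "cat"])
assert sets == [["abel", "able", "bale"]]
```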
<syntaxhighlight lang="ursala">#import std
 
#show+
 
anagrams = mat` * leql$^&h eql|=@rK2tFlSS ^(~&,-<&)* unixdict_dot_txt</syntaxhighlight>
{{out}}
<pre>
 
=={{header|VBA}}==
<syntaxhighlight lang="vb">
Option Explicit
 
If (mini < j) Then Call SortTwoDimArray(myArr, mini, j, Colonne)
If (i < Maxi) Then Call SortTwoDimArray(myArr, i, Maxi, Colonne)
End Sub</syntaxhighlight>
{{out}}
<pre>25104 words, in the dictionary
 
Time to go : 2,464844 seconds.</pre>
 
=={{header|VBScript}}==
A little convoluted, uses a dictionary and a recordset...
<syntaxhighlight lang="vb">
Const adInteger = 3
Const adVarChar = 200
 
function charcnt(s,ch)
charcnt=0
for i=1 to len(s)
if mid(s,i,1)=ch then charcnt=charcnt+1
next
end function
 
set fso=createobject("Scripting.Filesystemobject")
dim a(122)
 
sfn=WScript.ScriptFullName
sfn= Left(sfn, InStrRev(sfn, "\"))
set f=fso.opentextfile(sfn & "unixdict.txt",1)
 
'add words to dictionary, keyed by their sorted letters
set d=createobject("Scripting.Dictionary")
 
while not f.AtEndOfStream
erase a :cnt=0
s=trim(f.readline)
'tally chars
for i=1 to len(s)
n=asc(mid(s,i,1))
a(n)=a(n)+1
next
'build the anagram
k=""
for i= 48 to 122
if a(i) then k=k & string(a(i),chr(i))
next
'add to dict
if d.exists(k) then
b=d(k)
d(k)=b & " " & s
else
d(k)=s
end if
wend
 
'copy dictionary to recordset to be able to sort it. Add nr of items as a new field
Set rs = CreateObject("ADODB.Recordset")
rs.Fields.Append "anag", adVarChar, 30
rs.Fields.Append "items", adInteger
rs.Fields.Append "words", adVarChar, 200
rs.open
for each k in d.keys
rs.addnew
rs("anag")=k
s=d(k)
rs("words")=s
rs("items")=charcnt(s," ")+1
rs.update
next
d.removeall
 
'do the query
rs.sort="items DESC, anag ASC"
rs.movefirst
it=rs("items")
while rs("items")=it
wscript.echo rs("items") & " (" &rs("anag") & ") " & rs("words")
rs.movenext
wend
rs.close
</syntaxhighlight>
The output:
<pre>
5 (abel) abel able bale bela elba
5 (acert) caret carte cater crate trace
5 (aegln) angel angle galen glean lange
5 (aeglr) alger glare lager large regal
5 (aeln) elan lane lean lena neal
5 (eilv) evil levi live veil vile
</pre>
 
=={{header|Vedit macro language}}==
 
The word list is expected to be in the same directory as the script.
<syntaxhighlight lang="vedit">File_Open("|(PATH_ONLY)\unixdict.txt")
 
Repeat(ALL) {
Ins_Char(#8, OVERWRITE)
}
return</syntaxhighlight>
{{out}}
<pre>
evil levi live veil vile
</pre>
 
=={{header|Visual Basic .NET}}==
<syntaxhighlight lang="vbnet">Imports System.IO
Imports System.Collections.ObjectModel
 
End Function
 
End Module</syntaxhighlight>
{{out}}
<PRE>
[EILV] evil, levi, live, veil, vile
</PRE>
 
=={{header|V (Vlang)}}==
{{trans|Wren}}
<syntaxhighlight lang="v (vlang)">import os
 
fn main(){
words := os.read_lines('unixdict.txt')?
 
mut m := map[string][]string{}
mut ma := 0
for word in words {
mut letters := word.split('')
letters.sort()
sorted_word := letters.join('')
if sorted_word in m {
m[sorted_word] << word
} else {
m[sorted_word] = [word]
}
if m[sorted_word].len > ma {
ma = m[sorted_word].len
}
}
for _, a in m {
if a.len == ma {
println(a)
}
}
}</syntaxhighlight>
 
{{out}}
<pre>
['abel', 'able', 'bale', 'bela', 'elba']
['alger', 'glare', 'lager', 'large', 'regal']
['angel', 'angle', 'galen', 'glean', 'lange']
['caret', 'carte', 'cater', 'crate', 'trace']
['elan', 'lane', 'lean', 'lena', 'neal']
['evil', 'levi', 'live', 'veil', 'vile']
</pre>
 
=={{header|Wren}}==
{{libheader|Wren-sort}}
<syntaxhighlight lang="wren">import "io" for File
import "./sort" for Sort
 
var words = File.read("unixdict.txt").split("\n").map { |w| w.trim() }
for (key in wordMap.keys) {
if (wordMap[key].count == most) System.print(wordMap[key])
}</syntaxhighlight>
 
{{out}}
 
=={{header|Yabasic}}==
<syntaxhighlight lang="yabasic">filename$ = "unixdict.txt"
maxw = 0 : c = 0 : dimens(c)
i = 0
d(j,p) = c
end if
end sub</syntaxhighlight>
 
=={{header|zkl}}==
<syntaxhighlight lang="zkl">File("unixdict.txt").read(*) // dictionary file to blob, copied from web
// blob to dictionary: key is word "fuzzed", values are anagram words
.pump(Void,T(fcn(w,d){
"%d:%s: %s".fmt(v.len(),zz.strip(),
v.apply("strip").concat(","))
});</syntaxhighlight>
{{out}}
<pre>
Line 9,431 ⟶ 9,520:
</pre>
In the case where it is desirable to get the dictionary from the web, use this code:
<syntaxhighlight lang="zkl">URL:="http://wiki.puzzlers.org/pub/wordlists/unixdict.txt";
var ZC=Import("zklCurl");
unixdict:=ZC().get(URL); //--> T(Data,bytes of header, bytes of trailer)
unixdict=unixdict[0].del(0,unixdict[1]); // remove HTTP header
File("unixdict.txt","w").write(unixdict);</syntaxhighlight>
 
{{omit from|6502 Assembly|unixdict.txt is much larger than the CPU's address space.}}
{{omit from|8080 Assembly|See 6502 Assembly.}}
{{omit from|PARI/GP|No real capacity for string manipulation}}
{{omit from|Z80 Assembly|See 6502 Assembly.}}