Anagrams: Difference between revisions
When two or more words are composed of the same characters, but in a different order, they are called [[wp:Anagram|anagrams]].
;Task
Using the word list at http://wiki.puzzlers.org/pub/wordlists/unixdict.txt,
<br>find the sets of words that share the same characters that contain the most words in them.
{{Related tasks/Word plays}}
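Nearly all of the solutions below share the same idea: use each word's sorted letters as a grouping key, collect the words per key, then keep the groups of maximal size. As a minimal sketch of that approach (shown here in Python; the <code>largest_anagram_sets</code> name and the tiny in-line word list are illustrative stand-ins for reading unixdict.txt):

```python
from collections import defaultdict

def largest_anagram_sets(words):
    # Group words by their sorted letters, e.g. "able" -> key "abel".
    groups = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)
    # Keep only the groups of maximal size.
    best = max(len(g) for g in groups.values())
    return [g for g in groups.values() if len(g) == best]

# A few words from unixdict.txt; the real task reads the whole file.
words = ["abel", "able", "bale", "bela", "elba", "cat", "act"]
print(largest_anagram_sets(words))  # [['abel', 'able', 'bale', 'bela', 'elba']]
```

Run against the full unixdict.txt, this approach yields the six five-word sets shown in the outputs throughout this page.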
=={{header|11l}}==
{{trans|Python}}
<syntaxhighlight lang="11l">DefaultDict[String, Array[String]] anagram
L(word) File(‘unixdict.txt’).read().split("\n")
anagram[sorted(word
V count = max(anagram.values().map(ana -> ana.len))
=={{header|8th}}==
<syntaxhighlight lang="8th">
\
\ anagrams.8th
=={{header|AArch64 Assembly}}==
{{works with|as|Raspberry Pi 3B version Buster 64 bits <br> or android 64 bits with application Termux }}
<syntaxhighlight lang="asm">
/* ARM assembly AARCH64 Raspberry PI 3B */
/* program anagram64.s */
</pre>
=={{header|ABAP}}==
<syntaxhighlight lang="abap">
define update_progress.
call function 'SAPGUI_PROGRESS_INDICATOR'
=={{header|Ada}}==
<syntaxhighlight lang="ada">with Ada.Text_IO; use Ada.Text_IO;
with Ada.Containers.Indefinite_Ordered_Maps;
=={{header|ALGOL 68}}==
{{works with|ALGOL 68G|Any - tested with release 2.8.3.win32}} Uses the "read" PRAGMA of Algol 68 G to include the associative array code from the [[Associative_array/Iteration]] task.
<syntaxhighlight lang="algol68"># find longest list(s) of words that are anagrams in a list of words #
# use the associative array in the Associate array/iteration task #
PR read "aArray.a68" PR
alger|glare|lager|large|regal
caret|carte|cater|crate|trace
</pre>
=={{header|Amazing Hopper}}==
<syntaxhighlight lang="c">
#include <basico.h>
#define MAX_LINE 30
algoritmo
fd=0, filas=0
word={}, 2da columna={}
old_word="",new_word=""
dimensionar (1,2) matriz de cadenas 'result'
pos=0
token.separador'""'
abrir para leer("basica/unixdict.txt",fd)
iterar mientras ' no es fin de archivo (fd) '
usando 'MAX_LINE', leer línea desde(fd),
---copiar en 'old_word'---, separar para 'word '
word, ---retener--- ordenar esto,
encadenar en 'new_word'
matriz.buscar en tabla (1,new_word,result)
copiar en 'pos'
si ' es negativo? '
new_word,old_word, pegar fila en 'result'
sino
#( result[pos,2] = cat(result[pos,2],cat(",",old_word) ) )
fin si
reiterar
cerrar archivo(fd)
guardar 'filas de (result)' en 'filas'
#( 2da columna = result[2:filas, 2] )
fijar separador '","'
tomar '2da columna'
contar tokens en '2da columna' ---retener resultado,
obtener máximo valor,es mayor o igual?, replicar esto
compactar esto
fijar separador 'NL', luego imprime todo
terminar
</syntaxhighlight>
{{out}}
<pre>
abel,able,bale,bela,elba
alger,glare,lager,large,regal
angel,angle,galen,glean,lange
caret,carte,cater,crate,trace
elan,lane,lean,lena,neal
evil,levi,live,veil,vile
</pre>
=={{header|APL}}==
This is a rough translation of the J version; intermediate values are kept and verb trains are not used, for clarity of data flow.
<syntaxhighlight lang="apl">
anagrams←{
tie←⍵ ⎕NTIE 0
'''Example:'''
<syntaxhighlight lang="apl">
⎕SH'wget http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'
]display anagrams 'unixdict.txt'
=={{header|AppleScript}}==
<syntaxhighlight lang="applescript">use AppleScript version "2.3.1" -- OS X 10.9 (Mavericks) or later.
use sorter : script ¬
"Custom Iterative Ternary Merge Sort" -- <www.macscripter.net/t/timsort-and-nigsort/71383/3>
use scripting additions
on join(lst, delim)
set astid to AppleScript's text item delimiters
set AppleScript's text item delimiters to delim
set txt to lst as text
set AppleScript's text item delimiters to astid
return txt
end join
on largestAnagramGroups(listOfWords)
script o
property wordList : listOfWords
property
property
property
on judgeGroup(i, j)
set groupSize to j - i + 1
if (groupSize < largestGroupSize) then -- Most likely.
else if (groupSize = largestGroupSize) then -- Next most likely.
set end of largestGroupRanges to {i, j}
else -- Largest group so far.
set largestGroupRanges to {{i, j}}
set largestGroupSize to groupSize
end if
end judgeGroup
on isGreater(a, b)
return a's beginning > b's beginning
end isGreater
end script
set wordCount to (count o's wordList)
ignoring case
--
set
tell sorter to sort(chrs, 1, -1, {})
set
end repeat
-- Sort the list to group its contents and echo the moves in the original word list.
tell sorter to sort(o's groupingTexts, 1, wordCount, {slave:{o's wordList}})
-- Find the list range(s) of the longest run(s) of equal grouping texts
set i to 1
set currentText to beginning of o's
repeat with j from 2 to
set thisText to
if (thisText is not currentText) then
set currentText to thisText
set i to j
end if
end repeat
--
set output to {}
repeat with thisRange in o's
set {i, j} to thisRange
set thisGroup to o's wordList's items i thru j
tell sorter to sort(thisGroup, 1, -1, {}) -- Not necessary with unixdict.txt. But hey.
set end of output to thisGroup
end repeat
-- As a final flourish, sort the
tell sorter to sort(output, 1, -1, {comparer:o})
end ignoring
return
end largestAnagramGroups
local wordFile, wordList
set wordFile to ((path to desktop as text) & "www.rosettacode.org:unixdict.txt") as «class furl»
set wordList to paragraphs of (read wordFile as «class utf8»)
return largestAnagramGroups(wordList)</syntaxhighlight>
{{output}}
<syntaxhighlight lang="applescript">{{"abel", "able", "bale", "bela", "elba"}, {"alger", "glare", "lager", "large", "regal"}, {"angel", "angle", "galen", "glean", "lange"}, {"caret", "carte", "cater", "crate", "trace"}, {"elan", "lane", "lean", "lena", "neal"}, {"evil", "levi", "live", "veil", "vile"}}</syntaxhighlight>
=={{header|ARM Assembly}}==
{{works with|as|Raspberry Pi <br> or android 32 bits with application Termux}}
<syntaxhighlight lang="asm">
/* ARM assembly Raspberry PI */
/* program anagram.s */
=={{header|Arturo}}==
<syntaxhighlight lang="rebol">wordset: map read.lines relative "unixdict.txt" => strip
anagrams: #[]
=={{header|AutoHotkey}}==
The following code should work for both AHK 1.0.* and 1.1.* versions:
<syntaxhighlight lang="autohotkey">
Loop, Parse, Contents, % "`n", % "`r"
{ ; parsing each line of the file we just read
=={{header|AWK}}==
<syntaxhighlight lang="awk">
# syntax: GAWK -f JUMBLEA.AWK UNIXDICT.TXT
{ for (i=1; i<=NF; i++) {
Alternatively, non-POSIX version:
{{works with|gawk}}
<syntaxhighlight lang="awk">#!/bin/gawk -f
{ patsplit($0, chars, ".")
}</syntaxhighlight>
=={{header|BASIC}}==
==={{header|BaCon}}===
<syntaxhighlight lang="freebasic">OPTION COLLAPSE TRUE
DECLARE idx$ ASSOC STRING
</pre>
==={{header|BBC BASIC}}===
{{works with|BBC BASIC for Windows}}
<syntaxhighlight lang="bbcbasic"> INSTALL @lib$+"SORTLIB"
sort% = FN_sortinit(0,0)
=={{header|BQN}}==
<syntaxhighlight lang="bqn">words ← •FLines "unixdict.txt"
•Show¨{𝕩/˜(⊢=⌈´)≠¨𝕩} (⊐∧¨)⊸⊔ words</syntaxhighlight>
<syntaxhighlight lang="bqn">⟨ "abel" "able" "bale" "bela" "elba" ⟩
⟨ "alger" "glare" "lager" "large" "regal" ⟩
⟨ "angel" "angle" "galen" "glean" "lange" ⟩
=={{header|Bracmat}}==
This solution makes extensive use of Bracmat's computer algebra mechanisms. A trick is needed to handle words that are merely repetitions of a single letter, such as <code>iii</code>. That's why the variable <code>sum</code> isn't initialised with <code>0</code>, but with a non-number, in this case the empty string. Also, the correct handling of the characters 0-9 needs a trick so that they are not numerically added: they are prepended with a non-digit, an <code>N</code> in this case. After completely traversing the word list, the program writes a file <code>product.txt</code> that can be visually inspected.
The program is not fast. (Minutes rather than seconds.)
<syntaxhighlight lang="bracmat">( get$("unixdict.txt",STR):?list
& 1:?product
& whl
=={{header|C}}==
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <string.h>
</pre>
A much shorter version with no fancy data structures:
<syntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <string.h>
=={{header|C sharp|C#}}==
<syntaxhighlight lang="csharp">using System;
using System.IO;
using System.Linq;
=={{header|C++}}==
<syntaxhighlight lang="cpp">#include <iostream>
#include <fstream>
#include <string>
=={{header|Clojure}}==
Assume ''wordfile'' is the path of the local file containing the words. This code makes a map (''groups'') whose keys are sorted letters and values are lists of the key's anagrams. It then determines the length of the longest list, and prints out all the lists of that length.
<syntaxhighlight lang="clojure">(require '[clojure.java.io :as io])
(def groups
(println wordlist))</syntaxhighlight>
<syntaxhighlight lang="clojure">
(->> (slurp "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
clojure.string/split-lines
=={{header|CLU}}==
<syntaxhighlight lang="clu">% Keep a list of anagrams
anagrams = cluster is new, add, largest_size, sets
anagram_set = struct[letters: string, words: array[string]]
=={{header|COBOL}}==
Tested with GnuCOBOL 2.0. ALLWORDS output display trimmed for width.
<syntaxhighlight lang="cobol">
*> wget http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
*> or visit https://sourceforge.net/projects/souptonuts/files
=={{header|CoffeeScript}}==
<syntaxhighlight lang="coffeescript">http = require 'http'
show_large_anagram_sets = (word_lst) ->
get_word_list show_large_anagram_sets</syntaxhighlight>
{{out}}
<syntaxhighlight lang="coffeescript">> coffee anagrams.coffee
[ 'abel', 'able', 'bale', 'bela', 'elba' ]
[ 'alger', 'glare', 'lager', 'large', 'regal' ]
=={{header|Common Lisp}}==
{{libheader|DRAKMA}} to retrieve the wordlist.
<syntaxhighlight lang="lisp">(defun anagrams (&optional (url "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"))
(let ((words (drakma:http-request url :want-stream t))
(wordsets (make-hash-table :test 'equalp)))
finally (return (values maxwordsets maxcount)))))</syntaxhighlight>
Evaluating
<syntaxhighlight lang="lisp">(multiple-value-bind (wordsets count) (anagrams)
(pprint wordsets)
(print count))</syntaxhighlight>
5</pre>
Another method, assuming file is local:
<syntaxhighlight lang="lisp">(defun read-words (file)
(with-open-file (stream file)
(loop with w = "" while w collect (setf w (read-line stream nil)))))
=={{header|Component Pascal}}==
BlackBox Component Builder
<syntaxhighlight lang="oberon2">
MODULE BbtAnagrams;
IMPORT StdLog,Files,Strings,Args;
=={{header|Crystal}}==
{{trans|Ruby}}
<syntaxhighlight lang="ruby">require "http/client"
response = HTTP::Client.get("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
=={{header|D}}==
===Short Functional Version===
<syntaxhighlight lang="d">import std.stdio, std.algorithm, std.string, std.exception, std.file;
void main() {
===Faster Version===
Less safe, same output.
<syntaxhighlight lang="d">void main() {
import std.stdio, std.algorithm, std.file, std.string;
=={{header|Delphi}}==
{{libheader| System.Classes}}
{{libheader| System.Diagnostics}}
<syntaxhighlight lang="delphi">
program AnagramsTest;
=={{header|E}}==
<syntaxhighlight lang="e">println("Downloading...")
when (def wordText := <http://wiki.puzzlers.org/pub/wordlists/unixdict.txt> <- getText()) -> {
def words := wordText.split("\n")
=={{header|EchoLisp}}==
For a change, we will use the French dictionary - '''(lib 'dico.fr)''' - delivered with EchoLisp.
<syntaxhighlight lang="scheme">
(require 'struct)
(require 'hash)
</syntaxhighlight>
{{out}}
<syntaxhighlight lang="scheme">
(length mots-français)
→ 209315
=={{header|Eiffel}}==
<syntaxhighlight lang="eiffel">
class
ANAGRAMS
=={{header|Ela}}==
{{trans|Haskell}}
<syntaxhighlight lang="ela">open monad io list string
groupon f x y = f x == f y
=={{header|Elena}}==
ELENA
<syntaxhighlight lang="elena">import system'routines;
import system'calendar;
import system'io;
import extensions'routines;
import extensions'text;
import algorithms;
extension op
auto dictionary := new Map<string,object>();
File.assign("unixdict.txt").forEachLine::(word)
{
var key := word.normalized();
};
item.append
};
dictionary.Values
.
.top
.forEach::(pair){ console.printLine(pair.Item2) };
var end := now;
{{out}}
<pre>
abel,able,bale,bela,elba
alger,glare,lager,large,regal
evil,levi,live,veil,vile
elan,lane,lean,lena,neal
caret,carte,cater,crate,trace
angel,angle,galen,glean,lange
are,ear,era,rae
dare,dear,erda,read
diet,edit,tide,tied
cereus,recuse,rescue,secure
ames,mesa,same,seam
emit,item,mite,time
amen,mane,mean,name
enol,leon,lone,noel
esprit,priest,sprite,stripe
beard,bread,debar,debra
hare,hear,hera,rhea
apt,pat,pta,tap
aires,aries,arise,raise
keats,skate,stake,steak
</pre>
=={{header|Elixir}}==
<syntaxhighlight lang="elixir">
def find(file) do
File.read!(file)
The same output, using <code>File.Stream!</code> to generate <code>tuples</code> containing the word and its sorted value as <code>strings</code>.
<syntaxhighlight lang="elixir">
|> Stream.map(&String.strip &1)
|> Enum.group_by(&String.codepoints(&1) |> Enum.sort)
=={{header|Erlang}}==
The function fetch/2 is used to solve [[Anagrams/Deranged_anagrams]]. Please keep backwards compatibility when editing. Or update the other module, too.
<syntaxhighlight lang="erlang">-module(anagrams).
-compile(export_all).
=={{header|Euphoria}}==
<syntaxhighlight lang="euphoria">include sort.e
function compare_keys(sequence a, sequence b)
=={{header|F Sharp|F#}}==
Read the lines in the dictionary, group by the sorted letters in each word, find the length of the longest sets of anagrams, extract the longest sequences of words sharing the same letters (i.e. anagrams):
<syntaxhighlight lang="fsharp">let xss = Seq.groupBy (Array.ofSeq >> Array.sort) (System.IO.File.ReadAllLines "unixdict.txt")
Seq.map snd xss |> Seq.filter (Seq.length >> ( = ) (Seq.map (snd >> Seq.length) xss |> Seq.max))</syntaxhighlight>
Note that it is necessary to convert the sorted letters in each word from sequences to arrays because the groupBy function uses the default comparison and sequences do not compare structurally (but arrays do in F#).
Takes 0.8s to return:
<syntaxhighlight lang="fsharp">val it : string seq seq =
seq
[seq ["abel"; "able"; "bale"; "bela"; "elba"];
=={{header|Fantom}}==
<syntaxhighlight lang="fantom">class Main
{
// take given word and return a string rearranging characters in order
=={{header|Fortran}}==
This program:
<syntaxhighlight lang="fortran">!***************************************************************************************
module anagram_routines
!***************************************************************************************
=={{header|FBSL}}==
'''A little bit of cheating: literatim re-implementation of C solution in FBSL's Dynamic C layer.'''
<syntaxhighlight lang="qbasic">
DIM gtc = GetTickCount()
=={{header|Factor}}==
<syntaxhighlight lang="factor"> "resource:unixdict.txt" utf8 file-lines
[ [ natural-sort >string ] keep ] { } map>assoc sort-keys
[ [ first ] compare +eq+ = ] monotonic-split
dup 0 [ length max ] reduce '[ length _ = ] filter [ values ] map .</syntaxhighlight>
<syntaxhighlight lang="factor">{
{ "abel" "able" "bale" "bela" "elba" }
{ "caret" "carte" "cater" "crate" "trace" }
=={{header|FreeBASIC}}==
<syntaxhighlight lang="freebasic">' FB 1.05.0 Win64
Type IndexedWord
=={{header|Frink}}==
<syntaxhighlight lang="frink">
d = new dict
for w = lines["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"]
=={{header|FutureBasic}}==
Applications in the latest versions of Macintosh OS X 10.x are sandboxed and require setting special permissions to link to internet files. For illustration purposes here, this code uses the internal Unix dictionary file available at /usr/share/dict/words.
<syntaxhighlight lang="futurebasic">
include "NSLog.incl"
local fn Dictionary as CFArrayRef
CFURLRef url = fn URLFileURLWithPath( @"/usr/share/dict/words" )
CFStringRef string = fn StringWithContentsOfURL( url, NSUTF8StringEncoding, NULL )
end fn = fn StringComponentsSeparatedByString( string, @"\n" )
local fn IsAnagram( wrd1 as CFStringRef, wrd2 as CFStringRef ) as BOOL
NSUInteger i
BOOL result = NO
if ( len(wrd1) != len(wrd2) ) then exit fn
if ( fn StringCompare( wrd1, wrd2 ) == NSOrderedSame ) then exit fn
CFMutableArrayRef mutArr1 = fn MutableArrayWithCapacity(0) : CFMutableArrayRef mutArr2 = fn MutableArrayWithCapacity(0)
for i = 0 to len(wrd1) - 1
MutableArrayAddObject( mutArr1, fn StringWithFormat( @"%C", fn StringCharacterAtIndex( wrd1, i ) ) )
MutableArrayAddObject( mutArr2, fn StringWithFormat( @"%C", fn StringCharacterAtIndex( wrd2, i ) ) )
next
SortDescriptorRef sd = fn SortDescriptorWithKeyAndSelector( NULL, YES, @"caseInsensitiveCompare:" )
if ( fn ArrayIsEqual( fn ArraySortedArrayUsingDescriptors( mutArr1, @[sd] ), fn ArraySortedArrayUsingDescriptors( mutArr2, @[sd] ) ) ) then result = YES
end fn = result
void local fn FindAnagramsInDictionary( wd as CFStringRef, dict as CFArrayRef )
CFStringRef string, temp
CFMutableArrayRef words = fn MutableArrayWithCapacity(0)
if ( fn IsAnagram( lcase( wd ), temp ) ) then MutableArrayAddObject( words, temp )
next
string = fn ArrayComponentsJoinedByString( words, @", " )
NSLogSetTextColor( fn ColorText ) : NSLog( @"Anagrams for %@:", lcase(wd) )
NSLogSetTextColor( fn ColorSystemBlue ) : NSLog(@"%@\n",string)
end fn
void local fn DoIt
CFArrayRef dictionary = fn Dictionary
dispatchglobal
CFStringRef string
CFArrayRef words = @[@"bade",@"abet",@"beast",@"tuba",@"mace",@"scare",@"marine",@"antler",@"spare",@"leading",@"alerted",@"allergy",@"research",@"hustle",@"oriental",@"creationism",@"resistance",@"mountaineer"]
for string in words
fn FindAnagramsInDictionary( string, dictionary )
next
dispatchend
end fn
fn DoIt
HandleEvents
</syntaxhighlight>
Output:
</pre>
This version fulfils the task description.
<syntaxhighlight lang="futurebasic">
include "NSLog.incl"
#plist NSAppTransportSecurity @{NSAllowsArbitraryLoads:YES}
local fn Dictionary as CFArrayRef
CFURLRef url = fn URLWithString( @"http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" )
CFStringRef string = fn StringWithContentsOfURL( url, NSUTF8StringEncoding, NULL )
end fn = fn StringComponentsSeparatedByCharactersInSet( string, fn CharacterSetNewlineSet )
local fn TestIndexes( array as CFArrayRef, obj as CFTypeRef, index as NSUInteger, stp as ^BOOL, userData as ptr ) as BOOL
end fn = fn StringIsEqual( obj, userData )
void local fn IndexSetEnumerator( set as IndexSetRef, index as NSUInteger, stp as ^BOOL, userData as ptr )
NSLog(@"\t%@\b",fn ArrayObjectAtIndex( userData, index ))
end fn
void local fn DoIt
CFArrayRef words
CFStringRef string, sortedString
IndexSetRef indexes
long i, j, count, indexCount, maxCount = 0, length
CFMutableDictionaryRef anagrams
CFTimeInterval ti
ti = fn CACurrentMediaTime
NSLog(@"Searching...")
// create another word list with sorted letters
words = fn Dictionary
count = len(words)
sortedWords = fn MutableArrayWithCapacity(count)
for string in words
length = len(string)
letters = fn MutableArrayWithCapacity(length)
for i = 0 to length - 1
MutableArrayAddObject( letters, mid(string,i,1) )
next
MutableArraySortUsingSelector( letters, @"compare:" )
sortedString = fn ArrayComponentsJoinedByString( letters, @"" )
MutableArrayAddObject( sortedWords, sortedString )
next
// search for identical sorted words
anagrams = fn MutableDictionaryWithCapacity(0)
for i = 0 to count - 2
j = i + 1
indexes = fn ArrayIndexesOfObjectsAtIndexesPassingTest( sortedWords, fn IndexSetWithIndexesInRange( fn CFRangeMake(j,count-j) ), NSEnumerationConcurrent, @fn TestIndexes, (ptr)sortedWords[i] )
indexCount = len(indexes)
if ( indexCount > maxCount )
maxCount = indexCount
MutableDictionaryRemoveAllObjects( anagrams )
end if
if ( indexCount == maxCount )
MutableDictionarySetValueForKey( anagrams, indexes, words[i] )
end if
next
// show results
NSLogClear
for string in anagrams
NSLog(@"%@\b",string)
indexes = anagrams[string]
IndexSetEnumerateIndexes( indexes, @fn IndexSetEnumerator, (ptr)words )
NSLog(@"")
next
NSLog(@"\nCalculated in %0.6fs",fn CACurrentMediaTime - ti)
end fn
dispatchglobal
fn DoIt
dispatchend
HandleEvents
</syntaxhighlight>
{{out}}
<pre>
alger glare lager large regal
caret carte cater crate trace
elan lane lean lena neal
abel able bale bela elba
evil levi live veil vile
angel angle galen glean lange
Calculated in 2.409008s
</pre>
=={{header|GAP}}==
<syntaxhighlight lang="gap">Anagrams := function(name)
local f, p, L, line, word, words, swords, res, cur, r;
words := [ ];
=={{header|Go}}==
<syntaxhighlight lang="go">package main
import (
=={{header|Groovy}}==
This program:
<syntaxhighlight lang="groovy">def words = new URL('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').text.readLines()
def groups = words.groupBy{ it.toList().sort() }
def bigGroupSize = groups.collect{ it.value.size() }.max()
=={{header|Haskell}}==
<syntaxhighlight lang="haskell">import Data.List
groupon f x y = f x == f y
mapM_ (print . map snd) . filter ((==mxl).length) $ wix</syntaxhighlight>
{{out}}
<syntaxhighlight lang="haskell">*Main> main
["abel","able","bale","bela","elba"]
["caret","carte","cater","crate","trace"]
and we can noticeably speed up the second stage sorting and grouping by packing the String lists of Chars to the Text type:
<syntaxhighlight lang="haskell">import Data.List (groupBy, maximumBy, sort)
import Data.Ord (comparing)
import Data.Function (on)
=={{header|Icon}} and {{header|Unicon}}==
<syntaxhighlight lang="icon">procedure main(args)
every writeSet(!getLongestAnagramSets())
end
=={{header|J}}==
If the unixdict file has been retrieved and saved in the current directory (for example, using wget):
<syntaxhighlight lang="j"> (#~ a: ~: {:"1) (]/.~ /:~&>) <;._2 ] 1!:1 <'unixdict.txt'
+-----+-----+-----+-----+-----+
|abel |able |bale |bela |elba |
+-----+-----+-----+-----+-----+</syntaxhighlight>
Explanation:
<syntaxhighlight lang="j"><;._2 ] 1!:1 <'unixdict.txt'</syntaxhighlight>
This reads in the dictionary and produces a list of boxes. Each box contains one line (one word) from the dictionary.
<syntaxhighlight lang="j">(]/.~ /:~&>)</syntaxhighlight>
This groups the words into rows where anagram equivalents appear in the same row. In other words, creates a copy of the original list where the characters contained in each box have been sorted. Then it organizes the contents of the original list in rows, with each new row keyed by the values in the new list.
<syntaxhighlight lang="j">(#~ a: ~: {:"1)</syntaxhighlight>
This selects rows whose last element is not an empty box.<br>
(In the previous step we created an array of rows of boxes. The short rows were automatically padded with empty boxes so that all rows would be the same length.)
=={{header|Java}}==
The key to this algorithm is the sorting of the characters in each word from the dictionary. The line <tt>Arrays.sort(chars);</tt> sorts all of the letters in the word in ascending order using a built-in [[quicksort]], so all of the words in the first group in the result end up under the key "aegln" in the anagrams map.
{{works with|Java|1.5+}}
<syntaxhighlight lang="java5">import java.net.*;
import java.io.*;
import java.util.*;
}</syntaxhighlight>
{{works with|Java|1.8+}}
<syntaxhighlight lang="java5">import java.net.*;
import java.io.*;
import java.util.*;
=={{header|JavaScript}}==
===ES5===
{{Works with|Node.js}}
<syntaxhighlight lang="javascript">var fs = require('fs');
var words = fs.readFileSync('unixdict.txt', 'UTF-8').split('\n');
Alternative using reduce:
<syntaxhighlight lang="javascript">var fs = require('fs');
var dictionary = fs.readFileSync('unixdict.txt', 'UTF-8').split('\n');
Using JavaScript for Automation
(A JavaScriptCore interpreter on macOS with an Automation library).
<syntaxhighlight lang="javascript">(() => {
'use strict';
=={{header|jq}}==
<syntaxhighlight lang="jq">def anagrams:
(reduce .[] as $word (
{table: {}, max: 0}; # state
</syntaxhighlight>
{{Out}}
<syntaxhighlight lang="sh">
$ jq -M -s -c -R -f anagrams.jq unixdict.txt
["abel","able","bale","bela","elba"]
=={{header|Jsish}}==
From the JavaScript (Node.js) entry.
<syntaxhighlight lang="javascript">/* Anagrams, in Jsish */
var datafile = 'unixdict.txt';
if (console.args[0] == '-more' && Interp.conf('maxArrayList') > 500000)
=={{header|Julia}}==
{{works with|Julia|1.6}}
<syntaxhighlight lang="julia">url = "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"
wordlist = open(readlines, download(url))
=={{header|K}}==
<syntaxhighlight lang="k">{x@&a=|/a:#:'x}{x g@&1<#:'g:={x@<x}'x}0::`unixdict.txt</syntaxhighlight>
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang="scala">import java.io.BufferedReader
import java.io.InputStreamReader
import java.net.URL
=={{header|Lasso}}==
<syntaxhighlight lang="lasso">local(
anagrams = map,
words = include_url('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt')->split('\n'),
=={{header|Liberty BASIC}}==
<syntaxhighlight lang="lb">' count the word list
open "unixdict.txt" for input as #1
while not(eof(#1))
=={{header|LiveCode}}==
LiveCode could definitely use a sort characters command. As it is, this code converts the letters into items and then sorts those. I wrote a merge sort for characters, but the conversion to items, built-in sort, and conversion back to a string is about 10% faster, and certainly easier to write.
<syntaxhighlight lang="livecode">
put mostCommonAnagrams(url "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
end mouseUp
=={{header|Lua}}==
Lua's core library is very small and does not include built-in network functionality. If a networking library were imported, the local file in the following script could be replaced with the remote dictionary file.
<syntaxhighlight lang="lua">function sort(word)
local bytes = {word:byte(1, -1)}
table.sort(bytes)
=={{header|M4}}==
<syntaxhighlight lang="m4">
changequote(`[',`]')
define([for],
=={{header|Maple}}==
The convert call discards the hashes, which have done their job, and leaves us with a list L of anagram sets.
Finally, we just note the size of the largest sets of anagrams, and pick those off.
<syntaxhighlight lang="maple">
words := HTTP:-Get( "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" )[2]: # ignore errors
use StringTools, ListTools in
</syntaxhighlight>
The result of running this code is
<syntaxhighlight lang="maple">
A := [{"abel", "able", "bale", "bela", "elba"}, {"angel", "angle", "galen",
"glean", "lange"}, {"alger", "glare", "lager", "large", "regal"}, {"evil",
=={{header|Mathematica}}/{{header|Wolfram Language}}==
Download the dictionary, split the lines, split each word into characters and sort them. Now sort by those sorted words, and find sequences of equal 'letter-hashes'. Return the longest sequences:
<syntaxhighlight lang="mathematica">
text={#,StringJoin@@Sort[Characters[#]]}&/@list;
text=SortBy[text,#[[2]]&];
Select[splits,Length[#]==maxlen&]</syntaxhighlight>
gives back:
<syntaxhighlight lang="mathematica">
An alternative is faster, but requires version 7 (for <code>Gather</code>):
<syntaxhighlight lang="mathematica">
maxlen = Max[Length /@ splits];
Select[splits, Length[#] == maxlen &]</syntaxhighlight>
Or, using built-in functions for sorting and gathering elements in lists, it can be implemented as:
<syntaxhighlight lang="mathematica">
anagramGroups[[-1]]</syntaxhighlight>
Also, Mathematica's own word list is available; replacing the list definition with <code>list = WordData[];</code> and forcing <code>maxlen</code> to 5 yields instead this result:
Also if using Mathematica 10 it gets really concise:
<syntaxhighlight lang="mathematica">
MaximalBy[GatherBy[list, Sort@*Characters], Length]</syntaxhighlight>
=={{header|Maxima}}==
<syntaxhighlight lang="maxima">read_file(name) := block([file, s, L], file: openr(name), L: [],
while stringp(s: readline(file)) do L: cons(s, L), close(file), L)$
["caret", "carte", "cater", "crate", "trace"],
["abel", "able", "bale", "bela", "elba"]] */</syntaxhighlight>
=={{header|MiniScript}}==
This implementation is for use with the [http://miniscript.org/MiniMicro Mini Micro] version of MiniScript. The command-line version does not include a HTTP library. The script can be modified to use the file class to read a local copy of the word list.
<syntaxhighlight lang="miniscript">
wordList = http.get("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt").split(char(10))
makeKey = function(word)
return word.split("").sort.join("")
end function
wordSets = {}
for word in wordList
k = makeKey(word)
if not wordSets.hasIndex(k) then
wordSets[k] = [word]
else
wordSets[k].push(word)
end if
end for
counts = []
for wordSet in wordSets.values
counts.push([wordSet.len, wordSet])
end for
counts.sort(0, false)
maxCount = counts[0][0]
for count in counts
if count[0] == maxCount then print count[1]
end for
</syntaxhighlight>
{{out}}
<pre>
["abel", "able", "bale", "bela", "elba"]
["alger", "glare", "lager", "large", "regal"]
["angel", "angle", "galen", "glean", "lange"]
["caret", "carte", "cater", "crate", "trace"]
["elan", "lane", "lean", "lena", "neal"]
["evil", "levi", "live", "veil", "vile"]</pre>
=={{header|MUMPS}}==
<syntaxhighlight lang="mumps">
Set file="unixdict.txt"
Open file:"r" Use file
=={{header|NetRexx}}==
===Java–Like===
{{trans|Java}}
<syntaxhighlight lang="netrexx">
options replace format comments java crossref symbols nobinary
===Rexx–Like===
Implemented with more NetRexx idioms such as indexed strings, <tt>PARSE</tt> and the NetRexx "built–in functions".
<syntaxhighlight lang="netrexx">
options replace format comments java crossref symbols nobinary
=={{header|NewLisp}}==
<syntaxhighlight lang="newlisp">
;;; Get the words as a list, splitting at newline
(setq data
=={{header|Nim}}==
<syntaxhighlight lang="nim">
import tables, strutils, algorithm
=={{header|Oberon-2}}==
Oxford Oberon-2
<syntaxhighlight lang="oberon2">
MODULE Anagrams;
IMPORT Files,Out,In,Strings;
=={{header|Objeck}}==
<syntaxhighlight lang="objeck">use HTTP;
use Collection;
=={{header|OCaml}}==
<syntaxhighlight lang="ocaml">let explode str =
let l = ref [] in
let n = String.length str in
=={{header|Oforth}}==
<syntaxhighlight lang="oforth">
import: collect
import: quicksort
=={{header|ooRexx}}==
Two versions of this, using different collection classes.
===Version 1: Directory of arrays===
<syntaxhighlight lang="oorexx">
-- This assumes you've already downloaded the following file and placed it
-- in the current directory: http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
===Version 2: Using the relation class===
This version appears to be the fastest.
<syntaxhighlight lang="oorexx">
-- This assumes you've already downloaded the following file and placed it
-- in the current directory: http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
=={{header|Oz}}==
<syntaxhighlight lang="oz">declare
%% Helper function
fun {ReadLines Filename}
=={{header|Pascal}}==
<syntaxhighlight lang="pascal">Program Anagrams;
// assumes a local file
=={{header|Perl}}==
<syntaxhighlight lang="perl">use List::Util 'max';
my @words = split "\n", do { local( @ARGV, $/ ) = ( 'unixdict.txt' ); <> };
}</syntaxhighlight>
If we calculate <code>$max</code>, then we don't need the CPAN module:
<syntaxhighlight lang="perl">push @{$anagram{ join '' => sort split '' }}, $_ for @words;
$max > @$_ or $max = @$_ for values %anagram;
@$_ == $max and print "@$_\n" for values %anagram;</syntaxhighlight>
=={{header|Phix}}==
copied from Euphoria and cleaned up slightly
<!--<syntaxhighlight lang="phix">-->
<span style="color: #004080;">integer</span> <span style="color: #000000;">fn</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">open</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"demo/unixdict.txt"</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"r"</span><span style="color: #0000FF;">)</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">words</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{},</span> <span style="color: #000000;">anagrams</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{},</span> <span style="color: #000000;">last</span><span style="color: #0000FF;">=</span><span style="color: #008000;">""</span><span style="color: #0000FF;">,</span> <span style="color: #000000;">letters</span>
=={{header|Phixmonti}}==
<syntaxhighlight lang="phixmonti">
"unixdict.txt" "r" fopen var f
Another solution:
<syntaxhighlight lang="phixmonti">
( )
=={{header|PHP}}==
<syntaxhighlight lang="php"><?php
$words = explode("\n", file_get_contents('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'));
foreach ($words as $word) {
=={{header|Picat}}==
Using foreach loop:
<syntaxhighlight lang="picat">
Dict = new_map(),
foreach(Line in read_file_lines("unixdict.txt"))
Same idea, but shorter version by (mis)using list comprehensions.
<syntaxhighlight lang="picat">
M = new_map(),
_ = [_:W in read_file_lines("unixdict.txt"),S=sort(W),M.put(S,M.get(S,"")++[W])],
=={{header|PicoLisp}}==
A straight-forward implementation using 'group' takes 48 seconds on a 1.7 GHz Pentium:
<syntaxhighlight lang="picolisp">
(by length sort
(by '((L) (sort (copy L))) group
(in "unixdict.txt" (make (while (line) (link @)))) ) ) )</syntaxhighlight>
Using a binary tree with the 'idx' function, it takes only 0.42 seconds on the same machine, a factor of 100 faster:
<syntaxhighlight lang="picolisp">
(in "unixdict.txt"
(while (line)
=={{header|PL/I}}==
<syntaxhighlight lang="pli">
word_test: proc options (main);
=={{header|Pointless}}==
<syntaxhighlight lang="pointless">output =
readFileLines("unixdict.txt")
|> reduce(logWord, {})
=={{header|PowerShell}}==
{{works with|PowerShell|2}}
<syntaxhighlight lang="powershell">$c = New-Object Net.WebClient
$words = -split ($c.DownloadString('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'))
$top_anagrams = $words `
evil, levi, live, veil, vile</pre>
Another way with more .Net methods is quite a different style, but drops the runtime from 2 minutes to 1.5 seconds:
<syntaxhighlight lang="powershell">$Timer = [System.Diagnostics.Stopwatch]::StartNew()
$uri = 'http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'
=={{header|Processing}}==
<syntaxhighlight lang="processing">import java.util.Map;
void setup() {
=={{header|Prolog}}==
{{works with|SWI-Prolog|5.10.0}}
<syntaxhighlight lang="prolog">
anagrams:-
=={{header|PureBasic}}==
{{works with|PureBasic|4.4}}
<syntaxhighlight lang="purebasic">
OpenConsole()
=={{header|Python}}==
===Python 3.X Using defaultdict===
Python 3.2 shell input (IDLE)
<syntaxhighlight lang="python">>>> import urllib.request
>>> from collections import defaultdict
>>> words = urllib.request.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
===Python 2.7 version===
Python 2.7 shell input (IDLE)
<syntaxhighlight lang="python">>>> import urllib
>>> from collections import defaultdict
>>> words = urllib.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
{{trans|Haskell}}
{{works with|Python|2.6}} sort and then group using groupby()
<syntaxhighlight lang="python">>>> import urllib, itertools
>>> words = urllib.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
>>> len(words)
Or, disaggregating, speeding up a bit by avoiding the slightly expensive use of ''sorted'' as a key, updating for Python 3, and using a local ''unixdict.txt'':
{{Works with|Python|3.7}}
<syntaxhighlight lang="python">'''Largest anagram groups found in list of words.'''
from os.path import expanduser
=={{header|QB64}}==
<syntaxhighlight lang="qb64">
$CHECKING:OFF
' Warning: Keep the above line commented out until you know your newly edited code works.
'''2nd solution (by Steve McNeill):'''
<syntaxhighlight lang="qb64">
$CHECKING:OFF
SCREEN _NEWIMAGE(640, 480, 32)
'''Output:'''
<syntaxhighlight lang="text">
LOOPER: 7134 executions from start to finish, in one second.
Note, this is including disk access for new data each time.
=={{header|Quackery}}==
<syntaxhighlight lang="quackery">
[] swap witheach
[ dup sort
=={{header|R}}==
<syntaxhighlight lang="rsplus">
word_group <- sapply(
strsplit(words, split=""), # this will split all words to single letters...
=={{header|Racket}}==
<syntaxhighlight lang="racket">
#lang racket
=={{header|Raku}}==
{{works with|Rakudo|2016.08}}
<syntaxhighlight lang="raku">
my $max = @anagrams».elems.max;
{{works with|Rakudo|2016.08}}
<syntaxhighlight lang="raku">
'unixdict.txt'.IO.words # load words from file
.classify(*.comb.sort.join) # group by common anagram
=={{header|RapidQ}}==
<syntaxhighlight lang="vb">
dim x as integer, y as integer
dim SortX as integer
=={{header|Rascal}}==
<syntaxhighlight lang="rascal">import Prelude;
list[str] OrderedRep(str word){
}</syntaxhighlight>
Returns:
<syntaxhighlight lang="rascal">value: [
{"glean","galen","lange","angle","angel"},
{"glare","lager","regal","large","alger"},
=={{header|Red}}==
<syntaxhighlight lang="red">
m: make map! [] 25000
=={{header|REXX}}==
===version 1===
This version doesn't assume that the dictionary is in alphabetical order, nor does it assume the
<br>words are in any specific case (lower/upper/mixed).
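Accepting words in any case comes down to normalizing case before building the anagram key. A minimal Python sketch of that idea (the word list here is made-up sample data, not unixdict.txt):

```python
from collections import defaultdict

# Group words into anagram sets case-insensitively:
# lowercase first, then sort the letters to form the key.
words = ["Abel", "able", "BALE", "bela", "Elba", "cat", "act"]

groups = defaultdict(list)
for w in words:
    key = "".join(sorted(w.lower()))
    groups[key].append(w)

largest = max(groups.values(), key=len)
print(largest)  # ['Abel', 'able', 'BALE', 'bela', 'Elba']
```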
<syntaxhighlight lang="rexx">/*REXX program finds words with the largest set of anagrams (of the same size). */
iFID= 'unixdict.txt' /*the dictionary input File IDentifier.*/
$=; !.=; ww=0; uw=0; most=0 /*initialize a bunch of REXX variables.*/
===version 1.2, optimized===
This optimized version eliminates the '''sortA''' subroutine and puts that subroutine's code in-line.
<syntaxhighlight lang="rexx">/*REXX program finds words with the largest set of anagrams (of the same size). */
iFID= 'unixdict.txt' /*the dictionary input File IDentifier.*/
$=; !.=; ww=0; uw=0; most=0 /*initialize a bunch of REXX variables.*/
===annotated version using PARSE===
(This algorithm actually utilizes a ''bin'' sort, one bin for each Latin letter.)
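The bin-sort idea (one counting bin per Latin letter) can be sketched in Python; `bin_sort_letters` is an illustrative name, not part of the REXX below:

```python
def bin_sort_letters(word):
    # One counting bin per letter A..Z; non-letters are ignored.
    counts = [0] * 26
    for ch in word.upper():
        if "A" <= ch <= "Z":
            counts[ord(ch) - ord("A")] += 1
    # Rebuild the word with its letters in alphabetical order.
    return "".join(chr(ord("A") + i) * n for i, n in enumerate(counts))

print(bin_sort_letters("Halloween"))  # AEEHLLNOW
```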
<syntaxhighlight lang="rexx">u= 'Halloween' /*word to be sorted by (Latin) letter.*/
upper u /*fast method to uppercase a variable. */
/*another: u = translate(u) */
===annotated version using a DO loop===
<syntaxhighlight lang="rexx">u= 'Halloween' /*word to be sorted by (Latin) letter.*/
upper u /*fast method to uppercase a variable. */
L=length(u) /*get the length of the word (in bytes)*/
===version 2===
<syntaxhighlight lang="rexx">/*REXX program finds words with the largest set of anagrams (same size)
* 07.08.2013 Walter Pachl
* sorta for word compression courtesy Gerard Schildberger,
=={{header|Ring}}==
<syntaxhighlight lang="ring">
# Project : Anagrams
=={{header|Ruby}}==
<syntaxhighlight lang="ruby">require 'open-uri'
anagram = Hash.new {|hash, key| hash[key] = []} # map sorted chars to anagrams
Short version (with lexically ordered result).
<syntaxhighlight lang="ruby">require 'open-uri'
anagrams = open('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'){|f| f.read.split.group_by{|w| w.each_char.sort} }
=={{header|Run BASIC}}==
<syntaxhighlight lang="runbasic">sqliteconnect #mem, ":memory:"
mem$ = "CREATE TABLE anti(gram,ordr);
CREATE INDEX ord ON anti(ordr)"
=={{header|Rust}}==
Unicode is hard, so the solution depends on what you consider to be an anagram: two strings that have the same bytes, the same codepoints, or the same graphemes. The first two are easily accomplished in Rust proper, but the latter requires an external library. Graphemes are probably the most correct way, but also the least efficient, since graphemes are variable-size and thus require a heap allocation per grapheme.
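The codepoint/grapheme distinction can be seen with nothing more than a standard library: two visually identical strings may differ at the codepoint level until they are Unicode-normalized. A Python sketch (the sample strings are illustrative, not from the word list):

```python
import unicodedata

def key_codepoints(w):
    # Anagram key at the codepoint level: sorted sequence of codepoints.
    return "".join(sorted(w))

a = "caf\u00e9"    # "café" with a precomposed é (U+00E9)
b = "cafe\u0301"   # "café" as e + combining acute accent (U+0301)
print(key_codepoints(a) == key_codepoints(b))  # False: different codepoints
# After NFC normalization both spellings collapse to the same codepoints.
na, nb = (unicodedata.normalize("NFC", s) for s in (a, b))
print(key_codepoints(na) == key_codepoints(nb))  # True
```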
<syntaxhighlight lang="rust">use std::collections::HashMap;
use std::fs::File;
use std::io::{BufRead,BufReader};
If we assume an ASCII string, we can map each character to a prime number and multiply these together to create a number which uniquely maps to each anagram.
<syntaxhighlight lang="rust">use std::collections::HashMap;
use std::path::Path;
use std::io::{self, BufRead, BufReader};
=={{header|Scala}}==
<syntaxhighlight lang="scala">val src = io.Source fromURL "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"
val vls = src.getLines.toList.groupBy(_.sorted).values
val max = vls.map(_.size).max
----
Another take:
<syntaxhighlight lang="scala">Source
.fromURL("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt").getLines.toList
.groupBy(_.sorted).values
=={{header|Scheme}}==
Uses two SRFI libraries: SRFI 125 for hash tables and SRFI 132 for sorting.
<syntaxhighlight lang="scheme">
(import (scheme base)
(scheme char)
=={{header|Seed7}}==
<syntaxhighlight lang="seed7">$ include "seed7_05.s7i";
include "gethttp.s7i";
include "strifile.s7i";
var integer: maxLength is 0;
begin
dictFile :=
while hasNext(dictFile) do
readln(dictFile, word);
=={{header|SETL}}==
<syntaxhighlight lang="setl">
anagrams := {};
while not eof(h) loop
=={{header|Sidef}}==
<syntaxhighlight lang="ruby">func main(file) {
file.open_r(\var fh, \var err) ->
|| die "Can't open file `#{file}' for reading: #{err}\n";
=={{header|Simula}}==
<syntaxhighlight lang="simula">COMMENT COMPILE WITH
$ cim -m64 anagrams-hashmap.sim
;
=={{header|Smalltalk}}==
<syntaxhighlight lang="smalltalk">
dict:= Dictionary new.
list do: [:val|
{{works with|Smalltalk/X}}
instead of asking for the strings, read the file:
<syntaxhighlight lang="smalltalk">d := Dictionary new.
'unixdict.txt' asFilename
readingLinesDo:[:eachWord |
...</pre>
not sure if getting the dictionary via http is part of the task; if so, replace the file-reading with:
<syntaxhighlight lang="smalltalk">'http://wiki.puzzlers.org/pub/wordlists/unixdict.txt' asURI contents asCollectionOfLines do:[:eachWord | ...</syntaxhighlight>
=={{header|SNOBOL4}}==
{{works with|Macro Spitbol}}
Note: unixdict.txt is passed in locally via STDIN. Newlines must be converted for Win/DOS environment.
<syntaxhighlight lang="snobol4">
define('sortw(str)a,i,j') :(sortw_end)
sortw a = array(size(str))
=={{header|Stata}}==
<syntaxhighlight lang="stata">import delimited http://wiki.puzzlers.org/pub/wordlists/unixdict.txt, clear
mata
a=st_sdata(.,.)
=={{header|SuperCollider}}==
<syntaxhighlight lang="supercollider">
var text, words, sorted, dict = IdentityDictionary.new, findMax;
File.use("unixdict.txt".resolveRelative, "r", { |f| text = f.readAllString });
Answers:
<syntaxhighlight lang="supercollider">
=={{header|Swift}}==
{{works with|Swift 2.0}}
<syntaxhighlight lang="swift">import Foundation
let wordsURL = NSURL(string: "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")!
=={{header|Tcl}}==
<syntaxhighlight lang="tcl">package require Tcl 8.5
package require http
=={{header|Transd}}==
<syntaxhighlight lang="scheme">#lang transd
MainModule: {
_start: (λ
(with fs FileStream() words String()
(open-r fs "/mnt/proj/tmp/unixdict.txt")
(textin fs words)
( -|
=={{header|TUSCRIPT}}==
<syntaxhighlight lang="tuscript">$$ MODE TUSCRIPT,{}
requestdata = REQUEST ("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
=={{header|UNIX Shell}}==
Process substitutions eliminate the need for command pipelines.
<syntaxhighlight lang="bash">http_get_body() {
local host=$1
local uri=$2
=={{header|Ursala}}==
The algorithm groups together the words made from the same unordered list of letters, then collects the groups by the number of words they contain, and finally shows the collection associated with the highest count.
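As a cross-check, the same three steps can be sketched in Python (sample words stand in for the dictionary):

```python
from itertools import groupby

# Sample stand-in for the dictionary file.
words = ["evil", "vile", "live", "veil", "tops", "spot", "opts", "a"]

# 1) group words made from the same unordered letters,
# 2) collect the groups by size, 3) show the largest collection.
keyed = sorted(words, key=sorted)                      # cluster anagrams together
groups = [list(g) for _, g in groupby(keyed, key=sorted)]
best = max(len(g) for g in groups)
print([g for g in groups if len(g) == best])  # [['evil', 'vile', 'live', 'veil']]
```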
<syntaxhighlight lang="ursala">
#show+
=={{header|VBA}}==
<syntaxhighlight lang="vb">
Option Explicit
=={{header|VBScript}}==
A little convoluted; it uses a dictionary and a recordset.
<syntaxhighlight lang="vb">
Const adInteger = 3
Const adVarChar = 200
=={{header|Vedit macro language}}==
The word list is expected to be in the same directory as the script.
<syntaxhighlight lang="vedit">File_Open("|(PATH_ONLY)\unixdict.txt")
Repeat(ALL) {
evil levi live veil vile
</pre>
=={{header|Visual Basic .NET}}==
<syntaxhighlight lang="vbnet">Imports System.IO
Imports System.Collections.ObjectModel
</PRE>
=={{header|V (Vlang)}}==
{{trans|Wren}}
<syntaxhighlight lang="v (vlang)">import os
fn main(){
=={{header|Wren}}==
{{libheader|Wren-sort}}
<syntaxhighlight lang="wren">
import "./sort" for Sort
var words = File.read("unixdict.txt").split("\n").map { |w| w.trim() }
=={{header|Yabasic}}==
<syntaxhighlight lang="yabasic">
maxw = 0 : c = 0 : dimens(c)
i = 0
=={{header|zkl}}==
<syntaxhighlight lang="zkl">File("unixdict.txt").read(*) // dictionary file to blob, copied from web
// blob to dictionary: key is word "fuzzed", values are anagram words
.pump(Void,T(fcn(w,d){
</pre>
In the case where it is desirable to get the dictionary from the web, use this code:
<syntaxhighlight lang="zkl">URL:="http://wiki.puzzlers.org/pub/wordlists/unixdict.txt";
var ZC=Import("zklCurl");
unixdict:=ZC().get(URL); //--> T(Data,bytes of header, bytes of trailer)
{{omit from|6502 Assembly|unixdict.txt is much larger than the CPU's address space.}}
{{omit from|8080 Assembly|See 6502 Assembly.}}
{{omit from|PARI/GP|No real capacity for string manipulation}}
{{omit from|Z80 Assembly|See 6502 Assembly.}}