Teacup rim text

From Rosetta Code
{{task}}


On a set of coasters we have, there's a picture of a teacup.   On the rim of the teacup the word   '''TEA'''   appears a number of times separated by bullet characters   (•).


It occurred to me that if the bullet were removed and the words run together,   you could start at any letter and still end up with a meaningful three-letter word.


So start at the   '''T'''   and read   '''TEA'''.   Start at the   '''E'''   and read   '''EAT''',   or start at the   '''A'''   and read   '''ATE'''.

That got me thinking that maybe there are other words that could be used rather than   '''TEA'''.   And that's just English.   What about Italian or Greek or ... um ... Telugu.

For English, we will use unixdict, located at:   [http://wiki.puzzlers.org/pub/wordlists/unixdict.txt unixdict.txt].

(This will maintain continuity with other Rosetta Code tasks that also use it.)


;Task:
Search for a set of words that could be printed around the edge of a teacup.   The words in each set are to be of the same length, that length being greater than two (thus precluding   '''AH'''   and   '''HA''',   for example.)

Having listed a set, for example   ['''ate tea eat'''],   refrain from displaying permutations of that set, e.g.:   ['''eat tea ate''']   etc.

The words should also be made of more than one letter   (thus precluding   '''III'''   and   '''OOO'''   etc.)

The relationship between these words is (using ATE as an example) that the first letter of the first becomes the last letter of the second.   The first letter of the second becomes the last letter of the third.   So   '''ATE'''   becomes   '''TEA'''   and   '''TEA'''   becomes   '''EAT'''.

All of the possible permutations, using this particular permutation technique, must be words in the list.

The set you generate for   '''ATE'''   will never include the word   '''ETA'''   as that cannot be reached via the first-to-last movement method.

Display one line for each set of teacup rim words.
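The procedure can be sketched language-neutrally in Python (an illustrative outline only; the function names are ours, and the word list is passed in as a plain collection rather than read from a file):

```python
def rotations(word):
    # All left-rotations, starting with the word itself.
    return [word[i:] + word[:i] for i in range(len(word))]

def teacup_sets(word_list):
    # Report each qualifying rotation cycle exactly once.
    lexicon = set(word_list)
    seen = set()
    result = []
    for word in sorted(lexicon):
        rots = rotations(word)
        if (len(word) < 3               # words must have length > 2
                or len(set(word)) == 1  # and more than one distinct letter
                or word in seen         # already shown as a permutation
                or any(r not in lexicon for r in rots)):
            continue
        result.append(rots)
        seen.update(rots)
    return result

print(teacup_sets(["tea", "eat", "ate", "aaa", "ah", "ha"]))
# → [['ate', 'tea', 'eat']]
```

Marking whole cycles as seen is what prevents [eat tea ate] from being listed after [ate tea eat].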


{{Template:Strings}}
<br><br>

=={{header|11l}}==
<syntaxhighlight lang="11l">F rotated(String s)
   R s[1..]‘’s[0]

V s = Set(File(‘unixdict.txt’).read().rtrim("\n").split("\n"))
L !s.empty
   L(=word) s // `=` is needed here because otherwise after `s.remove(word)` `word` becomes invalid
      s.remove(word)
      I word.len < 3
         L.break

      V w = word
      L 0 .< word.len - 1
         w = rotated(w)
         I w C s
            s.remove(w)
         E
            L.break
      L.was_no_break
         print(word, end' ‘’)
         w = word
         L 0 .< word.len - 1
            w = rotated(w)
            print(‘ -> ’w, end' ‘’)
         print()

      L.break</syntaxhighlight>

{{out}}
<pre>
apt -> pta -> tap
arc -> rca -> car
ate -> tea -> eat
</pre>

=={{header|Arturo}}==

<syntaxhighlight lang="rebol">wordset: map read.lines relative "unixdict.txt" => strip

rotateable?: function [w][
    loop 1..dec size w 'i [
        rotated: rotate w i
        if or? [rotated = w][not? contains? wordset rotated] ->
            return false
    ]
    return true
]

results: new []
loop select wordset 'word [3 =< size word] 'word [
    if rotateable? word ->
        'results ++ @[ sort map 1..size word 'i [ rotate word i ]]
]

loop sort unique results 'result [
    root: first result
    print join.with: " -> " map 1..size root 'i [ rotate.left root i]
]</syntaxhighlight>

{{out}}

<pre>tea -> eat -> ate
rca -> car -> arc
pta -> tap -> apt</pre>

=={{header|AutoHotkey}}==
<syntaxhighlight lang="autohotkey">Teacup_rim_text(wList){
    oWord := [], oRes := [], n := 0
    for i, w in StrSplit(wList, "`n", "`r")
        if StrLen(w) >= 3
            oWord[StrLen(w), w] := true
    for l, obj in oWord
    {
        for w, bool in obj
        {
            loop % l
                if oWord[l, rotate(w)]
                {
                    oWord[l, w] := 0
                    if (A_Index = 1)
                        n++, oRes[n] := w
                    if (A_Index < l)
                        oRes[n] := oRes[n] "," (w := rotate(w))
                }
            if (StrSplit(oRes[n], ",").Count() <> l)
                oRes.RemoveAt(n)
        }
    }
    return oRes
}

rotate(w){
    return SubStr(w, 2) . SubStr(w, 1, 1)
}</syntaxhighlight>
Examples:<syntaxhighlight lang="autohotkey">FileRead, wList, % A_Desktop "\unixdict.txt"
result := ""
for i, v in Teacup_rim_text(wList)
result .= v "`n"
MsgBox % result
return</syntaxhighlight>
{{out}}
<pre>apt,pta,tap
arc,rca,car
ate,tea,eat</pre>

=={{header|AWK}}==
<syntaxhighlight lang="awk">
# syntax: GAWK -f TEACUP_RIM_TEXT.AWK UNIXDICT.TXT
#
# sorting:
#   PROCINFO["sorted_in"] is used by GAWK
#   SORTTYPE is used by Thompson Automation's TAWK
#
{ for (i=1; i<=NF; i++) {
    arr[tolower($i)] = 0
  }
}
END {
  PROCINFO["sorted_in"] = "@ind_str_asc" ; SORTTYPE = 1
  for (i in arr) {
    leng = length(i)
    if (leng > 2) {
      delete tmp_arr
      words = str = i
      tmp_arr[i] = ""
      for (j=2; j<=leng; j++) {
        str = substr(str,2) substr(str,1,1)
        if (str in arr) {
          words = words " " str
          tmp_arr[str] = ""
        }
      }
      if (length(tmp_arr) == leng) {
        count = 0
        for (j in tmp_arr) {
          (arr[j] == 0) ? arr[j]++ : count++
        }
        if (count == 0) {
          printf("%s\n",words)
          circular++
        }
      }
    }
  }
  printf("%d words, %d circular\n",length(arr),circular)
  exit(0)
}
</syntaxhighlight>
{{out}}
<p>using UNIXDICT.TXT</p>
<pre>
apt pta tap
arc rca car
ate tea eat
25104 words, 3 circular
</pre>
<p>using MIT10000.TXT</p>
<pre>
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
10000 words, 5 circular
</pre>

=={{header|BaCon}}==
<syntaxhighlight lang="bacon">OPTION COLLAPSE TRUE

dict$ = LOAD$(DIRNAME$(ME$) & "/unixdict.txt")

FOR word$ IN dict$ STEP NL$
    IF LEN(word$) = 3 AND AMOUNT(UNIQ$(EXPLODE$(word$, 1))) = 3 THEN domain$ = APPEND$(domain$, 0, word$)
NEXT

FOR w1$ IN domain$
    w2$ = RIGHT$(w1$, 2) & LEFT$(w1$, 1)
    w3$ = RIGHT$(w2$, 2) & LEFT$(w2$, 1)
    IF TALLY(domain$, w2$) AND TALLY(domain$, w3$) AND NOT(TALLY(result$, w1$)) THEN
        result$ = APPEND$(result$, 0, w1$ & " " & w2$ & " " & w3$, NL$)
    ENDIF
NEXT

PRINT result$
PRINT "Total words: ", AMOUNT(dict$, NL$), ", and ", AMOUNT(result$, NL$), " are circular."</syntaxhighlight>
{{out}}
Using 'unixdict.txt':
<pre>apt pta tap
arc rca car
ate tea eat
Total words: 25104, and 3 are circular.</pre>
Using 'wordlist.10000':
<pre>aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
Total words: 10000, and 5 are circular.
</pre>

=={{header|C}}==
{{libheader|GLib}}
<syntaxhighlight lang="c">#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <glib.h>

int string_compare(gconstpointer p1, gconstpointer p2) {
    const char* const* s1 = p1;
    const char* const* s2 = p2;
    return strcmp(*s1, *s2);
}

GPtrArray* load_dictionary(const char* file, GError** error_ptr) {
    GError* error = NULL;
    GIOChannel* channel = g_io_channel_new_file(file, "r", &error);
    if (channel == NULL) {
        g_propagate_error(error_ptr, error);
        return NULL;
    }
    GPtrArray* dict = g_ptr_array_new_full(1024, g_free);
    GString* line = g_string_sized_new(64);
    gsize term_pos;
    while (g_io_channel_read_line_string(channel, line, &term_pos,
                                         &error) == G_IO_STATUS_NORMAL) {
        char* word = g_strdup(line->str);
        word[term_pos] = '\0';
        g_ptr_array_add(dict, word);
    }
    g_string_free(line, TRUE);
    g_io_channel_unref(channel);
    if (error != NULL) {
        g_propagate_error(error_ptr, error);
        g_ptr_array_free(dict, TRUE);
        return NULL;
    }
    g_ptr_array_sort(dict, string_compare);
    return dict;
}

void rotate(char* str, size_t len) {
    char c = str[0];
    memmove(str, str + 1, len - 1);
    str[len - 1] = c;
}

char* dictionary_search(const GPtrArray* dictionary, const char* word) {
    char** result = bsearch(&word, dictionary->pdata, dictionary->len,
                            sizeof(char*), string_compare);
    return result != NULL ? *result : NULL;
}

void find_teacup_words(GPtrArray* dictionary) {
    GHashTable* found = g_hash_table_new(g_str_hash, g_str_equal);
    GPtrArray* teacup_words = g_ptr_array_new();
    GString* temp = g_string_sized_new(8);
    for (size_t i = 0, n = dictionary->len; i < n; ++i) {
        char* word = g_ptr_array_index(dictionary, i);
        size_t len = strlen(word);
        if (len < 3 || g_hash_table_contains(found, word))
            continue;
        g_ptr_array_set_size(teacup_words, 0);
        g_string_assign(temp, word);
        bool is_teacup_word = true;
        for (size_t i = 0; i < len - 1; ++i) {
            rotate(temp->str, len);
            char* w = dictionary_search(dictionary, temp->str);
            if (w == NULL) {
                is_teacup_word = false;
                break;
            }
            if (strcmp(word, w) != 0 && !g_ptr_array_find(teacup_words, w, NULL))
                g_ptr_array_add(teacup_words, w);
        }
        if (is_teacup_word && teacup_words->len > 0) {
            printf("%s", word);
            g_hash_table_add(found, word);
            for (size_t i = 0; i < teacup_words->len; ++i) {
                char* teacup_word = g_ptr_array_index(teacup_words, i);
                printf(" %s", teacup_word);
                g_hash_table_add(found, teacup_word);
            }
            printf("\n");
        }
    }
    g_string_free(temp, TRUE);
    g_ptr_array_free(teacup_words, TRUE);
    g_hash_table_destroy(found);
}

int main(int argc, char** argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s dictionary\n", argv[0]);
        return EXIT_FAILURE;
    }
    GError* error = NULL;
    GPtrArray* dictionary = load_dictionary(argv[1], &error);
    if (dictionary == NULL) {
        if (error != NULL) {
            fprintf(stderr, "Cannot load dictionary file '%s': %s\n",
                    argv[1], error->message);
            g_error_free(error);
        }
        return EXIT_FAILURE;
    }
    find_teacup_words(dictionary);
    g_ptr_array_free(dictionary, TRUE);
    return EXIT_SUCCESS;
}</syntaxhighlight>

{{out}}
With unixdict.txt:
<pre>
apt pta tap
arc rca car
ate tea eat
</pre>
With wordlist.10000:
<pre>
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
</pre>

=={{header|C++}}==
<syntaxhighlight lang="cpp">#include <algorithm>
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <set>
#include <stdexcept>
#include <string>
#include <vector>

// filename is expected to contain one lowercase word per line
std::set<std::string> load_dictionary(const std::string& filename) {
    std::ifstream in(filename);
    if (!in)
        throw std::runtime_error("Cannot open file " + filename);
    std::set<std::string> words;
    std::string word;
    while (getline(in, word))
        words.insert(word);
    return words;
}

void find_teacup_words(const std::set<std::string>& words) {
    std::vector<std::string> teacup_words;
    std::set<std::string> found;
    for (auto w = words.begin(); w != words.end(); ++w) {
        std::string word = *w;
        size_t len = word.size();
        if (len < 3 || found.find(word) != found.end())
            continue;
        teacup_words.clear();
        teacup_words.push_back(word);
        for (size_t i = 0; i + 1 < len; ++i) {
            std::rotate(word.begin(), word.begin() + 1, word.end());
            if (word == *w || words.find(word) == words.end())
                break;
            teacup_words.push_back(word);
        }
        if (teacup_words.size() == len) {
            found.insert(teacup_words.begin(), teacup_words.end());
            std::cout << teacup_words[0];
            for (size_t i = 1; i < len; ++i)
                std::cout << ' ' << teacup_words[i];
            std::cout << '\n';
        }
    }
}

int main(int argc, char** argv) {
    if (argc != 2) {
        std::cerr << "usage: " << argv[0] << " dictionary\n";
        return EXIT_FAILURE;
    }
    try {
        find_teacup_words(load_dictionary(argv[1]));
    } catch (const std::exception& ex) {
        std::cerr << ex.what() << '\n';
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}</syntaxhighlight>

{{out}}
With unixdict.txt:
<pre>
apt pta tap
arc rca car
ate tea eat
</pre>
With wordlist.10000:
<pre>
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
</pre>

=={{header|F_Sharp|F#}}==
<syntaxhighlight lang="fsharp">
// Teacup rim text. Nigel Galloway: August 7th., 2019
let N=System.IO.File.ReadAllLines("dict.txt")|>Array.filter(fun n->String.length n=3 && Seq.length(Seq.distinct n)>1)|>Set.ofArray
let fG z=Set.map(fun n->System.String(Array.ofSeq (Seq.permute(fun g->(g+z)%3)n))) N
Set.intersectMany [N;fG 1;fG 2]|>Seq.distinctBy(Seq.sort>>Array.ofSeq>>System.String)|>Seq.iter(printfn "%s")
</syntaxhighlight>
{{out}}
<pre>
aim
arc
asp
ate
ips
</pre>


=={{header|Factor}}==
<syntaxhighlight lang="factor">USING: combinators.short-circuit fry grouping hash-sets
http.client kernel math prettyprint sequences sequences.extras
sets sorting splitting ;

"https://www.mit.edu/~ecprice/wordlist.10000" http-get nip
"\n" split [ { [ length 3 < ] [ all-equal? ] } 1|| ] reject
[ [ all-rotations ] map ] [ >hash-set ] bi
'[ [ _ in? ] all? ] filter [ natural-sort ] map members .</syntaxhighlight>
{{out}}
<pre>
{
    { "aim" "ima" "mai" }
    { "arc" "car" "rca" }
    { "asp" "pas" "spa" }
    { "ate" "eat" "tea" }
    { "ips" "psi" "sip" }
}
</pre>


=={{header|Go}}==
<syntaxhighlight lang="go">package main

import (
    // … (import list elided in this excerpt) …
)

// … (the readWords and rotate helpers are elided in this excerpt) …

func main() {
    dicts := []string{"mit_10000.txt", "unixdict.txt"} // local copies
    for _, dict := range dicts {
        fmt.Printf("Using %s:\n\n", dict)
        words := readWords(dict)
        n := len(words)
        used := make(map[string]bool)
    outer:
        for _, word := range words {
            runes := []rune(word)
            variants := []string{word}
            for i := 0; i < len(runes)-1; i++ {
                rotate(runes)
                word2 := string(runes)
                if word == word2 || used[word2] {
                    continue outer
                }
                ix := sort.SearchStrings(words, word2)
                if ix == n || words[ix] != word2 {
                    continue outer
                }
                variants = append(variants, word2)
            }
            for _, variant := range variants {
                used[variant] = true
            }
            fmt.Println(variants)
        }
        fmt.Println()
    }
}</syntaxhighlight>

{{out}}
<pre>
Using mit_10000.txt:

[aim ima mai]
[arc rca car]
[asp spa pas]
[ate tea eat]
[ips psi sip]

Using unixdict.txt:

[apt pta tap]
[arc rca car]
[ate tea eat]
</pre>


=={{header|Haskell}}==
===Using Data.Set===
Circular words of more than 2 characters in a local copy of a word list.
<syntaxhighlight lang="haskell">import Data.List (groupBy, intercalate, sort, sortBy)
import qualified Data.Set as S
import Data.Ord (comparing)
import Data.Function (on)

main :: IO ()
main =
  readFile "mitWords.txt" >>= (putStrLn . showGroups . circularWords . lines)

circularWords :: [String] -> [String]
-- … (definition elided in this excerpt) …

isCircular :: S.Set String -> String -> Bool
isCircular lex w = 2 < length w && all (`S.member` lex) (rotations w)

rotations :: [a] -> [[a]]
rotations = fmap <$> rotated <*> (enumFromTo 0 . pred . length)

rotated :: [a] -> Int -> [a]
rotated [] _ = []
rotated xs n = zipWith const (drop n (cycle xs)) xs

showGroups :: [String] -> String
showGroups xs =
  unlines $
    intercalate " -> " . fmap snd
      <$> filter
        ((1 <) . length)
        (groupBy (on (==) fst) (sortBy (comparing fst) (((,) =<< sort) <$> xs)))</syntaxhighlight>
{{Out}}
<pre>arc -> car -> rca
ate -> eat -> tea
aim -> ima -> mai
asp -> pas -> spa
ips -> psi -> sip</pre>

===Filtering anagrams===

Or taking a different approach, we can avoid the use of Data.Set by obtaining the groups of anagrams (of more than two characters) in the lexicon, and filtering out a circular subset of these:
<syntaxhighlight lang="haskell">import Data.Function (on)
import Data.List (groupBy, intercalate, sort, sortOn)
import Data.Ord (comparing)

main :: IO ()
main =
  readFile "mitWords.txt"
    >>= ( putStrLn
            . unlines
            . fmap (intercalate " -> ")
            . (circularOnly =<<)
            . anagrams
            . lines
        )

anagrams :: [String] -> [[String]]
anagrams ws =
  let harvest group px
        | px = [fmap snd group]
        | otherwise = []
   in groupBy
        (on (==) fst)
        (sortOn fst (((,) =<< sort) <$> ws))
        >>= (harvest <*> ((> 2) . length))

circularOnly :: [String] -> [[String]]
circularOnly ws
  | (length h - 1) > length rs = []
  | otherwise = [h : rs]
  where
    h = head ws
    rs = filter (isRotation h) (tail ws)

isRotation :: String -> String -> Bool
isRotation xs ys =
  xs
    /= until
      ( (||)
          . (ys ==)
          <*> (xs ==)
      )
      rotated
      (rotated xs)

rotated :: [a] -> [a]
rotated [] = []
rotated (x : xs) = xs <> [x]</syntaxhighlight>
{{Out}}
<pre>arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip</pre>
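The anagram-first strategy above can be sketched language-neutrally in Python (illustrative only; `circular_sets` is a hypothetical name, and the grouping key is the word's sorted letters):

```python
from collections import defaultdict

def rotations(word):
    return [word[i:] + word[:i] for i in range(len(word))]

def circular_sets(word_list):
    # Group candidate words by their multiset of letters ...
    groups = defaultdict(set)
    for w in word_list:
        if len(w) > 2:
            groups["".join(sorted(w))].add(w)
    # ... then keep a group only if it contains every rotation of one
    # of its members, with more than one distinct rotation.
    out = []
    for group in groups.values():
        rots = rotations(min(group))
        if set(rots) <= group and len(set(rots)) > 1:
            out.append(rots)
    return out

print(circular_sets(["tea", "eat", "ate", "aaa", "dog", "god"]))
# → [['ate', 'tea', 'eat']]
```

Since rotations preserve the letter multiset, every member of a teacup cycle necessarily lands in the same anagram group, so no cycle can be missed by this pre-grouping.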

=={{header|J}}==
<syntaxhighlight lang="j"> >@{.@> (#~ (=&#>@{.)@> * 2 < #@>)(</.~ {.@/:~@(|."0 1~ i.@#)L:0)cutLF fread'unixdict.txt'
apt
arc
ate</syntaxhighlight>

In other words, group words by their canonical rotation (from all rotations: the earliest, alphabetically), select groups with at least three different words, where the word count matches the letter count, then extract the first word from each group.
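That grouping can be sketched in Python (an illustrative rendering of the idea, not a translation of the J): key each word by its alphabetically earliest rotation, then keep the groups whose distinct-word count equals the word length:

```python
from collections import defaultdict

def canonical(word):
    # Lexicographically smallest rotation: one shared key per rotation cycle.
    return min(word[i:] + word[:i] for i in range(len(word)))

def teacup_groups(word_list):
    groups = defaultdict(set)
    for w in word_list:
        if len(w) > 2:
            groups[canonical(w)].add(w)
    # A complete cycle of an n-letter word contains n distinct words,
    # which also rules out single-letter words such as "aaa".
    return [sorted(g) for g in groups.values()
            if len(g) == len(next(iter(g)))]
```

A usage note: `teacup_groups(["tea", "eat", "ate", "dog"])` keys "tea", "eat" and "ate" all under "ate", while "dog" forms a group of one and is dropped.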

=={{header|Java}}==
{{trans|C++}}
<syntaxhighlight lang="java">import java.io.*;
import java.util.*;

public class Teacup {
    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("usage: java Teacup dictionary");
            System.exit(1);
        }
        try {
            findTeacupWords(loadDictionary(args[0]));
        } catch (Exception ex) {
            System.err.println(ex.getMessage());
        }
    }

    // The file is expected to contain one lowercase word per line
    private static Set<String> loadDictionary(String fileName) throws IOException {
        Set<String> words = new TreeSet<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
            String word;
            while ((word = reader.readLine()) != null)
                words.add(word);
            return words;
        }
    }

    private static void findTeacupWords(Set<String> words) {
        List<String> teacupWords = new ArrayList<>();
        Set<String> found = new HashSet<>();
        for (String word : words) {
            int len = word.length();
            if (len < 3 || found.contains(word))
                continue;
            teacupWords.clear();
            teacupWords.add(word);
            char[] chars = word.toCharArray();
            for (int i = 0; i < len - 1; ++i) {
                String rotated = new String(rotate(chars));
                if (rotated.equals(word) || !words.contains(rotated))
                    break;
                teacupWords.add(rotated);
            }
            if (teacupWords.size() == len) {
                found.addAll(teacupWords);
                System.out.print(word);
                for (int i = 1; i < len; ++i)
                    System.out.print(" " + teacupWords.get(i));
                System.out.println();
            }
        }
    }

    private static char[] rotate(char[] ch) {
        char c = ch[0];
        System.arraycopy(ch, 1, ch, 0, ch.length - 1);
        ch[ch.length - 1] = c;
        return ch;
    }
}</syntaxhighlight>

{{out}}
With unixdict.txt:
<pre>
apt pta tap
arc rca car
ate tea eat
</pre>
With wordlist.10000:
<pre>
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
</pre>


=={{header|JavaScript}}==
===Set() objects===
Reading a local dictionary with the macOS JS for Automation library:
{{Works with|JXA}}
<syntaxhighlight lang="javascript">(() => {
    'use strict';

    // main :: IO ()
    const main = () =>
        showGroups(
            circularWords(
                // Local copy of:
                // https://www.mit.edu/~ecprice/wordlist.10000
                lines(readFile('~/mitWords.txt'))
            )
        );

    // … (elided in this excerpt) …
                ([i, bln, s]) => iLast < i || !bln,
                ([i, bln, s]) => [1 + i, lexicon.has(s), rotated(s)],
                [0, true, rotated(w)]
            )[1];
    };

    // DISPLAY --------------------------------------------

    // showGroups :: [String] -> String
    const showGroups = xs =>
        unlines(map(
            gp => map(snd, gp).join(' -> '),
            groupBy(
                (a, b) => fst(a) === fst(b),
                sortBy(
                    comparing(fst),
                    map(x => Tuple(concat(sort(chars(x))), x),
                        xs
                    )
                )
            ).filter(gp => 1 < gp.length)
        ));

    // GENERIC FUNCTIONS ----------------------------------

    // Tuple (,) :: a -> b -> (a, b)
    const Tuple = (a, b) => ({
        type: 'Tuple',
        '0': a,
        '1': b,
        length: 2
    });

    // chars :: String -> [Char]
    const chars = s => s.split('');

    // comparing :: (a -> b) -> (a -> a -> Ordering)
    const comparing = f =>
        (x, y) => {
            const
                a = f(x),
                b = f(y);
            return a < b ? -1 : (a > b ? 1 : 0);
        };

    // concat :: [[a]] -> [a]
    // concat :: [String] -> String
    const concat = xs =>
        0 < xs.length ? (() => {
            const unit = 'string' !== typeof xs[0] ? (
                []
            ) : '';
            return unit.concat.apply(unit, xs);
        })() : [];

    // fst :: (a, b) -> a
    const fst = tpl => tpl[0];

    // groupBy :: (a -> a -> Bool) -> [a] -> [[a]]
    const groupBy = (f, xs) => {
        const tpl = xs.slice(1)
            .reduce((a, x) => {
                const h = a[1].length > 0 ? a[1][0] : undefined;
                return (undefined !== h) && f(h, x) ? (
                    Tuple(a[0], a[1].concat([x]))
                ) : Tuple(a[0].concat([a[1]]), [x]);
            }, Tuple([], 0 < xs.length ? [xs[0]] : []));
        return tpl[0].concat([tpl[1]]);
    };

    // lines :: String -> [String]
    const lines = s => s.split(/[\r\n]/);

    // map :: (a -> b) -> [a] -> [b]
    const map = (f, xs) =>
        (Array.isArray(xs) ? (
            xs
        ) : xs.split('')).map(f);

    // rotated :: String -> String
    // … (elided in this excerpt) …
            .join(' -> ')
        );

    // snd :: (a, b) -> b
    const snd = tpl => tpl[1];

    // sort :: Ord a => [a] -> [a]
    const sort = xs => xs.slice()
        .sort((a, b) => a < b ? -1 : (a > b ? 1 : 0));

    // sortBy :: (a -> a -> Ordering) -> [a] -> [a]
    const sortBy = (f, xs) =>
        xs.slice()
        .sort(f);

    // unlines :: [String] -> String
    const unlines = xs => xs.join('\n');

    // until :: (a -> Bool) -> (a -> a) -> a -> a
    // … (elided in this excerpt) …

    // MAIN ---
    return main();
})();</syntaxhighlight>
{{Out}}
<pre>arc -> car -> rca
ate -> eat -> tea
aim -> ima -> mai
asp -> pas -> spa
ips -> psi -> sip</pre>

===Anagram filtering===
Reading a local dictionary with the macOS JS for Automation library:
{{Works with|JXA}}
<syntaxhighlight lang="javascript">(() => {
    'use strict';

    // main :: IO ()
    const main = () =>
        anagrams(lines(readFile('~/mitWords.txt')))
        .flatMap(circularOnly)
        .map(xs => xs.join(' -> '))
        .join('\n')

    // anagrams :: [String] -> [[String]]
    const anagrams = ws =>
        groupBy(
            on(eq, fst),
            sortBy(
                comparing(fst),
                ws.map(w => Tuple(sort(chars(w)).join(''), w))
            )
        ).flatMap(
            gp => 2 < gp.length ? [
                gp.map(snd)
            ] : []
        )

    // circularOnly :: [String] -> [[String]]
    const circularOnly = ws => {
        const h = ws[0];
        return ws.length < h.length ? (
            []
        ) : (() => {
            const rs = rotations(h);
            return rs.every(r => ws.includes(r)) ? (
                [rs]
            ) : [];
        })();
    };

    // rotations :: String -> [String]
    const rotations = s =>
        takeIterate(s.length, rotated, s)

    // rotated :: [a] -> [a]
    const rotated = xs => xs.slice(1).concat(xs[0]);

    // GENERIC FUNCTIONS ----------------------------

    // Tuple (,) :: a -> b -> (a, b)
    const Tuple = (a, b) => ({
        type: 'Tuple',
        '0': a,
        '1': b,
        length: 2
    });

    // chars :: String -> [Char]
    const chars = s => s.split('');

    // comparing :: (a -> b) -> (a -> a -> Ordering)
    const comparing = f =>
        (x, y) => {
            const
                a = f(x),
                b = f(y);
            return a < b ? -1 : (a > b ? 1 : 0);
        };

    // eq (==) :: Eq a => a -> a -> Bool
    const eq = (a, b) => a === b

    // fst :: (a, b) -> a
    const fst = tpl => tpl[0];

    // groupBy :: (a -> a -> Bool) -> [a] -> [[a]]
    const groupBy = (f, xs) => {
        const tpl = xs.slice(1)
            .reduce((a, x) => {
                const h = a[1].length > 0 ? a[1][0] : undefined;
                return (undefined !== h) && f(h, x) ? (
                    Tuple(a[0], a[1].concat([x]))
                ) : Tuple(a[0].concat([a[1]]), [x]);
            }, Tuple([], 0 < xs.length ? [xs[0]] : []));
        return tpl[0].concat([tpl[1]]);
    };

    // lines :: String -> [String]
    const lines = s => s.split(/[\r\n]/);

    // mapAccumL :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
    const mapAccumL = (f, acc, xs) =>
        xs.reduce((a, x, i) => {
            const pair = f(a[0], x, i);
            return Tuple(pair[0], a[1].concat(pair[1]));
        }, Tuple(acc, []));

    // on :: (b -> b -> c) -> (a -> b) -> a -> a -> c
    const on = (f, g) => (a, b) => f(g(a), g(b));

    // readFile :: FilePath -> IO String
    const readFile = fp => {
        const
            e = $(),
            uw = ObjC.unwrap,
            s = uw(
                $.NSString.stringWithContentsOfFileEncodingError(
                    $(fp)
                    .stringByStandardizingPath,
                    $.NSUTF8StringEncoding,
                    e
                )
            );
        return undefined !== s ? (
            s
        ) : uw(e.localizedDescription);
    };

    // snd :: (a, b) -> b
    const snd = tpl => tpl[1];

    // sort :: Ord a => [a] -> [a]
    const sort = xs => xs.slice()
        .sort((a, b) => a < b ? -1 : (a > b ? 1 : 0));

    // sortBy :: (a -> a -> Ordering) -> [a] -> [a]
    const sortBy = (f, xs) =>
        xs.slice()
        .sort(f);

    // takeIterate :: Int -> (a -> a) -> a -> [a]
    const takeIterate = (n, f, x) =>
        snd(mapAccumL((a, _, i) => {
            const v = 0 !== i ? f(a) : x;
            return [v, v];
        }, x, Array.from({
            length: n
        })));

    // MAIN ---
    return main();
})();</syntaxhighlight>
{{Out}}
<pre>arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip</pre>

=={{header|jq}}==
{{works with|jq}}
'''Works with gojq, the Go implementation of jq''' (*)

(*) To run the program below using gojq, change `keys_unsorted` to
`keys`; this slows it down a lot.

<syntaxhighlight lang="jq"># Output: an array of the words when read around the rim
def read_teacup:
  . as $in
  | [range(0; length) | $in[.:] + $in[:.] ];

# Boolean
def is_teacup_word($dict):
  . as $in
  | all( range(1; length); . as $i | $dict[ $in[$i:] + $in[:$i] ]) ;

# Output: a stream of the eligible teacup words
def teacup_words:
  def same_letters:
    explode
    | .[0] as $first
    | all( .[1:][]; . == $first);

  # Only consider one word in a teacup cycle
  def consider: explode | .[0] == min;

  # Create the dictionary
  reduce (inputs
          | select(length>2 and (same_letters|not))) as $w ( {};
      .[$w]=true )
  | . as $dict
  | keys[]
  | select(consider and is_teacup_word($dict)) ;

# The task:
teacup_words
| read_teacup</syntaxhighlight>
{{out}}
Invocation example: jq -nRc -f teacup-rim.jq unixdict.txt
<pre>
["apt","pta","tap"]
["arc","rca","car"]
["ate","tea","eat"]
</pre>


=={{header|Julia}}==
Using the MIT 10000 word list, and excluding words of less than three letters, to reduce output length.
<syntaxhighlight lang="julia">using HTTP

rotate(s, n) = String(circshift(Vector{UInt8}(s), n))

isliketea(w, d) = (n = length(w); n > 2 && any(c -> c != w[1], w) &&
    all(i -> haskey(d, rotate(w, i)), 1:n-1))

function getteawords(listuri)
    req = HTTP.request("GET", listuri)
    wdict = Dict{String, Int}((lowercase(string(x)), 1) for x in split(String(req.body), r"\s+"))
    sort(unique([sort([rotate(word, i) for i in 1:length(word)])
                 for word in collect(keys(wdict)) if isliketea(word, wdict)]))
end

foreach(println, getteawords("https://www.mit.edu/~ecprice/wordlist.10000"))
</syntaxhighlight>{{out}}
<pre>
["aim", "ima", "mai"]
["arc", "car", "rca"]
["asp", "pas", "spa"]
["ate", "eat", "tea"]
["ips", "psi", "sip"]
</pre>


Using http://wiki.puzzlers.org/pub/wordlists/unixdict.txt as the word source.

<syntaxhighlight lang="javascript">
const wc = new CS.System.Net.WebClient();
const lines = wc.DownloadString("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt");
const words = lines.split(/\n/g);
const collection = {};

// … (rotation and collection-building code elided in this excerpt) …
    .filter(key => collection[key].length > 1)
    .forEach(key => console.log("%s", collection[key].join(", ")));
</syntaxhighlight>
<pre>
apt, pta, tap
arc, car, rca
ate, eat, tea
</pre>


=={{header|Perl 6}}==
=={{header|Mathematica}}/{{header|Wolfram Language}}==
<syntaxhighlight lang="mathematica">ClearAll[Teacuppable]
{{works with|Rakudo|2019.07.1}}
TeacuppableHelper[set_List] := Module[{f, s},
Using the same file as the reference implementation (Lychen), downloaded to a local file to give my connection a break.
f = First[set];
s = StringRotateLeft[f, #] & /@ Range[Length[set]];
Sort[s] == Sort[set]
]
Teacuppable[set_List] := Module[{ss, l},
l = StringLength[First[set]];
ss = Subsets[set, {l}];
Select[ss, TeacuppableHelper]
]
s = Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt", "String"];
s //= StringSplit[#, "\n"] &;
s //= Select[StringLength /* GreaterThan[2]];
s //= Map[ToLowerCase];
s //= Map[{#, Sort[Characters[#]]} &];
s //= GatherBy[#, Last] &;
s //= Select[Length /* GreaterEqualThan[2]];
s = s[[All, All, 1]];
s //= Select[StringLength[First[#]] <= Length[#] &];
Flatten[Teacuppable /@ s, 1]</syntaxhighlight>
{{out}}
<pre>{{"apt", "pta", "tap"}, {"arc", "car", "rca"}, {"ate", "eat", "tea"}}</pre>


=={{header|Nim}}==
<syntaxhighlight lang="nim">import sequtils, sets, sugar

let words = collect(initHashSet, for word in "unixdict.txt".lines: {word})

proc rotate(s: var string) =
  let first = s[0]
  for i in 1..s.high: s[i - 1] = s[i]
  s[^1] = first

var result: seq[string]
for word in "unixdict.txt".lines:
  if word.len >= 3:
    block checkWord:
      var w = word
      for _ in 1..w.len:
        w.rotate()
        if w notin words or w in result:
          # Not present in dictionary or already encountered.
          break checkWord
      if word.anyIt(it != word[0]):
        # More than one letter.
        result.add word

for word in result:
  var w = word
  stdout.write w
  for _ in 2..w.len:
    w.rotate()
    stdout.write " → ", w
  echo()</syntaxhighlight>

{{out}}
<pre>apt → pta → tap
arc → rca → car
ate → tea → eat</pre>
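The `rotate` proc above shifts each character left by one place and moves the first character to the end. A minimal Python rendering of that step, for readers comparing entries (an illustrative aside, not part of the Nim solution):

```python
def rotate_left(s):
    # Move the first character to the end: "tea" -> "eat" -> "ate" -> "tea".
    return s[1:] + s[:1]

def rotations(s):
    # Collect the full rotation cycle of a word, starting with the word itself.
    out, w = [], s
    for _ in range(len(s)):
        out.append(w)
        w = rotate_left(w)
    return out

print(rotations("tea"))  # ['tea', 'eat', 'ate']
```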

=={{header|Perl}}==
{{trans|Raku}}
<syntaxhighlight lang="perl">use strict;
use warnings;
use feature 'say';
use List::Util qw(uniqstr any);

my(%words,@teacups,%seen);

open my $fh, '<', 'ref/wordlist.10000';
while (<$fh>) {
    chomp(my $w = uc $_);
    next if length $w < 3;
    push @{$words{join '', sort split //, $w}}, $w;
}

for my $these (values %words) {
    next if @$these < 3;
    MAYBE: for (@$these) {
        my $maybe = $_;
        next if $seen{$_};
        my @print;
        for my $i (0 .. length $maybe) {
            if (any { $maybe eq $_ } @$these) {
                push @print, $maybe;
                $maybe = substr($maybe,1) . substr($maybe,0,1)
            } else {
                @print = () and next MAYBE
            }
        }
        if (@print) {
            push @teacups, [@print];
            $seen{$_}++ for @print;
        }
    }
}

say join ', ', uniqstr @$_ for sort @teacups;</syntaxhighlight>
{{out}}
<pre>ARC, RCA, CAR
ATE, TEA, EAT
AIM, IMA, MAI
ASP, SPA, PAS
IPS, PSI, SIP</pre>

=={{header|Phix}}==
Filters anagram lists
<!--<syntaxhighlight lang="phix">-->
<span style="color: #008080;">procedure</span> <span style="color: #000000;">filter_set</span><span style="color: #0000FF;">(</span><span style="color: #004080;">sequence</span> <span style="color: #000000;">anagrams</span><span style="color: #0000FF;">)</span>
<span style="color: #000080;font-style:italic;">-- anagrams is a (small) set of words that are all anagrams of each other
-- for example: {"angel","angle","galen","glean","lange"}
-- print any set(s) for which every rotation is also present (marking as
-- you go to prevent the same set appearing with each word being first)</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">used</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">repeat</span><span style="color: #0000FF;">(</span><span style="color: #004600;">false</span><span style="color: #0000FF;">,</span><span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">anagrams</span><span style="color: #0000FF;">))</span>
<span style="color: #008080;">for</span> <span style="color: #000000;">i</span><span style="color: #0000FF;">=</span><span style="color: #000000;">1</span> <span style="color: #008080;">to</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">anagrams</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">do</span>
<span style="color: #008080;">if</span> <span style="color: #008080;">not</span> <span style="color: #000000;">used</span><span style="color: #0000FF;">[</span><span style="color: #000000;">i</span><span style="color: #0000FF;">]</span> <span style="color: #008080;">then</span>
<span style="color: #000000;">used</span><span style="color: #0000FF;">[</span><span style="color: #000000;">i</span><span style="color: #0000FF;">]</span> <span style="color: #0000FF;">=</span> <span style="color: #004600;">true</span>
<span style="color: #004080;">string</span> <span style="color: #000000;">word</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">anagrams</span><span style="color: #0000FF;">[</span><span style="color: #000000;">i</span><span style="color: #0000FF;">]</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">res</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{</span><span style="color: #000000;">word</span><span style="color: #0000FF;">}</span>
<span style="color: #008080;">for</span> <span style="color: #000000;">r</span><span style="color: #0000FF;">=</span><span style="color: #000000;">2</span> <span style="color: #008080;">to</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">word</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">do</span>
<span style="color: #000000;">word</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">word</span><span style="color: #0000FF;">[</span><span style="color: #000000;">2</span><span style="color: #0000FF;">..$]&</span><span style="color: #000000;">word</span><span style="color: #0000FF;">[</span><span style="color: #000000;">1</span><span style="color: #0000FF;">]</span>
<span style="color: #004080;">integer</span> <span style="color: #000000;">k</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">find</span><span style="color: #0000FF;">(</span><span style="color: #000000;">word</span><span style="color: #0000FF;">,</span><span style="color: #000000;">anagrams</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">k</span><span style="color: #0000FF;">=</span><span style="color: #000000;">0</span> <span style="color: #008080;">then</span> <span style="color: #000000;">res</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{}</span> <span style="color: #008080;">exit</span> <span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">if</span> <span style="color: #008080;">not</span> <span style="color: #7060A8;">find</span><span style="color: #0000FF;">(</span><span style="color: #000000;">word</span><span style="color: #0000FF;">,</span><span style="color: #000000;">res</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">then</span>
<span style="color: #000000;">res</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">append</span><span style="color: #0000FF;">(</span><span style="color: #000000;">res</span><span style="color: #0000FF;">,</span><span style="color: #000000;">word</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #000000;">used</span><span style="color: #0000FF;">[</span><span style="color: #000000;">k</span><span style="color: #0000FF;">]</span> <span style="color: #0000FF;">=</span> <span style="color: #004600;">true</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">for</span>
<span style="color: #008080;">if</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">res</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">then</span> <span style="color: #0000FF;">?</span><span style="color: #000000;">res</span> <span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">for</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">procedure</span>
<span style="color: #008080;">procedure</span> <span style="color: #000000;">teacup</span><span style="color: #0000FF;">(</span><span style="color: #004080;">string</span> <span style="color: #000000;">filename</span><span style="color: #0000FF;">,</span> <span style="color: #004080;">integer</span> <span style="color: #000000;">minlen</span><span style="color: #0000FF;">=</span><span style="color: #000000;">3</span><span style="color: #0000FF;">,</span> <span style="color: #004080;">bool</span> <span style="color: #000000;">allow_mono</span><span style="color: #0000FF;">=</span><span style="color: #004600;">false</span><span style="color: #0000FF;">)</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">letters</span><span style="color: #0000FF;">,</span> <span style="color: #000080;font-style:italic;">-- a sorted word, eg "ate" -&gt; "aet".</span>
<span style="color: #000000;">words</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{},</span> <span style="color: #000080;font-style:italic;">-- in eg {{"aet","ate"},...} form</span>
<span style="color: #000000;">anagrams</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{},</span> <span style="color: #000080;font-style:italic;">-- a set with same letters</span>
<span style="color: #000000;">last</span> <span style="color: #0000FF;">=</span> <span style="color: #008000;">""</span> <span style="color: #000080;font-style:italic;">-- (for building such sets)</span>
<span style="color: #004080;">object</span> <span style="color: #000000;">word</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"using %s"</span><span style="color: #0000FF;">,</span><span style="color: #000000;">filename</span><span style="color: #0000FF;">)</span>
<span style="color: #004080;">integer</span> <span style="color: #000000;">fn</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">open</span><span style="color: #0000FF;">(</span><span style="color: #000000;">filename</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"r"</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">fn</span><span style="color: #0000FF;">=-</span><span style="color: #000000;">1</span> <span style="color: #008080;">then</span> <span style="color: #7060A8;">crash</span><span style="color: #0000FF;">(</span><span style="color: #000000;">filename</span><span style="color: #0000FF;">&</span><span style="color: #008000;">" not found"</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">while</span> <span style="color: #000000;">1</span> <span style="color: #008080;">do</span>
<span style="color: #000000;">word</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">lower</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">trim</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">gets</span><span style="color: #0000FF;">(</span><span style="color: #000000;">fn</span><span style="color: #0000FF;">)))</span>
<span style="color: #008080;">if</span> <span style="color: #004080;">atom</span><span style="color: #0000FF;">(</span><span style="color: #000000;">word</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">then</span> <span style="color: #008080;">exit</span> <span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">if</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">word</span><span style="color: #0000FF;">)>=</span><span style="color: #000000;">minlen</span> <span style="color: #008080;">then</span>
<span style="color: #000000;">letters</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">sort</span><span style="color: #0000FF;">(</span><span style="color: #000000;">word</span><span style="color: #0000FF;">)</span>
<span style="color: #000000;">words</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">append</span><span style="color: #0000FF;">(</span><span style="color: #000000;">words</span><span style="color: #0000FF;">,</span> <span style="color: #0000FF;">{</span><span style="color: #000000;">letters</span><span style="color: #0000FF;">,</span> <span style="color: #000000;">word</span><span style="color: #0000FF;">})</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">while</span>
<span style="color: #7060A8;">close</span><span style="color: #0000FF;">(</span><span style="color: #000000;">fn</span><span style="color: #0000FF;">)</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">", %d words read\n"</span><span style="color: #0000FF;">,</span><span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">words</span><span style="color: #0000FF;">))</span>
<span style="color: #008080;">if</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">words</span><span style="color: #0000FF;">)!=</span><span style="color: #000000;">0</span> <span style="color: #008080;">then</span>
<span style="color: #000000;">words</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">sort</span><span style="color: #0000FF;">(</span><span style="color: #000000;">words</span><span style="color: #0000FF;">)</span> <span style="color: #000080;font-style:italic;">-- group by anagram</span>
<span style="color: #008080;">for</span> <span style="color: #000000;">i</span><span style="color: #0000FF;">=</span><span style="color: #000000;">1</span> <span style="color: #008080;">to</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">words</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">do</span>
<span style="color: #0000FF;">{</span><span style="color: #000000;">letters</span><span style="color: #0000FF;">,</span><span style="color: #000000;">word</span><span style="color: #0000FF;">}</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">words</span><span style="color: #0000FF;">[</span><span style="color: #000000;">i</span><span style="color: #0000FF;">]</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">letters</span><span style="color: #0000FF;">=</span><span style="color: #000000;">last</span> <span style="color: #008080;">then</span>
<span style="color: #000000;">anagrams</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">append</span><span style="color: #0000FF;">(</span><span style="color: #000000;">anagrams</span><span style="color: #0000FF;">,</span><span style="color: #000000;">word</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">else</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">allow_mono</span> <span style="color: #008080;">or</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">anagrams</span><span style="color: #0000FF;">)>=</span><span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">last</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">then</span>
<span style="color: #000000;">filter_set</span><span style="color: #0000FF;">(</span><span style="color: #000000;">anagrams</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #000000;">last</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">letters</span>
<span style="color: #000000;">anagrams</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{</span><span style="color: #000000;">word</span><span style="color: #0000FF;">}</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">for</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">allow_mono</span> <span style="color: #008080;">or</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">anagrams</span><span style="color: #0000FF;">)>=</span><span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">last</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">then</span>
<span style="color: #000000;">filter_set</span><span style="color: #0000FF;">(</span><span style="color: #000000;">anagrams</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">procedure</span>
<span style="color: #000000;">teacup</span><span style="color: #0000FF;">(</span><span style="color: #7060A8;">join_path</span><span style="color: #0000FF;">({</span><span style="color: #008000;">"demo"</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"unixdict.txt"</span><span style="color: #0000FF;">}))</span>
<span style="color: #000080;font-style:italic;">-- These match output from other entries:
--teacup(join_path({"demo","unixdict.txt"}),allow_mono:=true)
--teacup(join_path({"demo","rosetta","mit.wordlist.10000.txt"}))
--teacup(join_path({"demo","rosetta","words.txt"}),4,true)
-- Note that allow_mono is needed to display eg {"agag","gaga"}</span>
<!--</syntaxhighlight>-->
{{out}}
<pre>
using demo\unixdict.txt, 24948 words read
{"arc","rca","car"}
{"ate","tea","eat"}
{"apt","pta","tap"}
</pre>

=={{header|PicoLisp}}==
<syntaxhighlight lang="picolisp">(de rotw (W)
   (let W (chop W)
      (unless (or (apply = W) (not (cddr W)))
         (make
            (do (length W)
               (link (pack (copy W)))
               (rot W) ) ) ) ) )
(off D)
(put 'D 'v (cons))
(mapc
   '((W)
      (idx 'D (cons (hash W) W) T) )
   (setq Words
      (make (in "wordlist.10000" (while (line T) (link @)))) ) )
(mapc
   println
   (extract
      '((W)
         (let? Lst (rotw W)
            (when
               (and
                  (fully
                     '((L) (idx 'D (cons (hash L) L)))
                     Lst )
                  (not
                     (member (car Lst) (car (get 'D 'v))) ) )
               (mapc
                  '((L) (push (get 'D 'v) L))
                  Lst )
               Lst ) ) )
      Words ) )</syntaxhighlight>
{{out}}
<pre>
("aim" "mai" "ima")
("arc" "car" "rca")
("asp" "pas" "spa")
("ate" "eat" "tea")
("ips" "sip" "psi")
</pre>

=={{header|PureBasic}}==
<syntaxhighlight lang="purebasic">DataSection
  dname:
  Data.s "./Data/unixdict.txt"
  Data.s "./Data/wordlist.10000.txt"
  Data.s ""
EndDataSection

EnableExplicit
Dim c.s{1}(2)
Define.s txt, bset, res, dn
Define.i i,q, cw
Restore dname : Read.s dn
While OpenConsole() And ReadFile(0,dn)
  While Not Eof(0)
    cw+1
    txt=ReadString(0)
    If Len(txt)=3 : bset+txt+";" : EndIf
  Wend
  CloseFile(0)
  For i=1 To CountString(bset,";")
    PokeS(c(),StringField(bset,i,";"))
    If FindString(res,c(0)+c(1)+c(2)) : Continue : EndIf
    If c(0)=c(1) Or c(1)=c(2) Or c(0)=c(2) : Continue : EndIf
    If FindString(bset,c(1)+c(2)+c(0)) And FindString(bset,c(2)+c(0)+c(1))
      res+c(0)+c(1)+c(2)+~"\t"+c(1)+c(2)+c(0)+~"\t"+c(2)+c(0)+c(1)+~"\n"
    EndIf
  Next
  PrintN(res+Str(cw)+" words, "+Str(CountString(res,~"\n"))+" circular") : Input()
  bset="" : res="" : cw=0
  Read.s dn
Wend</syntaxhighlight>
{{out}}
<pre>apt pta tap
arc rca car
ate tea eat
25104 words, 3 circular

aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
10000 words, 5 circular</pre>
=={{header|Python}}==
===Functional===
Composing generic functions, and considering only anagram groups.
{{Trans|JavaScript}}
<syntaxhighlight lang="python">'''Teacup rim text'''

from itertools import chain, groupby
from os.path import expanduser
from functools import reduce


# main :: IO ()
def main():
    '''Circular anagram groups, of more than one word,
       and containing words of length > 2, found in:
       https://www.mit.edu/~ecprice/wordlist.10000
    '''
    print('\n'.join(
        concatMap(circularGroup)(
            anagrams(3)(
                # Reading from a local copy.
                lines(readFile('~/mitWords.txt'))
            )
        )
    ))


# anagrams :: Int -> [String] -> [[String]]
def anagrams(n):
    '''Groups of anagrams, of minimum group size n,
       found in the given word list.
    '''
    def go(ws):
        def f(xs):
            return [
                [snd(x) for x in xs]
            ] if n <= len(xs) >= len(xs[0][0]) else []
        return concatMap(f)(groupBy(fst)(sorted(
            [(''.join(sorted(w)), w) for w in ws],
            key=fst
        )))
    return go


# circularGroup :: [String] -> [String]
def circularGroup(ws):
    '''Either an empty list, or a list containing
       a string showing any circular subset found in ws.
    '''
    lex = set(ws)
    iLast = len(ws) - 1
    # If the set contains one word that is circular,
    # then it must contain all of them.
    (i, blnCircular) = until(
        lambda tpl: tpl[1] or (tpl[0] > iLast)
    )(
        lambda tpl: (1 + tpl[0], isCircular(lex)(ws[tpl[0]]))
    )(
        (0, False)
    )
    return [' -> '.join(allRotations(ws[i]))] if blnCircular else []


# isCircular :: Set String -> String -> Bool
def isCircular(lexicon):
    '''True if all of a word's rotations
       are found in the given lexicon.
    '''
    def go(w):
        def f(tpl):
            (i, _, x) = tpl
            return (1 + i, x in lexicon, rotated(x))

        iLast = len(w) - 1
        return until(
            lambda tpl: iLast < tpl[0] or (not tpl[1])
        )(f)(
            (0, True, rotated(w))
        )[1]
    return go


# allRotations :: String -> [String]
def allRotations(w):
    '''All rotations of the string w.'''
    return takeIterate(len(w) - 1)(
        rotated
    )(w)


# GENERIC -------------------------------------------------

# concatMap :: (a -> [b]) -> [a] -> [b]
def concatMap(f):
    '''A concatenated list over which a function has been mapped.
       The list monad can be derived by using a function f which
       wraps its output in a list,
       (using an empty list to represent computational failure).
    '''
    def go(xs):
        return chain.from_iterable(map(f, xs))
    return go


# fst :: (a, b) -> a
def fst(tpl):
    '''First member of a pair.'''
    return tpl[0]


# groupBy :: (a -> b) -> [a] -> [[a]]
def groupBy(f):
    '''The elements of xs grouped,
       preserving order, by equality
       in terms of the key function f.
    '''
    def go(xs):
        return [
            list(x[1]) for x in groupby(xs, key=f)
        ]
    return go


# lines :: String -> [String]
def lines(s):
    '''A list of strings,
       (containing no newline characters)
       derived from a single new line delimited string.
    '''
    return s.splitlines()


# mapAccumL :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
def mapAccumL(f):
    '''A tuple of an accumulation and a list derived by a
       combined map and fold,
       with accumulation from left to right.
    '''
    def go(a, x):
        tpl = f(a[0], x)
        return (tpl[0], a[1] + [tpl[1]])
    return lambda acc: lambda xs: (
        reduce(go, xs, (acc, []))
    )


# rotated :: String -> String
def rotated(s):
    '''A string rotated 1 character to the right.'''
    return s[1:] + s[0]


# snd :: (a, b) -> b
def snd(tpl):
    '''Second member of a pair.'''
    return tpl[1]


# takeIterate :: Int -> (a -> a) -> a -> [a]
def takeIterate(n):
    '''Each value of n iterations of f
       over a start value of x.
    '''
    def go(f):
        def g(x):
            def h(a, i):
                v = f(a) if i else x
                return (v, v)
            return mapAccumL(h)(x)(
                range(0, 1 + n)
            )[1]
        return g
    return go


# until :: (a -> Bool) -> (a -> a) -> a -> a
def until(p):
    '''The result of repeatedly applying f until p holds.
       The initial seed value is x.
    '''
    def go(f):
        def g(x):
            v = x
            while not p(v):
                v = f(v)
            return v
        return g
    return go




# MAIN ---
if __name__ == '__main__':
    main()</syntaxhighlight>
{{Out}}
<pre>['aaa', 'apt', 'arc', 'ate', 'car', 'eat', 'iii', 'pta', 'rca', 'tap', 'tea']</pre>

=={{header|Raku}}==
(formerly Perl 6)
{{works with|Rakudo|2019.07.1}}
There doesn't seem to be any restriction that the word needs to consist only of lowercase letters, so words of any case are included. Since the example code specifically shows the example words (TEA, EAT, ATE) in uppercase, I elected to uppercase the found words.

As the specs keep changing, this version will accept ANY text file as its dictionary and accepts parameters to configure the minimum number of characters in a word to consider and whether to allow mono-character words.

Defaults to unixdict.txt, minimum 3 characters and mono-character 'words' disallowed. Feed a file name to use a different word list, an integer to --min-chars and/or a truthy value to --mono to allow mono-chars.

<syntaxhighlight lang="raku" line>my %*SUB-MAIN-OPTS = :named-anywhere;

unit sub MAIN ( $dict = 'unixdict.txt', :$min-chars = 3, :$mono = False );

my %words;
$dict.IO.slurp.words.map: { .chars < $min-chars ?? (next) !! %words{.uc.comb.sort.join}.push: .uc };

my @teacups;
my %seen;

for %words.values -> @these {
    next if !$mono && @these < 2;
    MAYBE: for @these {
        my $maybe = $_;
        next if %seen{$_};
        my @print;
        for ^$maybe.chars {
            if $maybe ∈ @these {
                @print.push: $maybe;
                $maybe = $maybe.comb.list.rotate.join;
            } else {
                @print = ();
                next MAYBE
            }
        }
        if @print.elems {
            @teacups.push: @print;
            %seen{$_}++ for @print;
        }
    }
}

say .unique.join(", ") for sort @teacups;</syntaxhighlight>
{{out|Defaults}}
Command line: <tt>raku teacup.p6</tt>
<pre>APT, PTA, TAP
ARC, RCA, CAR
ATE, TEA, EAT</pre>
{{out|Allow mono-chars}}
Command line: <tt>raku teacup.p6 --mono=1</tt>
<pre>AAA
APT, PTA, TAP
ARC, RCA, CAR
ATE, TEA, EAT
III</pre>
{{out|Using a larger dictionary}}
words.txt file from https://github.com/dwyl/english-words

Command line: <tt>raku teacup.p6 words.txt --min-chars=4 --mono=Allow</tt>
<pre>AAAA
AAAAAA
ADAD, DADA
ADAR, DARA, ARAD, RADA
AGAG, GAGA
ALIT, LITA, ITAL, TALI
AMAN, MANA, ANAM, NAMA
AMAR, MARA, ARAM, RAMA
AMEL, MELA, ELAM, LAME
AMEN, MENA, ENAM, NAME
AMOR, MORA, ORAM, RAMO
ANAN, NANA
ANIL, NILA, ILAN, LANI
ARAR, RARA
ARAS, RASA, ASAR, SARA
ARIS, RISA, ISAR, SARI
ASEL, SELA, ELAS, LASE
ASER, SERA, ERAS, RASE
DENI, ENID, NIDE, IDEN
DOLI, OLID, LIDO, IDOL
EGOR, GORE, OREG, REGO
ENOL, NOLE, OLEN, LENO
ESOP, SOPE, OPES, PESO
ISIS, SISI
MMMM
MORO, OROM, ROMO, OMOR
OOOO</pre>


=={{header|REXX}}==
All words that contained non─letter (Latin) characters &nbsp; (periods, decimal digits, minus signs, underbars, or embedded blanks) &nbsp;
<br>weren't considered as candidates for circular words.

Duplicated words (such as &nbsp; '''sop''' &nbsp; and &nbsp; '''SOP''') &nbsp; are ignored &nbsp; (just the 2<sup>nd</sup> and subsequent duplicated words are deleted).

All words in the dictionary are treated as caseless.

The dictionary wasn't assumed to be sorted in any way.
<syntaxhighlight lang="rexx">/*REXX pgm finds circular words (length>2), using a dictionary, suppress permutations.*/
parse arg iFID L .                               /*obtain optional arguments from the CL*/
if iFID==''|iFID=="," then iFID= 'wordlist.10k'  /*Not specified?  Then use the default.*/
if L==''   |   L==","  then    L= 3              /*Not specified?  Then use the default.*/
#= 0                                             /*number of words in dictionary, Len>L.*/
@.=                                              /*stemmed array of non─duplicated words*/
   do r=0  while lines(iFID) \== 0               /*read all lines (words) in dictionary.*/
   parse upper value linein(iFID) with z .       /*obtain a word from the dictionary.   */
   if length(z)<L | @.z\==''  then iterate       /*length must be  L  or more,  no dups.*/
   if \datatype(z, 'U')       then iterate       /*Word contains non-letters?  Then skip*/
   @.z = z                                       /*assign a word from the dictionary.   */
   #= # + 1;   $.#= z                            /*bump word count; append word to list.*/
   end   /*r*/                                   /* [↑]  dictionary need not be sorted. */

say "There're "   r   ' entries in the dictionary (of all types): '   iFID
say "There're "   #   ' words in the dictionary of at least length '   L
say
cw= 0                                            /*the number of circular words (so far)*/
   do j=1  for #;   x= $.j;   y= x               /*obtain the  Jth  word in the list.   */
   if x==''  then iterate                        /*if a null,  don't show variants.     */
   yy= y                                         /*the start of a list of the variants. */
      do k=1  for length(x)-1                    /*"circulate" the letters in the word. */
      y= substr(y, 2)left(y, 1)                  /*add the left letter to the right end.*/
      if @.y==''  then iterate j                 /*if not a word,  then skip this word. */
      yy= yy',' y                                /*append to the list of the variants.  */
      if y\==x  then @.y=                        /*nullify word to suppress permutations*/
      end   /*k*/                                /* [↓] ··· except for monolithic words.*/
   cw= cw + 1                                    /*bump counter of circular words found.*/
   say 'circular word: '   yy                    /*display a circular word and variants.*/
   end   /*j*/
say
say cw   ' circular words were found.'           /*stick a fork in it,  we're all done. */</syntaxhighlight>
{{out|output|text=&nbsp; when using the default inputs:}}
<pre>
There're  9578  words in the dictionary of at least length  3

circular word:  AIM, IMA, MAI
circular word:  ARC, RCA, CAR
circular word:  ASP, SPA, PAS
circular word:  ATE, TEA, EAT
circular word:  IPS, PSI, SIP

5  circular words were found.
</pre>

=={{header|Ruby}}==
"woordenlijst.txt" is a Dutch word list. It has 413,125 words of more than 2 characters; processing it takes about two minutes.
<syntaxhighlight lang="ruby">lists = ["unixdict.txt", "wordlist.10000", "woordenlijst.txt"]

lists.each do |list|
words = open(list).readlines( chomp: true).reject{|w| w.size < 3 }
grouped_by_size = words.group_by(&:size)
tea_words = words.filter_map do |word|
chars = word.chars
next unless chars.none?{|c| c < chars.first }
next if chars.uniq.size == 1
rotations = word.size.times.map {|i| chars.rotate(i).join }
rotations if rotations.all?{|rot| grouped_by_size[rot.size].include? rot }
end
puts "", list + ":"
tea_words.uniq(&:to_set).each{|ar| puts ar.join(", ") }
end
</syntaxhighlight>
{{out}}
<pre>
unixdict.txt:
apt, pta, tap
arc, rca, car
ate, tea, eat

wordlist.10000:
aim, ima, mai
arc, rca, car
asp, spa, pas
ate, tea, eat
ips, psi, sip

woordenlijst.txt:
ast, sta, tas
een, ene, nee
eer, ere, ree
</pre>

=={{header|Rust}}==
<syntaxhighlight lang="rust">use std::collections::BTreeSet;
use std::collections::HashSet;
use std::fs::File;
use std::io::{self, BufRead};
use std::iter::FromIterator;

fn load_dictionary(filename: &str) -> std::io::Result<BTreeSet<String>> {
let file = File::open(filename)?;
let mut dict = BTreeSet::new();
for line in io::BufReader::new(file).lines() {
let word = line?;
dict.insert(word);
}
Ok(dict)
}

fn find_teacup_words(dict: &BTreeSet<String>) {
let mut teacup_words: Vec<&String> = Vec::new();
let mut found: HashSet<&String> = HashSet::new();
for word in dict {
let len = word.len();
if len < 3 || found.contains(word) {
continue;
}
teacup_words.clear();
let mut is_teacup_word = true;
let mut chars: Vec<char> = word.chars().collect();
for _ in 1..len {
chars.rotate_left(1);
if let Some(w) = dict.get(&String::from_iter(&chars)) {
if !w.eq(word) && !teacup_words.contains(&w) {
teacup_words.push(w);
}
} else {
is_teacup_word = false;
break;
}
}
if !is_teacup_word || teacup_words.is_empty() {
continue;
}
print!("{}", word);
found.insert(word);
for w in &teacup_words {
found.insert(w);
print!(" {}", w);
}
println!();
}
}

fn main() {
let args: Vec<String> = std::env::args().collect();
if args.len() != 2 {
eprintln!("Usage: teacup dictionary");
std::process::exit(1);
}
let dict = load_dictionary(&args[1]);
match dict {
Ok(dict) => find_teacup_words(&dict),
Err(error) => eprintln!("Cannot open file {}: {}", &args[1], error),
}
}</syntaxhighlight>

{{out}}
With unixdict.txt:
<pre>
apt pta tap
arc rca car
ate tea eat
</pre>
With wordlist.10000:
<pre>
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
</pre>

=={{header|Swift}}==
<syntaxhighlight lang="swift">import Foundation

func loadDictionary(_ path: String) throws -> Set<String> {
let contents = try String(contentsOfFile: path, encoding: String.Encoding.ascii)
return Set<String>(contents.components(separatedBy: "\n").filter{!$0.isEmpty})
}

func rotate<T>(_ array: inout [T]) {
guard array.count > 1 else {
return
}
let first = array[0]
array.replaceSubrange(0..<array.count-1, with: array[1...])
array[array.count - 1] = first
}

func findTeacupWords(_ dictionary: Set<String>) {
var teacupWords: [String] = []
var found = Set<String>()
for word in dictionary {
if word.count < 3 || found.contains(word) {
continue
}
teacupWords.removeAll()
var isTeacupWord = true
var chars = Array(word)
for _ in 1..<word.count {
rotate(&chars)
let w = String(chars)
if (!dictionary.contains(w)) {
isTeacupWord = false
break
}
if w != word && !teacupWords.contains(w) {
teacupWords.append(w)
}
}
if !isTeacupWord || teacupWords.isEmpty {
continue
}
print(word, terminator: "")
found.insert(word)
for w in teacupWords {
found.insert(w)
print(" \(w)", terminator: "")
}
print()
}
}

do {
let dictionary = try loadDictionary("unixdict.txt")
findTeacupWords(dictionary)
} catch {
print(error)
}</syntaxhighlight>

{{out}}
<pre>
car arc rca
eat ate tea
pta tap apt
</pre>

=={{header|Wren}}==
{{trans|Go}}
{{libheader|Wren-str}}
{{libheader|Wren-sort}}
<syntaxhighlight lang="wren">import "io" for File
import "./str" for Str
import "./sort" for Find

var readWords = Fn.new { |fileName|
var dict = File.read(fileName).split("\n")
return dict.where { |w| w.count >= 3 }.toList
}

var dicts = ["mit10000.txt", "unixdict.txt"]
for (dict in dicts) {
System.print("Using %(dict):\n")
var words = readWords.call(dict)
var n = words.count
var used = {}
for (word in words) {
var outer = false
var variants = [word]
var word2 = word
for (i in 0...word.count-1) {
word2 = Str.lshift(word2)
if (word == word2 || used[word2]) {
outer = true
break
}
var ix = Find.first(words, word2)
if (ix == n || words[ix] != word2) {
outer = true
break
}
variants.add(word2)
}
if (!outer) {
for (variant in variants) used[variant] = true
System.print(variants)
}
}
System.print()
}</syntaxhighlight>

{{out}}
<pre>
Using mit10000.txt:

[aim, ima, mai]
[arc, rca, car]
[asp, spa, pas]
[ate, tea, eat]
[ips, psi, sip]

Using unixdict.txt:

[apt, pta, tap]
[arc, rca, car]
[ate, tea, eat]
</pre>


=={{header|zkl}}==
<syntaxhighlight lang="zkl">// Limited to ASCII
// This is limited to the max items a Dictionary can hold
fcn teacut(wordFile){
   words:=File(wordFile).pump(Dictionary().add.fp1(True),"strip");
   seen :=Dictionary();
   foreach word in (words.keys){
      rots,w,sz := List(), word, word.len();
      if(sz>2 and word.unique().len()>2 and not seen.holds(word)){
         do(sz-1){
            w=String(w[-1],w[0,-1]);              // rotate one character
            if(not words.holds(w)) continue(2);   // not a word, skip these
            rots.append(w);       // I'd like to see all the rotations
         }
         println(rots.append(word).sort().concat(" "));
         rots.pump(seen.add.fp1(True));           // we've seen these rotations
      }
   }
}</syntaxhighlight>
<syntaxhighlight lang="zkl">println("\nunixdict:"); teacut("unixdict.txt");
println("\nmit_wordlist_10000:"); teacut("mit_wordlist_10000.txt");</syntaxhighlight>
{{out}}
<pre>
unixdict:
apt pta tap
ate eat tea
arc car rca

mit_wordlist_10000:
asp pas spa
ips psi sip
ate eat tea
aim ima mai
arc car rca
</pre>

{{omit from|6502 Assembly|unixdict.txt is much larger than the CPU's address space.}}
{{omit from|8080 Assembly|See 6502 Assembly.}}
{{omit from|Z80 Assembly|See 6502 Assembly.}}

Latest revision as of 11:40, 13 February 2024

Task
Teacup rim text
You are encouraged to solve this task according to the task description, using any language you may know.

On a set of coasters we have, there's a picture of a teacup.   On the rim of the teacup the word   TEA   appears a number of times separated by bullet characters   (•).

It occurred to me that if the bullet were removed and the words run together,   you could start at any letter and still end up with a meaningful three-letter word.

So start at the   T   and read   TEA.   Start at the   E   and read   EAT,   or start at the   A   and read   ATE.

That got me thinking that maybe there are other words that could be used rather than   TEA.   And that's just English.   What about Italian or Greek or ... um ... Telugu.

For English, we will use the unixdict (now) located at:   unixdict.txt.

(This will maintain continuity with other Rosetta Code tasks that also use it.)


Task

Search for a set of words that could be printed around the edge of a teacup.   The words in each set are to be of the same length, that length being greater than two (thus precluding   AH   and   HA,   for example.)

Having listed a set, for example   [ate tea eat],   refrain from displaying permutations of that set, e.g.:   [eat tea ate]   etc.

The words should also be made of more than one letter   (thus precluding   III   and   OOO   etc.)

The relationship between these words is (using ATE as an example) that the first letter of the first becomes the last letter of the second.   The first letter of the second becomes the last letter of the third.   So   ATE   becomes   TEA   and   TEA   becomes   EAT.

All of the possible permutations, using this particular permutation technique, must be words in the list.

The set you generate for   ATE   will never include the word   ETA   as that cannot be reached via the first-to-last movement method.

Display one line for each set of teacup rim words.
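To make the first-to-last movement concrete, here is a minimal Python sketch (illustrative only, not one of the language entries below) that generates the rotations of a word:

```python
def rotations(word):
    # Successive first-letter-to-last rotations, starting with the word itself.
    return [word[i:] + word[:i] for i in range(len(word))]

print(rotations("ate"))  # ['ate', 'tea', 'eat'] -- note that 'eta' is not reachable
```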





11l

F rotated(String s)
   R s[1..]‘’s[0]

V s = Set(File(‘unixdict.txt’).read().rtrim("\n").split("\n"))
L !s.empty
   L(=word) s // `=` is needed here because otherwise after `s.remove(word)` `word` becomes invalid
      s.remove(word)
      I word.len < 3
         L.break

      V w = word
      L 0 .< word.len - 1
         w = rotated(w)
         I w C s
            s.remove(w)
         E
            L.break
      L.was_no_break
         print(word, end' ‘’)
         w = word
         L 0 .< word.len - 1
            w = rotated(w)
            print(‘ -> ’w, end' ‘’)
         print()

      L.break
Output:
apt -> pta -> tap
arc -> rca -> car
ate -> tea -> eat

Arturo

wordset: map read.lines relative "unixdict.txt" => strip

rotateable?: function [w][
    loop 1..dec size w 'i [
        rotated: rotate w i
        if or? [rotated = w][not? contains? wordset rotated] -> 
            return false
    ]
    return true
]

results: new []
loop select wordset 'word [3 =< size word] 'word [
    if rotateable? word ->
        'results ++ @[ sort map 1..size word 'i [ rotate word i ]]
]

loop sort unique results 'result [
    root: first result
    print join.with: " -> " map 1..size root 'i [ rotate.left root i]
]
Output:
tea -> eat -> ate
rca -> car -> arc
pta -> tap -> apt

AutoHotkey

Teacup_rim_text(wList){
    oWord := [], oRes := [], n := 0
    for i, w in StrSplit(wList, "`n", "`r")
        if StrLen(w) >= 3
            oWord[StrLen(w), w] := true
    
    for l, obj in oWord
    {
        for w, bool in obj
        {
            loop % l
                if oWord[l, rotate(w)]
                {
                    oWord[l, w] := 0
                    if (A_Index = 1)
                        n++, oRes[n] := w 
                    if (A_Index < l)
                        oRes[n] := oRes[n] "," (w := rotate(w))
                }
            if (StrSplit(oRes[n], ",").Count() <> l)
                oRes.RemoveAt(n)
        }
    }
    return oRes
}

rotate(w){
    return SubStr(w, 2) . SubStr(w, 1, 1)
}

Examples:

FileRead, wList, % A_Desktop "\unixdict.txt"
result := ""
for i, v in Teacup_rim_text(wList)
	result .= v "`n"
MsgBox % result
return
Output:
apt,pta,tap
arc,rca,car
ate,tea,eat

AWK

# syntax: GAWK -f TEACUP_RIM_TEXT.AWK UNIXDICT.TXT
#
# sorting:
#   PROCINFO["sorted_in"] is used by GAWK
#   SORTTYPE is used by Thompson Automation's TAWK
#
{   for (i=1; i<=NF; i++) {
      arr[tolower($i)] = 0
    }
}
END {
    PROCINFO["sorted_in"] = "@ind_str_asc" ; SORTTYPE = 1
    for (i in arr) {
      leng = length(i)
      if (leng > 2) {
        delete tmp_arr
        words = str = i
        tmp_arr[i] = ""
        for (j=2; j<=leng; j++) {
          str = substr(str,2) substr(str,1,1)
          if (str in arr) {
            words = words " " str
            tmp_arr[str] = ""
          }
        }
        if (length(tmp_arr) == leng) {
          count = 0
          for (j in tmp_arr) {
            (arr[j] == 0) ? arr[j]++ : count++
          }
          if (count == 0) {
            printf("%s\n",words)
            circular++
          }
        }
      }
    }
    printf("%d words, %d circular\n",length(arr),circular)
    exit(0)
}
Output:

using UNIXDICT.TXT

apt pta tap
arc rca car
ate tea eat
25104 words, 3 circular

using MIT10000.TXT

aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
10000 words, 5 circular

BaCon

OPTION COLLAPSE TRUE

dict$ = LOAD$(DIRNAME$(ME$) & "/unixdict.txt")

FOR word$ IN dict$ STEP NL$
    IF LEN(word$) = 3 AND AMOUNT(UNIQ$(EXPLODE$(word$, 1))) = 3 THEN domain$ = APPEND$(domain$, 0, word$)
NEXT

FOR w1$ IN domain$
    w2$ = RIGHT$(w1$, 2) & LEFT$(w1$, 1)
    w3$ = RIGHT$(w2$, 2) & LEFT$(w2$, 1)
    IF TALLY(domain$, w2$) AND TALLY(domain$, w3$) AND NOT(TALLY(result$, w1$)) THEN
        result$ = APPEND$(result$, 0, w1$ & " " & w2$ & " " & w3$, NL$)
    ENDIF
NEXT

PRINT result$
PRINT "Total words: ", AMOUNT(dict$, NL$), ", and ", AMOUNT(result$, NL$), " are circular."
Output:

Using 'unixdict.txt':

apt pta tap
arc rca car
ate tea eat
Total words: 25104, and 3 are circular.

Using 'wordlist.10000':

aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
Total words: 10000, and 5 are circular.

C

Library: GLib
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <glib.h>

int string_compare(gconstpointer p1, gconstpointer p2) {
    const char* const* s1 = p1;
    const char* const* s2 = p2;
    return strcmp(*s1, *s2);
}

GPtrArray* load_dictionary(const char* file, GError** error_ptr) {
    GError* error = NULL;
    GIOChannel* channel = g_io_channel_new_file(file, "r", &error);
    if (channel == NULL) {
        g_propagate_error(error_ptr, error);
        return NULL;
    }
    GPtrArray* dict = g_ptr_array_new_full(1024, g_free);
    GString* line = g_string_sized_new(64);
    gsize term_pos;
    while (g_io_channel_read_line_string(channel, line, &term_pos,
                                         &error) == G_IO_STATUS_NORMAL) {
        char* word = g_strdup(line->str);
        word[term_pos] = '\0';
        g_ptr_array_add(dict, word);
    }
    g_string_free(line, TRUE);
    g_io_channel_unref(channel);
    if (error != NULL) {
        g_propagate_error(error_ptr, error);
        g_ptr_array_free(dict, TRUE);
        return NULL;
    }
    g_ptr_array_sort(dict, string_compare);
    return dict;
}

void rotate(char* str, size_t len) {
    char c = str[0];
    memmove(str, str + 1, len - 1);
    str[len - 1] = c;
}

char* dictionary_search(const GPtrArray* dictionary, const char* word) {
    char** result = bsearch(&word, dictionary->pdata, dictionary->len,
                            sizeof(char*), string_compare);
    return result != NULL ? *result : NULL;
}

void find_teacup_words(GPtrArray* dictionary) {
    GHashTable* found = g_hash_table_new(g_str_hash, g_str_equal);
    GPtrArray* teacup_words = g_ptr_array_new();
    GString* temp = g_string_sized_new(8);
    for (size_t i = 0, n = dictionary->len; i < n; ++i) {
        char* word = g_ptr_array_index(dictionary, i);
        size_t len = strlen(word);
        if (len < 3 || g_hash_table_contains(found, word))
            continue;
        g_ptr_array_set_size(teacup_words, 0);
        g_string_assign(temp, word);
        bool is_teacup_word = true;
        for (size_t i = 0; i < len - 1; ++i) {
            rotate(temp->str, len);
            char* w = dictionary_search(dictionary, temp->str);
            if (w == NULL) {
                is_teacup_word = false;
                break;
            }
            if (strcmp(word, w) != 0 && !g_ptr_array_find(teacup_words, w, NULL))
                g_ptr_array_add(teacup_words, w);
        }
        if (is_teacup_word && teacup_words->len > 0) {
            printf("%s", word);
            g_hash_table_add(found, word);
            for (size_t i = 0; i < teacup_words->len; ++i) {
                char* teacup_word = g_ptr_array_index(teacup_words, i);
                printf(" %s", teacup_word);
                g_hash_table_add(found, teacup_word);
            }
            printf("\n");
        }
    }
    g_string_free(temp, TRUE);
    g_ptr_array_free(teacup_words, TRUE);
    g_hash_table_destroy(found);
}

int main(int argc, char** argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s dictionary\n", argv[0]);
        return EXIT_FAILURE;
    }
    GError* error = NULL;
    GPtrArray* dictionary = load_dictionary(argv[1], &error);
    if (dictionary == NULL) {
        if (error != NULL) {
            fprintf(stderr, "Cannot load dictionary file '%s': %s\n",
                    argv[1], error->message);
            g_error_free(error);
        }
        return EXIT_FAILURE;
    }
    find_teacup_words(dictionary);
    g_ptr_array_free(dictionary, TRUE);
    return EXIT_SUCCESS;
}
Output:

With unixdict.txt:

apt pta tap
arc rca car
ate tea eat

With wordlist.10000:

aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip

C++

#include <algorithm>
#include <fstream>
#include <iostream>
#include <set>
#include <string>
#include <vector>

// filename is expected to contain one lowercase word per line
std::set<std::string> load_dictionary(const std::string& filename) {
    std::ifstream in(filename);
    if (!in)
        throw std::runtime_error("Cannot open file " + filename);
    std::set<std::string> words;
    std::string word;
    while (getline(in, word))
        words.insert(word);
    return words;
}

void find_teacup_words(const std::set<std::string>& words) {
    std::vector<std::string> teacup_words;
    std::set<std::string> found;
    for (auto w = words.begin(); w != words.end(); ++w) {
        std::string word = *w;
        size_t len = word.size();
        if (len < 3 || found.find(word) != found.end())
            continue;
        teacup_words.clear();
        teacup_words.push_back(word);
        for (size_t i = 0; i + 1 < len; ++i) {
            std::rotate(word.begin(), word.begin() + 1, word.end());
            if (word == *w || words.find(word) == words.end())
                break;
            teacup_words.push_back(word);
        }
        if (teacup_words.size() == len) {
            found.insert(teacup_words.begin(), teacup_words.end());
            std::cout << teacup_words[0];
            for (size_t i = 1; i < len; ++i)
                std::cout << ' ' << teacup_words[i];
            std::cout << '\n';
        }
    }
}

int main(int argc, char** argv) {
    if (argc != 2) {
        std::cerr << "usage: " << argv[0] << " dictionary\n";
        return EXIT_FAILURE;
    }
    try {
        find_teacup_words(load_dictionary(argv[1]));
    } catch (const std::exception& ex) {
        std::cerr << ex.what() << '\n';
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
Output:

With unixdict.txt:

apt pta tap
arc rca car
ate tea eat

With wordlist.10000:

aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip

F#

// Teacup rim text. Nigel Galloway: August 7th., 2019
let  N=System.IO.File.ReadAllLines("dict.txt")|>Array.filter(fun n->String.length n=3 && Seq.length(Seq.distinct n)>1)|>Set.ofArray
let fG z=Set.map(fun n->System.String(Array.ofSeq (Seq.permute(fun g->(g+z)%3)n))) N
Set.intersectMany [N;fG 1;fG 2]|>Seq.distinctBy(Seq.sort>>Array.ofSeq>>System.String)|>Seq.iter(printfn "%s")
Output:
aim
arc
asp
ate
ips

Factor

USING: combinators.short-circuit fry grouping hash-sets
http.client kernel math prettyprint sequences sequences.extras
sets sorting splitting ;

"https://www.mit.edu/~ecprice/wordlist.10000" http-get nip
"\n" split [ { [ length 3 < ] [ all-equal? ] } 1|| ] reject
[ [ all-rotations ] map ] [ >hash-set ] bi
'[ [ _ in? ] all? ] filter [ natural-sort ] map members .
Output:
{
    { "aim" "ima" "mai" }
    { "arc" "car" "rca" }
    { "asp" "pas" "spa" }
    { "ate" "eat" "tea" }
    { "ips" "psi" "sip" }
}

Go

package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "sort"
    "strings"
)

func check(err error) {
    if err != nil {
        log.Fatal(err)
    }
}

func readWords(fileName string) []string {
    file, err := os.Open(fileName)
    check(err)
    defer file.Close()
    var words []string
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        word := strings.ToLower(strings.TrimSpace(scanner.Text()))
        if len(word) >= 3 {
            words = append(words, word)
        }
    }
    check(scanner.Err())
    return words
}

func rotate(runes []rune) {
    first := runes[0]
    copy(runes, runes[1:])
    runes[len(runes)-1] = first
}

func main() {
    dicts := []string{"mit_10000.txt", "unixdict.txt"} // local copies
    for _, dict := range dicts {
        fmt.Printf("Using %s:\n\n", dict)
        words := readWords(dict)
        n := len(words)
        used := make(map[string]bool)
    outer:
        for _, word := range words {
            runes := []rune(word)
            variants := []string{word}
            for i := 0; i < len(runes)-1; i++ {
                rotate(runes)
                word2 := string(runes)
                if word == word2 || used[word2] {
                    continue outer
                }
                ix := sort.SearchStrings(words, word2)
                if ix == n || words[ix] != word2 {
                    continue outer
                }
                variants = append(variants, word2)
            }
            for _, variant := range variants {
                used[variant] = true
            }
            fmt.Println(variants)
        }
        fmt.Println()
    }
}
Output:
Using mit_10000.txt:

[aim ima mai]
[arc rca car]
[asp spa pas]
[ate tea eat]
[ips psi sip]

Using unixdict.txt:

[apt pta tap]
[arc rca car]
[ate tea eat]

Haskell

Using Data.Set

Circular words of more than 2 characters in a local copy of a word list.

import Data.List (groupBy, intercalate, sort, sortBy)
import qualified Data.Set as S
import Data.Ord (comparing)
import Data.Function (on)

main :: IO ()
main =
  readFile "mitWords.txt" >>= (putStrLn . showGroups . circularWords . lines)

circularWords :: [String] -> [String]
circularWords ws =
  let lexicon = S.fromList ws
  in filter (isCircular lexicon) ws

isCircular :: S.Set String -> String -> Bool
isCircular lex w = 2 < length w && all (`S.member` lex) (rotations w)

rotations :: [a] -> [[a]]
rotations = fmap <$> rotated <*> (enumFromTo 0 . pred . length)

rotated :: [a] -> Int -> [a]
rotated [] _ = []
rotated xs n = zipWith const (drop n (cycle xs)) xs

showGroups :: [String] -> String
showGroups xs =
  unlines $
  intercalate " -> " . fmap snd <$>
  filter
    ((1 <) . length)
    (groupBy (on (==) fst) (sortBy (comparing fst) (((,) =<< sort) <$> xs)))
Output:
arc -> car -> rca
ate -> eat -> tea
aim -> ima -> mai
asp -> pas -> spa
ips -> psi -> sip

Filtering anagrams

Or taking a different approach, we can avoid the use of Data.Set by obtaining the groups of anagrams (of more than two characters) in the lexicon, and filtering out a circular subset of these:

import Data.Function (on)
import Data.List (groupBy, intercalate, sort, sortOn)
import Data.Ord (comparing)

main :: IO ()
main =
  readFile "mitWords.txt"
    >>= ( putStrLn
            . unlines
            . fmap (intercalate " -> ")
            . (circularOnly =<<)
            . anagrams
            . lines
        )

anagrams :: [String] -> [[String]]
anagrams ws =
  let harvest group px
        | px = [fmap snd group]
        | otherwise = []
   in groupBy
        (on (==) fst)
        (sortOn fst (((,) =<< sort) <$> ws))
        >>= (harvest <*> ((> 2) . length))

circularOnly :: [String] -> [[String]]
circularOnly ws
  | (length h - 1) > length rs = []
  | otherwise = [h : rs]
  where
    h = head ws
    rs = filter (isRotation h) (tail ws)

isRotation :: String -> String -> Bool
isRotation xs ys =
  xs
    /= until
      ( (||)
          . (ys ==)
          <*> (xs ==)
      )
      rotated
      (rotated xs)

rotated :: [a] -> [a]
rotated [] = []
rotated (x : xs) = xs <> [x]
Output:
arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip

J

    >@{.@> (#~ (=&#>@{.)@> * 2 < #@>)(</.~ {.@/:~@(|."0 1~ i.@#)L:0)cutLF fread'unixdict.txt'
apt
arc
ate

In other words, group words by their canonical rotation (from all rotations: the earliest, alphabetically), select groups with at least three different words, where the word count matches the letter count, then extract the first word from each group.
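The grouping strategy just described (group by canonical rotation, keep groups with at least three distinct words whose count equals the letter count) can be sketched in Python for readers unfamiliar with J; the names `rotations`, `canonical` and `teacup_groups` are illustrative only, not part of the J entry:

```python
from collections import defaultdict

def rotations(w):
    # all cyclic rotations of w
    return [w[i:] + w[:i] for i in range(len(w))]

def canonical(w):
    # canonical rotation: the alphabetically earliest one
    return min(rotations(w))

def teacup_groups(words):
    # group words of length >= 3 by their canonical rotation
    groups = defaultdict(set)
    for w in words:
        if len(w) >= 3:
            groups[canonical(w)].add(w)
    # keep groups with at least three distinct words, where the word
    # count matches the letter count (i.e. every rotation is present)
    return sorted(sorted(g) for g in groups.values()
                  if len(g) >= 3 and len(g) == len(next(iter(g))))
```

Run over unixdict.txt this should yield the same three groups the J expression reports.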

Java

Translation of: C++
import java.io.*;
import java.util.*;

public class Teacup {
    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("usage: java Teacup dictionary");
            System.exit(1);
        }
        try {
            findTeacupWords(loadDictionary(args[0]));
        } catch (Exception ex) {
            System.err.println(ex.getMessage());
        }
    }

    // The file is expected to contain one lowercase word per line
    private static Set<String> loadDictionary(String fileName) throws IOException {
        Set<String> words = new TreeSet<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
            String word;
            while ((word = reader.readLine()) != null)
                words.add(word);
            return words;
        }
    }

    private static void findTeacupWords(Set<String> words) {
        List<String> teacupWords = new ArrayList<>();
        Set<String> found = new HashSet<>();
        for (String word : words) {
            int len = word.length();
            if (len < 3 || found.contains(word))
                continue;
            teacupWords.clear();
            teacupWords.add(word);
            char[] chars = word.toCharArray();
            for (int i = 0; i < len - 1; ++i) {
                String rotated = new String(rotate(chars));
                if (rotated.equals(word) || !words.contains(rotated))
                    break;
                teacupWords.add(rotated);
            }
            if (teacupWords.size() == len) {
                found.addAll(teacupWords);
                System.out.print(word);
                for (int i = 1; i < len; ++i)
                    System.out.print(" " + teacupWords.get(i));
                System.out.println();
            }
        }
    }

    private static char[] rotate(char[] ch) {
        char c = ch[0];
        System.arraycopy(ch, 1, ch, 0, ch.length - 1);
        ch[ch.length - 1] = c;
        return ch;
    }
}
Output:

With unixdict.txt:

apt pta tap
arc rca car
ate tea eat

With wordlist.10000:

aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip

JavaScript

Set() objects

Reading a local dictionary with the macOS JS for Automation library:

Works with: JXA
(() => {
    'use strict';

    // main :: IO ()
    const main = () =>
        showGroups(
            circularWords(
                // Local copy of:
                // https://www.mit.edu/~ecprice/wordlist.10000
                lines(readFile('~/mitWords.txt'))
            )
        );

    // circularWords :: [String] -> [String]
    const circularWords = ws =>
        ws.filter(isCircular(new Set(ws)), ws);

    // isCircular :: Set String -> String -> Bool
    const isCircular = lexicon => w => {
        const iLast = w.length - 1;
        return 1 < iLast && until(
            ([i, bln, s]) => iLast < i || !bln,
            ([i, bln, s]) => [1 + i, lexicon.has(s), rotated(s)],
            [0, true, rotated(w)]
        )[1];
    };

    // DISPLAY --------------------------------------------

    // showGroups :: [String] -> String
    const showGroups = xs =>
        unlines(map(
            gp => map(snd, gp).join(' -> '),
            groupBy(
                (a, b) => fst(a) === fst(b),
                sortBy(
                    comparing(fst),
                    map(x => Tuple(concat(sort(chars(x))), x),
                        xs
                    )
                )
            ).filter(gp => 1 < gp.length)
        ));


    // MAC OS JS FOR AUTOMATION ---------------------------

    // readFile :: FilePath -> IO String
    const readFile = fp => {
        const
            e = $(),
            uw = ObjC.unwrap,
            s = uw(
                $.NSString.stringWithContentsOfFileEncodingError(
                    $(fp)
                    .stringByStandardizingPath,
                    $.NSUTF8StringEncoding,
                    e
                )
            );
        return undefined !== s ? (
            s
        ) : uw(e.localizedDescription);
    };

    // GENERIC FUNCTIONS ----------------------------------

    // Tuple (,) :: a -> b -> (a, b)
    const Tuple = (a, b) => ({
        type: 'Tuple',
        '0': a,
        '1': b,
        length: 2
    });

    // chars :: String -> [Char]
    const chars = s => s.split('');

    // comparing :: (a -> b) -> (a -> a -> Ordering)
    const comparing = f =>
        (x, y) => {
            const
                a = f(x),
                b = f(y);
            return a < b ? -1 : (a > b ? 1 : 0);
        };

    // concat :: [[a]] -> [a]
    // concat :: [String] -> String
    const concat = xs =>
        0 < xs.length ? (() => {
            const unit = 'string' !== typeof xs[0] ? (
                []
            ) : '';
            return unit.concat.apply(unit, xs);
        })() : [];

    // fst :: (a, b) -> a
    const fst = tpl => tpl[0];

    // groupBy :: (a -> a -> Bool) -> [a] -> [[a]]
    const groupBy = (f, xs) => {
        const tpl = xs.slice(1)
            .reduce((a, x) => {
                const h = a[1].length > 0 ? a[1][0] : undefined;
                return (undefined !== h) && f(h, x) ? (
                    Tuple(a[0], a[1].concat([x]))
                ) : Tuple(a[0].concat([a[1]]), [x]);
            }, Tuple([], 0 < xs.length ? [xs[0]] : []));
        return tpl[0].concat([tpl[1]]);
    };

    // lines :: String -> [String]
    const lines = s => s.split(/[\r\n]/);

    // map :: (a -> b) -> [a] -> [b]
    const map = (f, xs) =>
        (Array.isArray(xs) ? (
            xs
        ) : xs.split('')).map(f);

    // rotated :: String -> String
    const rotated = xs =>
        xs.slice(1) + xs[0];

    // showLog :: a -> IO ()
    const showLog = (...args) =>
        console.log(
            args
            .map(JSON.stringify)
            .join(' -> ')
        );

    // snd :: (a, b) -> b
    const snd = tpl => tpl[1];

    // sort :: Ord a => [a] -> [a]
    const sort = xs => xs.slice()
        .sort((a, b) => a < b ? -1 : (a > b ? 1 : 0));

    // sortBy :: (a -> a -> Ordering) -> [a] -> [a]
    const sortBy = (f, xs) =>
        xs.slice()
        .sort(f);

    // unlines :: [String] -> String
    const unlines = xs => xs.join('\n');

    // until :: (a -> Bool) -> (a -> a) -> a -> a
    const until = (p, f, x) => {
        let v = x;
        while (!p(v)) v = f(v);
        return v;
    };

    // MAIN ---
    return main();
})();
Output:
arc -> car -> rca
ate -> eat -> tea
aim -> ima -> mai
asp -> pas -> spa
ips -> psi -> sip

Anagram filtering

Reading a local dictionary with the macOS JS for Automation library:

Works with: JXA
(() => {
    'use strict';

    // main :: IO ()
    const main = () =>
        anagrams(lines(readFile('~/mitWords.txt')))
        .flatMap(circularOnly)
        .map(xs => xs.join(' -> '))
        .join('\n')

    // anagrams :: [String] -> [[String]]
    const anagrams = ws =>
        groupBy(
            on(eq, fst),
            sortBy(
                comparing(fst),
                ws.map(w => Tuple(sort(chars(w)).join(''), w))
            )
        ).flatMap(
            gp => 2 < gp.length ? [
                gp.map(snd)
            ] : []
        )

    // circularOnly :: [String] -> [[String]]
    const circularOnly = ws => {
        const h = ws[0];
        return ws.length < h.length ? (
            []
        ) : (() => {
            const rs = rotations(h);
            return rs.every(r => ws.includes(r)) ? (
                [rs]
            ) : [];
        })();
    };

    // rotations :: String -> [String]
    const rotations = s =>
        takeIterate(s.length, rotated, s)

    // rotated :: [a] -> [a]
    const rotated = xs => xs.slice(1).concat(xs[0]);


    // GENERIC FUNCTIONS ----------------------------

    // Tuple (,) :: a -> b -> (a, b)
    const Tuple = (a, b) => ({
        type: 'Tuple',
        '0': a,
        '1': b,
        length: 2
    });

    // chars :: String -> [Char]
    const chars = s => s.split('');

    // comparing :: (a -> b) -> (a -> a -> Ordering)
    const comparing = f =>
        (x, y) => {
            const
                a = f(x),
                b = f(y);
            return a < b ? -1 : (a > b ? 1 : 0);
        };

    // eq (==) :: Eq a => a -> a -> Bool
    const eq = (a, b) => a === b

    // fst :: (a, b) -> a
    const fst = tpl => tpl[0];

    // groupBy :: (a -> a -> Bool) -> [a] -> [[a]]
    const groupBy = (f, xs) => {
        const tpl = xs.slice(1)
            .reduce((a, x) => {
                const h = a[1].length > 0 ? a[1][0] : undefined;
                return (undefined !== h) && f(h, x) ? (
                    Tuple(a[0], a[1].concat([x]))
                ) : Tuple(a[0].concat([a[1]]), [x]);
            }, Tuple([], 0 < xs.length ? [xs[0]] : []));
        return tpl[0].concat([tpl[1]]);
    };

    // lines :: String -> [String]
    const lines = s => s.split(/[\r\n]/);

    // mapAccumL :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
    const mapAccumL = (f, acc, xs) =>
        xs.reduce((a, x, i) => {
            const pair = f(a[0], x, i);
            return Tuple(pair[0], a[1].concat(pair[1]));
        }, Tuple(acc, []));

    // on :: (b -> b -> c) -> (a -> b) -> a -> a -> c
    const on = (f, g) => (a, b) => f(g(a), g(b));

    // readFile :: FilePath -> IO String
    const readFile = fp => {
        const
            e = $(),
            uw = ObjC.unwrap,
            s = uw(
                $.NSString.stringWithContentsOfFileEncodingError(
                    $(fp)
                    .stringByStandardizingPath,
                    $.NSUTF8StringEncoding,
                    e
                )
            );
        return undefined !== s ? (
            s
        ) : uw(e.localizedDescription);
    };

    // snd :: (a, b) -> b
    const snd = tpl => tpl[1];

    // sort :: Ord a => [a] -> [a]
    const sort = xs => xs.slice()
        .sort((a, b) => a < b ? -1 : (a > b ? 1 : 0));

    // sortBy :: (a -> a -> Ordering) -> [a] -> [a]
    const sortBy = (f, xs) =>
        xs.slice()
        .sort(f);

    // takeIterate :: Int -> (a -> a) -> a -> [a]
    const takeIterate = (n, f, x) =>
        snd(mapAccumL((a, _, i) => {
            const v = 0 !== i ? f(a) : x;
            return [v, v];
        }, x, Array.from({
            length: n
        })));

    // MAIN ---
    return main();
})();
Output:
arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip

jq

Works with: jq

Works with gojq, the Go implementation of jq (*)

(*) To run the program below using gojq, change `keys_unsorted` to `keys`; this slows it down a lot.

# Output: an array of the words when read around the rim
def read_teacup:
  . as $in
  | [range(0; length) | $in[.:] + $in[:.] ];

# Boolean
def is_teacup_word($dict):
  . as $in
  | all( range(1; length); . as $i | $dict[ $in[$i:] + $in[:$i] ]) ;

# Output: a stream of the eligible teacup words
def teacup_words:
  def same_letters:
     explode
     | .[0] as $first
     | all( .[1:][]; . == $first);

  # Only consider one word in a teacup cycle
  def consider: explode | .[0] == min;

 # Create the dictionary
  reduce (inputs
           | select(length>2 and (same_letters|not))) as $w ( {};
     .[$w]=true )
  | . as $dict
  | keys[]
  | select(consider and is_teacup_word($dict)) ;

# The task:
teacup_words
| read_teacup
Output:

Invocation example: jq -nRc -f teacup-rim.jq unixdict.txt

["apt","pta","tap"]
["arc","rca","car"]
["ate","tea","eat"]


Julia

Using the MIT 10000 word list, and excluding words of fewer than three letters to reduce output length.

using HTTP
 
rotate(s, n) = String(circshift(Vector{UInt8}(s), n))
 
isliketea(w, d) = (n = length(w); n > 2 && any(c -> c != w[1], w) && 
    all(i -> haskey(d, rotate(w, i)), 1:n-1))
 
function getteawords(listuri)
    req = HTTP.request("GET", listuri)
    wdict = Dict{String, Int}((lowercase(string(x)), 1) for x in split(String(req.body), r"\s+"))
    sort(unique([sort([rotate(word, i) for i in 1:length(word)]) 
        for word in collect(keys(wdict)) if isliketea(word, wdict)]))
end
 
foreach(println, getteawords("https://www.mit.edu/~ecprice/wordlist.10000"))
Output:
["aim", "ima", "mai"]
["arc", "car", "rca"]
["asp", "pas", "spa"]
["ate", "eat", "tea"]
["ips", "psi", "sip"]

Lychen

Lychen is V8 JavaScript wrapped in C#, exposing C# into JavaScript.

Unlike the Julia example, which uses https://www.mit.edu/~ecprice/wordlist.10000, the code below downloads http://wiki.puzzlers.org/pub/wordlists/unixdict.txt, and the output reflects that list.

const wc = new CS.System.Net.WebClient();
const lines = wc.DownloadString("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt");
const words = lines.split(/\n/g);
const collection = {};
words.filter(word => word.length > 2).forEach(word => {
  let allok = true;
  let newword = word;
  for (let i = 0; i < word.length - 1; i++) {
    newword = newword.substr(1) + newword.substr(0, 1);
    if (!words.includes(newword)) {
      allok = false;
      break;
    }
  }
  if (allok) {
    const key = word.split("").sort().join("");
    if (!collection[key]) {
      collection[key] = [word];
    } else {
      if (!collection[key].includes(word)) {
        collection[key].push(word);
      }
    }
  }
});
Object.keys(collection)
.filter(key => collection[key].length > 1)
.forEach(key => console.log("%s", collection[key].join(", ")));
Output:
apt, pta, tap
arc, car, rca
ate, eat, tea

Mathematica/Wolfram Language

ClearAll[Teacuppable]
TeacuppableHelper[set_List] := Module[{f, s},
  f = First[set];
  s = StringRotateLeft[f, #] & /@ Range[Length[set]];
  Sort[s] == Sort[set]
  ]
Teacuppable[set_List] := Module[{ss, l},
  l = StringLength[First[set]];
  ss = Subsets[set, {l}];
  Select[ss, TeacuppableHelper]
  ]
s = Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt", "String"];
s //= StringSplit[#, "\n"] &;
s //= Select[StringLength /* GreaterThan[2]];
s //= Map[ToLowerCase];
s //= Map[{#, Sort[Characters[#]]} &];
s //= GatherBy[#, Last] &;
s //= Select[Length /* GreaterEqualThan[2]];
s = s[[All, All, 1]];
s //= Select[StringLength[First[#]] <= Length[#] &];
Flatten[Teacuppable /@ s, 1]
Output:
{{"apt", "pta", "tap"}, {"arc", "car", "rca"}, {"ate", "eat", "tea"}}

Nim

import sequtils, sets, sugar

let words = collect(initHashSet, for word in "unixdict.txt".lines: {word})

proc rotate(s: var string) =
  let first = s[0]
  for i in 1..s.high: s[i - 1] = s[i]
  s[^1] = first

var result: seq[string]
for word in "unixdict.txt".lines:
  if word.len >= 3:
    block checkWord:
      var w = word
      for _ in 1..w.len:
        w.rotate()
        if w notin words or w in result:
          # Not present in dictionary or already encountered.
          break checkWord
      if word.anyIt(it != word[0]):
        # More than one distinct letter.
        result.add word

for word in result:
  var w = word
  stdout.write w
  for _ in 2..w.len:
    w.rotate()
    stdout.write " → ", w
  echo()
Output:
apt → pta → tap
arc → rca → car
ate → tea → eat

Perl

Translation of: Raku
use strict;
use warnings;
use feature 'say';
use List::Util qw(uniqstr any);

my(%words,@teacups,%seen);

open my $fh, '<', 'ref/wordlist.10000';
while (<$fh>) {
    chomp(my $w = uc $_);
    next if length $w < 3;
    push @{$words{join '', sort split //, $w}}, $w;}

for my $these (values %words) {
    next if @$these < 3;
    MAYBE: for (@$these) {
        my $maybe = $_;
        next if $seen{$_};
        my @print;
        for my $i (0 .. length $maybe) {
            if (any { $maybe eq $_ } @$these) {
                push @print, $maybe;
                $maybe = substr($maybe,1) . substr($maybe,0,1)
            } else {
                @print = () and next MAYBE
            }
        }
        if (@print) {
            push @teacups, [@print];
            $seen{$_}++ for @print;
        }
    }
}

say join ', ', uniqstr @$_ for sort @teacups;
Output:
ARC, RCA, CAR
ATE, TEA, EAT
AIM, IMA, MAI
ASP, SPA, PAS
IPS, PSI, SIP

Phix

Filters anagram lists

procedure filter_set(sequence anagrams)
    -- anagrams is a (small) set of words that are all anagrams of each other
    --  for example: {"angel","angle","galen","glean","lange"}
    -- print any set(s) for which every rotation is also present (marking as
    -- you go to prevent the same set appearing with each word being first)
    sequence used = repeat(false,length(anagrams))
    for i=1 to length(anagrams) do
        if not used[i] then
            used[i] = true
            string word = anagrams[i]
            sequence res = {word}
            for r=2 to length(word) do
                word = word[2..$]&word[1]
                integer k = find(word,anagrams)
                if k=0 then res = {} exit end if
                if not find(word,res) then
                    res = append(res,word)
                end if
                used[k] = true
            end for
            if length(res) then ?res end if
        end if
    end for
end procedure
 
procedure teacup(string filename, integer minlen=3, bool allow_mono=false)
    sequence letters,       -- a sorted word, eg "ate" -> "aet".
             words = {},    -- in eg {{"aet","ate"},...} form
             anagrams = {}, -- a set with same letters
             last = ""      -- (for building such sets)
    object word
 
    printf(1,"using %s",filename)
    integer fn = open(filename,"r")
    if fn=-1 then crash(filename&" not found") end if
    while 1 do
        word = lower(trim(gets(fn)))
        if atom(word) then exit end if
        if length(word)>=minlen then
            letters = sort(word)
            words = append(words, {letters, word})
        end if
    end while
    close(fn)
    printf(1,", %d words read\n",length(words))
    if length(words)!=0 then
        words = sort(words) -- group by anagram
        for i=1 to length(words) do
            {letters,word} = words[i]
            if letters=last then
                anagrams = append(anagrams,word)
            else
                if allow_mono or length(anagrams)>=length(last) then
                    filter_set(anagrams) 
                end if
                last = letters
                anagrams = {word}
            end if
        end for
        if allow_mono or length(anagrams)>=length(last) then
            filter_set(anagrams) 
        end if
    end if
end procedure
 
teacup(join_path({"demo","unixdict.txt"}))
-- These match output from other entries:
--teacup(join_path({"demo","unixdict.txt"}),allow_mono:=true)
--teacup(join_path({"demo","rosetta","mit.wordlist.10000.txt"}))
--teacup(join_path({"demo","rosetta","words.txt"}),4,true)
-- Note that allow_mono is needed to display eg {"agag","gaga"}
Output:
using demo\unixdict.txt, 24948 words read
{"arc","rca","car"}
{"ate","tea","eat"}
{"apt","pta","tap"}

PicoLisp

(de rotw (W)
   (let W (chop W)
      (unless (or (apply = W) (not (cddr W)))
         (make
            (do (length W)
               (link (pack (copy W)))
               (rot W) ) ) ) ) )
(off D)
(put 'D 'v (cons))
(mapc
   '((W)
      (idx 'D (cons (hash W) W) T) )
   (setq Words
      (make (in "wordlist.10000" (while (line T) (link @)))) ) )
(mapc
   println
   (extract
      '((W)
         (let? Lst (rotw W)
            (when
               (and
                  (fully
                     '((L) (idx 'D (cons (hash L) L)))
                     Lst )
                  (not
                     (member (car Lst) (car (get 'D 'v))) ) )
               (mapc
                  '((L) (push (get 'D 'v) L))
                  Lst )
               Lst ) ) )
      Words ) )
Output:
("aim" "mai" "ima")
("arc" "car" "rca")
("asp" "pas" "spa")
("ate" "eat" "tea")
("ips" "sip" "psi")

PureBasic

DataSection
  dname:
  Data.s "./Data/unixdict.txt"
  Data.s "./Data/wordlist.10000.txt"
  Data.s ""  
EndDataSection

EnableExplicit
Dim c.s{1}(2)  
Define.s txt, bset, res, dn
Define.i i,q, cw
Restore dname : Read.s dn
While OpenConsole() And ReadFile(0,dn)
  While Not Eof(0)
    cw+1
    txt=ReadString(0)
    If Len(txt)=3 : bset+txt+";" : EndIf
  Wend
  CloseFile(0)  
  For i=1 To CountString(bset,";")
    PokeS(c(),StringField(bset,i,";"))    
    If FindString(res,c(0)+c(1)+c(2)) : Continue : EndIf    
    If c(0)=c(1) Or c(1)=c(2) Or c(0)=c(2) : Continue : EndIf    
    If FindString(bset,c(1)+c(2)+c(0)) And FindString(bset,c(2)+c(0)+c(1))
      res+c(0)+c(1)+c(2)+~"\t"+c(1)+c(2)+c(0)+~"\t"+c(2)+c(0)+c(1)+~"\n"
    EndIf       
  Next
  PrintN(res+Str(cw)+" words, "+Str(CountString(res,~"\n"))+" circular") : Input()
  bset="" : res="" : cw=0
  Read.s dn
Wend
Output:
apt	pta	tap
arc	rca	car
ate	tea	eat
25104 words, 3 circular

aim	ima	mai
arc	rca	car
asp	spa	pas
ate	tea	eat
ips	psi	sip
10000 words, 5 circular

Python

Functional

Composing generic functions, and considering only anagram groups.

'''Teacup rim text'''

from itertools import chain, groupby
from os.path import expanduser
from functools import reduce


# main :: IO ()
def main():
    '''Circular anagram groups, of more than one word,
       and containing words of length > 2, found in:
       https://www.mit.edu/~ecprice/wordlist.10000
    '''
    print('\n'.join(
        concatMap(circularGroup)(
            anagrams(3)(
                # Reading from a local copy.
                lines(readFile('~/mitWords.txt'))
            )
        )
    ))


# anagrams :: Int -> [String] -> [[String]]
def anagrams(n):
    '''Groups of anagrams, of minimum group size n,
       found in the given word list.
    '''
    def go(ws):
        def f(xs):
            return [
                [snd(x) for x in xs]
            ] if n <= len(xs) >= len(xs[0][0]) else []
        return concatMap(f)(groupBy(fst)(sorted(
            [(''.join(sorted(w)), w) for w in ws],
            key=fst
        )))
    return go


# circularGroup :: [String] -> [String]
def circularGroup(ws):
    '''Either an empty list, or a list containing
       a string showing any circular subset found in ws.
    '''
    lex = set(ws)
    iLast = len(ws) - 1
    # If the set contains one word that is circular,
    # then it must contain all of them.
    (i, blnCircular) = until(
        lambda tpl: tpl[1] or (tpl[0] > iLast)
    )(
        lambda tpl: (1 + tpl[0], isCircular(lex)(ws[tpl[0]]))
    )(
        (0, False)
    )
    return [' -> '.join(allRotations(ws[i]))] if blnCircular else []


# isCircular :: Set String -> String -> Bool
def isCircular(lexicon):
    '''True if all of a word's rotations
       are found in the given lexicon.
    '''
    def go(w):
        def f(tpl):
            (i, _, x) = tpl
            return (1 + i, x in lexicon, rotated(x))

        iLast = len(w) - 1
        return until(
            lambda tpl: iLast < tpl[0] or (not tpl[1])
        )(f)(
            (0, True, rotated(w))
        )[1]
    return go


# allRotations :: String -> [String]
def allRotations(w):
    '''All rotations of the string w.'''
    return takeIterate(len(w) - 1)(
        rotated
    )(w)


# GENERIC -------------------------------------------------

# concatMap :: (a -> [b]) -> [a] -> [b]
def concatMap(f):
    '''A concatenated list over which a function has been mapped.
       The list monad can be derived by using a function f which
       wraps its output in a list,
       (using an empty list to represent computational failure).
    '''
    def go(xs):
        return chain.from_iterable(map(f, xs))
    return go


# fst :: (a, b) -> a
def fst(tpl):
    '''First member of a pair.'''
    return tpl[0]


# groupBy :: (a -> b) -> [a] -> [[a]]
def groupBy(f):
    '''The elements of xs grouped,
       preserving order, by equality
       in terms of the key function f.
    '''
    def go(xs):
        return [
            list(x[1]) for x in groupby(xs, key=f)
        ]
    return go


# lines :: String -> [String]
def lines(s):
    '''A list of strings,
       (containing no newline characters)
       derived from a single new-line delimited string.
    '''
    return s.splitlines()


# mapAccumL :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
def mapAccumL(f):
    '''A tuple of an accumulation and a list derived by a
       combined map and fold,
       with accumulation from left to right.
    '''
    def go(a, x):
        tpl = f(a[0], x)
        return (tpl[0], a[1] + [tpl[1]])
    return lambda acc: lambda xs: (
        reduce(go, xs, (acc, []))
    )


# readFile :: FilePath -> IO String
def readFile(fp):
    '''The contents of any file at the path
       derived by expanding any ~ in fp.
    '''
    with open(expanduser(fp), 'r', encoding='utf-8') as f:
        return f.read()


# rotated :: String -> String
def rotated(s):
    '''A string rotated 1 character to the right.'''
    return s[1:] + s[0]


# snd :: (a, b) -> b
def snd(tpl):
    '''Second member of a pair.'''
    return tpl[1]


# takeIterate :: Int -> (a -> a) -> a -> [a]
def takeIterate(n):
    '''Each value of n iterations of f
       over a start value of x.
    '''
    def go(f):
        def g(x):
            def h(a, i):
                v = f(a) if i else x
                return (v, v)
            return mapAccumL(h)(x)(
                range(0, 1 + n)
            )[1]
        return g
    return go


# until :: (a -> Bool) -> (a -> a) -> a -> a
def until(p):
    '''The result of repeatedly applying f until p holds.
       The initial seed value is x.
    '''
    def go(f):
        def g(x):
            v = x
            while not p(v):
                v = f(v)
            return v
        return g
    return go


# MAIN ---
if __name__ == '__main__':
    main()
Output:
arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip

Raku

(formerly Perl 6)

Works with: Rakudo version 2019.07.1

There doesn't seem to be any restriction that the word needs to consist only of lowercase letters, so words of any case are included. Since the example code specifically shows the example words (TEA, EAT, ATE) in uppercase, I elected to uppercase the found words.

As the specs keep changing, this version will accept ANY text file as its dictionary and accepts parameters to configure the minimum number of characters in a word to consider and whether to allow mono-character words.

Defaults to unixdict.txt, minimum 3 characters and mono-character 'words' disallowed. Feed a file name to use a different word list, an integer to --min-chars and/or a truthy value to --mono to allow mono-chars.

my %*SUB-MAIN-OPTS = :named-anywhere;

unit sub MAIN ( $dict = 'unixdict.txt', :$min-chars = 3, :$mono = False );

my %words;
$dict.IO.slurp.words.map: { .chars < $min-chars ?? (next) !! %words{.uc.comb.sort.join}.push: .uc };

my @teacups;
my %seen;

for %words.values -> @these {
    next if !$mono && @these < 2;
    MAYBE: for @these {
        my $maybe = $_;
        next if %seen{$_};
        my @print;
        for ^$maybe.chars {
            if $maybe ∈ @these {
                @print.push: $maybe;
                $maybe = $maybe.comb.list.rotate.join;
            } else {
                @print = ();
                next MAYBE
            }
        }
        if @print.elems {
            @teacups.push: @print;
            %seen{$_}++ for @print;
        }
    }
}

say .unique.join(", ") for sort @teacups;
Defaults:

Command line: raku teacup.p6

APT, PTA, TAP
ARC, RCA, CAR
ATE, TEA, EAT
Allow mono-chars:

Command line: raku teacup.p6 --mono=1

AAA
APT, PTA, TAP
ARC, RCA, CAR
ATE, TEA, EAT
III
Using a larger dictionary:

words.txt file from https://github.com/dwyl/english-words

Command line: raku teacup.p6 words.txt --min-chars=4 --mono=Allow

AAAA
AAAAAA
ADAD, DADA
ADAR, DARA, ARAD, RADA
AGAG, GAGA
ALIT, LITA, ITAL, TALI
AMAN, MANA, ANAM, NAMA
AMAR, MARA, ARAM, RAMA
AMEL, MELA, ELAM, LAME
AMEN, MENA, ENAM, NAME
AMOR, MORA, ORAM, RAMO
ANAN, NANA
ANIL, NILA, ILAN, LANI
ARAR, RARA
ARAS, RASA, ASAR, SARA
ARIS, RISA, ISAR, SARI
ASEL, SELA, ELAS, LASE
ASER, SERA, ERAS, RASE
DENI, ENID, NIDE, IDEN
DOLI, OLID, LIDO, IDOL
EGOR, GORE, OREG, REGO
ENOL, NOLE, OLEN, LENO
ESOP, SOPE, OPES, PESO
ISIS, SISI
MMMM
MORO, OROM, ROMO, OMOR
OOOO

REXX

All words that contained non-letter (Latin) characters (periods, decimal digits, minus signs, underbars, or embedded blanks) weren't considered as candidates for circular words.

Duplicated words (such as   sop   and   SOP)   are ignored   (just the 2nd and subsequent duplicated words are deleted).

All words in the dictionary are treated as caseless.

The dictionary wasn't assumed to be sorted in any way.
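The word-admission rules just described (length of at least L, letters only, caseless, first occurrence wins) amount to a simple filter. A minimal Python sketch for comparison; the name `admissible` is illustrative, and `str.isalpha` stands in as an approximation of REXX's `datatype(z, 'U')` test:

```python
def admissible(raw_words, min_len=3):
    """Caseless, letters-only words of at least min_len characters;
    later duplicates are dropped, the first occurrence is kept."""
    seen = set()
    kept = []
    for w in raw_words:
        z = w.strip().upper()
        if len(z) < min_len or not z.isalpha() or z in seen:
            continue
        seen.add(z)
        kept.append(z)
    return kept
```

For example, `admissible(["sop", "SOP", "a-b", "tea", "it"])` keeps only `["SOP", "TEA"]`.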

/*REXX pgm finds circular words (length>2),  using a dictionary,  suppress permutations.*/
parse arg iFID L .                               /*obtain optional arguments from the CL*/
if iFID==''|iFID==","  then iFID= 'wordlist.10k' /*Not specified?  Then use the default.*/
if    L==''|   L==","  then    L= 3              /* "      "         "   "   "     "    */
#= 0                                             /*number of words in dictionary, Len>L.*/
@.=                                              /*stemmed array of non─duplicated words*/
       do r=0  while lines(iFID) \== 0           /*read all lines (words) in dictionary.*/
       parse upper value  linein(iFID)  with z . /*obtain a word from the dictionary.   */
       if length(z)<L | @.z\==''  then iterate   /*length must be  L  or more,  no dups.*/
       if \datatype(z, 'U')       then iterate   /*Word contains non-letters?  Then skip*/
       @.z = z                                   /*assign a word from the dictionary.   */
       #= # + 1;     $.#= z                      /*bump word count; append word to list.*/
       end   /*r*/                               /* [↑]  dictionary need not be sorted. */
cw= 0                                            /*the number of circular words (so far)*/
say "There're "    r    ' entries in the dictionary (of all types):  '      iFID
say "There're "    #    ' words in the dictionary of at least length '      L
say
       do j=1  for #;      x= $.j;      y= x     /*obtain the  Jth  word in the list.   */
       if x==''  then iterate                    /*if a null, don't show variants.      */
       yy= y                                     /*the start of a list of the variants. */
                     do k=1  for length(x)-1     /*"circulate" the letters in the word. */
                     y= substr(y, 2)left(y, 1)   /*add the left letter to the right end.*/
                     if @.y==''  then iterate j  /*if not a word,  then skip this word. */
                     yy= yy','   y               /*append to the list of the variants.  */
                     @.y=                        /*nullify word to suppress permutations*/
                     end   /*k*/                 
       cw= cw + 1                                /*bump counter of circular words found.*/
       say 'circular word: '     yy              /*display a circular word and variants.*/
       end   /*j*/
say
say cw     ' circular words were found.'         /*stick a fork in it,  we're all done. */
output   when using the default inputs:
There're  10000  entries in the dictionary (of all types):   wordlist.10k
There're  9578  words in the dictionary of at least length  3

circular word:  AIM, IMA, MAI
circular word:  ARC, RCA, CAR
circular word:  ASP, SPA, PAS
circular word:  ATE, TEA, EAT
circular word:  IPS, PSI, SIP

5  circular words were found.

Ruby

"woordenlijst.txt" is a Dutch wordlist. It has 413125 words > 2 chars and takes about two minutes.

lists = ["unixdict.txt", "wordlist.10000", "woordenlijst.txt"]

lists.each do |list|
  words = open(list).readlines( chomp: true).reject{|w| w.size < 3 }
  grouped_by_size = words.group_by(&:size)
  tea_words = words.filter_map do |word|
    chars = word.chars
    next unless chars.none?{|c| c < chars.first }
    next if chars.uniq.size == 1
    rotations = word.size.times.map {|i| chars.rotate(i).join }
    rotations if rotations.all?{|rot| grouped_by_size[rot.size].include? rot }
  end
  puts "", list + ":"
  tea_words.uniq(&:to_set).each{|ar| puts ar.join(", ") }
end
Output:
unixdict.txt:
apt, pta, tap
arc, rca, car
ate, tea, eat

wordlist.10000:
aim, ima, mai
arc, rca, car
asp, spa, pas
ate, tea, eat
ips, psi, sip

woordenlijst.txt:
ast, sta, tas
een, ene, nee
eer, ere, ree

Rust

use std::collections::BTreeSet;
use std::collections::HashSet;
use std::fs::File;
use std::io::{self, BufRead};
use std::iter::FromIterator;

fn load_dictionary(filename: &str) -> std::io::Result<BTreeSet<String>> {
    let file = File::open(filename)?;
    let mut dict = BTreeSet::new();
    for line in io::BufReader::new(file).lines() {
        let word = line?;
        dict.insert(word);
    }
    Ok(dict)
}

fn find_teacup_words(dict: &BTreeSet<String>) {
    let mut teacup_words: Vec<&String> = Vec::new();
    let mut found: HashSet<&String> = HashSet::new();
    for word in dict {
        let len = word.len();
        if len < 3 || found.contains(word) {
            continue;
        }
        teacup_words.clear();
        let mut is_teacup_word = true;
        let mut chars: Vec<char> = word.chars().collect();
        for _ in 1..len {
            chars.rotate_left(1);
            if let Some(w) = dict.get(&String::from_iter(&chars)) {
                if !w.eq(word) && !teacup_words.contains(&w) {
                    teacup_words.push(w);
                }
            } else {
                is_teacup_word = false;
                break;
            }
        }
        if !is_teacup_word || teacup_words.is_empty() {
            continue;
        }
        print!("{}", word);
        found.insert(word);
        for w in &teacup_words {
            found.insert(w);
            print!(" {}", w);
        }
        println!();
    }
}

fn main() {
    let args: Vec<String> = std::env::args().collect();
    if args.len() != 2 {
        eprintln!("Usage: teacup dictionary");
        std::process::exit(1);
    }
    let dict = load_dictionary(&args[1]);
    match dict {
        Ok(dict) => find_teacup_words(&dict),
        Err(error) => eprintln!("Cannot open file {}: {}", &args[1], error),
    }
}
Output:

With unixdict.txt:

apt pta tap
arc rca car
ate tea eat

With wordlist.10000:

aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip

Swift

import Foundation

func loadDictionary(_ path: String) throws -> Set<String> {
    let contents = try String(contentsOfFile: path, encoding: String.Encoding.ascii)
    return Set<String>(contents.components(separatedBy: "\n").filter{!$0.isEmpty})
}

func rotate<T>(_ array: inout [T]) {
    guard array.count > 1 else {
        return
    }
    let first = array[0]
    array.replaceSubrange(0..<array.count-1, with: array[1...])
    array[array.count - 1] = first
}

func findTeacupWords(_ dictionary: Set<String>) {
    var teacupWords: [String] = []
    var found = Set<String>()
    for word in dictionary {
        if word.count < 3 || found.contains(word) {
            continue
        }
        teacupWords.removeAll()
        var isTeacupWord = true
        var chars = Array(word)
        for _ in 1..<word.count {
            rotate(&chars)
            let w = String(chars)
            if (!dictionary.contains(w)) {
                isTeacupWord = false
                break
            }
            if w != word && !teacupWords.contains(w) {
                teacupWords.append(w)
            }
        }
        if !isTeacupWord || teacupWords.isEmpty {
            continue
        }
        print(word, terminator: "")
        found.insert(word)
        for w in teacupWords {
            found.insert(w)
            print(" \(w)", terminator: "")
        }
        print()
    }
}

do {
    let dictionary = try loadDictionary("unixdict.txt")
    findTeacupWords(dictionary)
} catch {
    print(error)
}
Output:
car arc rca
eat ate tea
pta tap apt

Wren

Translation of: Go
Library: Wren-str
Library: Wren-sort
import "io" for File
import "./str" for Str
import "./sort" for Find

var readWords = Fn.new { |fileName|
    var dict = File.read(fileName).split("\n")
    return dict.where { |w| w.count >= 3 }.toList
}

var dicts = ["mit10000.txt", "unixdict.txt"]
for (dict in dicts) {
    System.print("Using %(dict):\n")
    var words = readWords.call(dict)
    var n = words.count
    var used = {}
    for (word in words) {
        var outer = false
        var variants = [word]
        var word2 = word
        for (i in 0...word.count-1) {
            word2 = Str.lshift(word2)
            if (word == word2 || used[word2]) {
                outer = true
                break
            }
            var ix = Find.first(words, word2)
            if (ix == n || words[ix] != word2) {
                outer = true
                break
            }
            variants.add(word2)
        }
        if (!outer) {
            for (variant in variants) used[variant] = true
            System.print(variants)
        }
    }
    System.print()
}
Output:
Using mit10000.txt:

[aim, ima, mai]
[arc, rca, car]
[asp, spa, pas]
[ate, tea, eat]
[ips, psi, sip]

Using unixdict.txt:

[apt, pta, tap]
[arc, rca, car]
[ate, tea, eat]

zkl

// Limited to ASCII
// This is limited to the max items a Dictionary can hold
fcn teacut(wordFile){
   words:=File(wordFile).pump(Dictionary().add.fp1(True),"strip");
   seen :=Dictionary();
   foreach word in (words.keys){
      rots,w,sz := List(), word, word.len();
      if(sz>2 and word.unique().len()>2 and not seen.holds(word)){
	 do(sz-1){ 
	    w=String(w[-1],w[0,-1]);	// rotate right by one character
	    if(not words.holds(w)) continue(2);	// not a word, skip these
	    rots.append(w); 		// I'd like to see all the rotations
	 }
	 println(rots.append(word).sort().concat(" ")); 
	 rots.pump(seen.add.fp1(True));	// we've seen these rotations
      }
   }
}
println("\nunixdict:");           teacut("unixdict.txt");
println("\nmit_wordlist_10000:"); teacut("mit_wordlist_10000.txt");
Output:
unixdict:
apt pta tap
ate eat tea
arc car rca

mit_wordlist_10000:
asp pas spa
ips psi sip
ate eat tea
aim ima mai
arc car rca