
# Teacup rim text


On a set of coasters we have, there's a picture of a teacup.   On the rim of the teacup the word   TEA   appears a number of times separated by bullet characters   (•).

It occurred to me that if the bullet were removed and the words run together,   you could start at any letter and still end up with a meaningful three-letter word.

So start at the   T   and read   TEA.   Start at the   E   and read   EAT,   or start at the   A   and read   ATE.

That got me thinking that maybe there are other words that could be used rather than   TEA.   And that's just English.   What about Italian or Greek or ... um ... Telugu.

For English, we will use the unixdict (now) located at:   unixdict.txt.

(This will maintain continuity with other Rosetta Code tasks that also use it.)

Search for a set of words that could be printed around the edge of a teacup.   The words in each set are to be of the same length, that length being greater than two (thus precluding   AH   and   HA,   for example.)

Having listed a set, for example   [ate tea eat],   refrain from displaying permutations of that set, e.g.:   [eat tea ate]   etc.

The words should also be made of more than one distinct letter   (thus precluding   III   and   OOO   etc.)

The relationship between these words is (using ATE as an example) that the first letter of the first becomes the last letter of the second.   The first letter of the second becomes the last letter of the third.   So   ATE   becomes   TEA   and   TEA   becomes   EAT.

All of the possible permutations, using this particular permutation technique, must be words in the list.

The set you generate for   ATE   will never include the word   ETA   as that cannot be reached via the first-to-last movement method.

Display one line for each set of teacup rim words.
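The whole procedure can be sketched in a few lines of Python (a hedged illustration only, not one of the task entries below; the inline `sample` list stands in for unixdict.txt):

```
def rotations(word):
    """All left-rotations of word, starting with word itself."""
    return [word[i:] + word[:i] for i in range(len(word))]

def teacup_sets(words):
    """Yield one representative rotation list per circular set."""
    lexicon = set(words)
    seen = set()
    for word in sorted(lexicon):
        # Skip short words, single-letter words, and already-reported sets.
        if len(word) < 3 or len(set(word)) < 2 or word in seen:
            continue
        rots = rotations(word)
        # Every rotation must itself be a dictionary word.
        if all(r in lexicon for r in rots):
            seen.update(rots)
            yield rots

sample = ["ate", "tea", "eat", "eta", "apt", "pta", "tap", "aah", "iii"]
for group in teacup_sets(sample):
    print(" -> ".join(group))
```

Note that "eta" is rejected even though it is an anagram of "ate": its rotation "tae" is not in the list, which is exactly the first-to-last restriction described above.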

## 11l

```
F rotated(String s)
   R s[1..]‘’s[0]

V s = Set(File(‘unixdict.txt’).read().rtrim("\n").split("\n"))

L !s.empty
   L(=word) s // `=` is needed here because otherwise after `s.remove(word)` `word` becomes invalid
      s.remove(word)
      I word.len < 3
         L.break

      V w = word
      L 0 .< word.len - 1
         w = rotated(w)
         I w C s
            s.remove(w)
         E
            L.break
      L.was_no_break
         print(word, end' ‘’)
         w = word
         L 0 .< word.len - 1
            w = rotated(w)
            print(‘ -> ’w, end' ‘’)
         print()

      L.break
```
Output:
```apt -> pta -> tap
arc -> rca -> car
ate -> tea -> eat
```

## AutoHotkey

```
Teacup_rim_text(wList){
    oWord := [], oRes := [], n := 0
    for i, w in StrSplit(wList, "`n", "`r")
        if StrLen(w) >= 3
            oWord[StrLen(w), w] := true

    for l, obj in oWord
    {
        for w, bool in obj
        {
            loop % l
                if oWord[l, rotate(w)]
                {
                    oWord[l, w] := 0
                    if (A_Index = 1)
                        n++, oRes[n] := w
                    if (A_Index < l)
                        oRes[n] := oRes[n] "," (w := rotate(w))
                }
            if (StrSplit(oRes[n], ",").Count() <> l)
                oRes.RemoveAt(n)
        }
    }
    return oRes
}

rotate(w){
    return SubStr(w, 2) . SubStr(w, 1, 1)
}
```
Examples:
```
FileRead, wList, % A_Desktop "\unixdict.txt"
result := ""
for i, v in Teacup_rim_text(wList)
	result .= v "`n"
MsgBox % result
return
```
Output:
```apt,pta,tap
arc,rca,car
ate,tea,eat```

## AWK

```
# syntax: GAWK -f TEACUP_RIM_TEXT.AWK UNIXDICT.TXT
#
# sorting:
#   PROCINFO["sorted_in"] is used by GAWK
#   SORTTYPE is used by Thompson Automation's TAWK
#
{   for (i=1; i<=NF; i++) {
      arr[tolower($i)] = 0
    }
}
END {
    PROCINFO["sorted_in"] = "@ind_str_asc" ; SORTTYPE = 1
    for (i in arr) {
      leng = length(i)
      if (leng > 2) {
        delete tmp_arr
        words = str = i
        tmp_arr[i] = ""
        for (j=2; j<=leng; j++) {
          str = substr(str,2) substr(str,1,1)
          if (str in arr) {
            words = words " " str
            tmp_arr[str] = ""
          }
        }
        if (length(tmp_arr) == leng) {
          count = 0
          for (j in tmp_arr) {
            (arr[j] == 0) ? arr[j]++ : count++
          }
          if (count == 0) {
            printf("%s\n",words)
            circular++
          }
        }
      }
    }
    printf("%d words, %d circular\n",length(arr),circular)
    exit(0)
}
```
Output:

using UNIXDICT.TXT

```apt pta tap
arc rca car
ate tea eat
25104 words, 3 circular
```

using MIT10000.TXT

```aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
10000 words, 5 circular
```

## BaCon

```
OPTION COLLAPSE TRUE

dict$ = LOAD$(DIRNAME$(ME$) & "/unixdict.txt")

FOR word$ IN dict$ STEP NL$
    IF LEN(word$) = 3 AND AMOUNT(UNIQ$(EXPLODE$(word$, 1))) = 3 THEN domain$ = APPEND$(domain$, 0, word$)
NEXT

FOR w1$ IN domain$
    w2$ = RIGHT$(w1$, 2) & LEFT$(w1$, 1)
    w3$ = RIGHT$(w2$, 2) & LEFT$(w2$, 1)
    IF TALLY(domain$, w2$) AND TALLY(domain$, w3$) AND NOT(TALLY(result$, w1$)) THEN
        result$ = APPEND$(result$, 0, w1$ & " " & w2$ & " " & w3$, NL$)
    ENDIF
NEXT

PRINT result$
PRINT "Total words: ", AMOUNT(dict$, NL$), ", and ", AMOUNT(result$, NL$), " are circular."
```
Output:

Using 'unixdict.txt':

```apt pta tap
arc rca car
ate tea eat
Total words: 25104, and 3 are circular.```

Using 'wordlist.10000':

```aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
Total words: 10000, and 5 are circular.
```

## C

Library: GLib
```
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <glib.h>

int string_compare(gconstpointer p1, gconstpointer p2) {
    const char* const* s1 = p1;
    const char* const* s2 = p2;
    return strcmp(*s1, *s2);
}

GPtrArray* load_dictionary(const char* file, GError** error_ptr) {
    GError* error = NULL;
    GIOChannel* channel = g_io_channel_new_file(file, "r", &error);
    if (channel == NULL) {
        g_propagate_error(error_ptr, error);
        return NULL;
    }
    GPtrArray* dict = g_ptr_array_new_full(1024, g_free);
    GString* line = g_string_sized_new(64);
    gsize term_pos;
    while (g_io_channel_read_line_string(channel, line, &term_pos,
                                         &error) == G_IO_STATUS_NORMAL) {
        char* word = g_strdup(line->str);
        word[term_pos] = '\0';
        g_ptr_array_add(dict, word);
    }
    g_string_free(line, TRUE);
    g_io_channel_unref(channel);
    if (error != NULL) {
        g_propagate_error(error_ptr, error);
        g_ptr_array_free(dict, TRUE);
        return NULL;
    }
    g_ptr_array_sort(dict, string_compare);
    return dict;
}

void rotate(char* str, size_t len) {
    char c = str[0];
    memmove(str, str + 1, len - 1);
    str[len - 1] = c;
}

char* dictionary_search(const GPtrArray* dictionary, const char* word) {
    char** result = bsearch(&word, dictionary->pdata, dictionary->len,
                            sizeof(char*), string_compare);
    return result != NULL ? *result : NULL;
}

void find_teacup_words(GPtrArray* dictionary) {
    GHashTable* found = g_hash_table_new(g_str_hash, g_str_equal);
    GPtrArray* teacup_words = g_ptr_array_new();
    GString* temp = g_string_sized_new(8);
    for (size_t i = 0, n = dictionary->len; i < n; ++i) {
        char* word = g_ptr_array_index(dictionary, i);
        size_t len = strlen(word);
        if (len < 3 || g_hash_table_contains(found, word))
            continue;
        g_ptr_array_set_size(teacup_words, 0);
        g_string_assign(temp, word);
        bool is_teacup_word = true;
        for (size_t j = 0; j < len - 1; ++j) {
            rotate(temp->str, len);
            char* w = dictionary_search(dictionary, temp->str);
            if (w == NULL) {
                is_teacup_word = false;
                break;
            }
            if (strcmp(word, w) != 0 && !g_ptr_array_find(teacup_words, w, NULL))
                g_ptr_array_add(teacup_words, w);
        }
        if (is_teacup_word && teacup_words->len > 0) {
            printf("%s", word);
            g_hash_table_add(found, word);
            for (size_t j = 0; j < teacup_words->len; ++j) {
                char* teacup_word = g_ptr_array_index(teacup_words, j);
                printf(" %s", teacup_word);
                g_hash_table_add(found, teacup_word);
            }
            printf("\n");
        }
    }
    g_string_free(temp, TRUE);
    g_ptr_array_free(teacup_words, TRUE);
    g_hash_table_destroy(found);
}

int main(int argc, char** argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s dictionary\n", argv[0]);
        return EXIT_FAILURE;
    }
    GError* error = NULL;
    GPtrArray* dictionary = load_dictionary(argv[1], &error);
    if (dictionary == NULL) {
        if (error != NULL) {
            fprintf(stderr, "Cannot load dictionary file '%s': %s\n",
                    argv[1], error->message);
            g_error_free(error);
        }
        return EXIT_FAILURE;
    }
    find_teacup_words(dictionary);
    g_ptr_array_free(dictionary, TRUE);
    return EXIT_SUCCESS;
}
```
Output:

With unixdict.txt:

```apt pta tap
arc rca car
ate tea eat
```

With wordlist.10000:

```aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
```

## C++

```
#include <algorithm>
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <set>
#include <stdexcept>
#include <string>
#include <vector>

// filename is expected to contain one lowercase word per line
std::set<std::string> load_dictionary(const std::string& filename) {
    std::ifstream in(filename);
    if (!in)
        throw std::runtime_error("Cannot open file " + filename);
    std::set<std::string> words;
    std::string word;
    while (getline(in, word))
        words.insert(word);
    return words;
}

void find_teacup_words(const std::set<std::string>& words) {
    std::vector<std::string> teacup_words;
    std::set<std::string> found;
    for (auto w = words.begin(); w != words.end(); ++w) {
        std::string word = *w;
        size_t len = word.size();
        if (len < 3 || found.find(word) != found.end())
            continue;
        teacup_words.clear();
        teacup_words.push_back(word);
        for (size_t i = 0; i + 1 < len; ++i) {
            std::rotate(word.begin(), word.begin() + 1, word.end());
            if (word == *w || words.find(word) == words.end())
                break;
            teacup_words.push_back(word);
        }
        if (teacup_words.size() == len) {
            found.insert(teacup_words.begin(), teacup_words.end());
            std::cout << teacup_words[0];
            for (size_t i = 1; i < len; ++i)
                std::cout << ' ' << teacup_words[i];
            std::cout << '\n';
        }
    }
}

int main(int argc, char** argv) {
    if (argc != 2) {
        std::cerr << "usage: " << argv[0] << " dictionary\n";
        return EXIT_FAILURE;
    }
    try {
        find_teacup_words(load_dictionary(argv[1]));
    } catch (const std::exception& ex) {
        std::cerr << ex.what() << '\n';
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```
Output:

With unixdict.txt:

```apt pta tap
arc rca car
ate tea eat
```

With wordlist.10000:

```aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
```

## F#

```
// Teacup rim text. Nigel Galloway: August 7th., 2019
let  N=System.IO.File.ReadAllLines("dict.txt")|>Array.filter(fun n->String.length n=3 && Seq.length(Seq.distinct n)>1)|>Set.ofArray
let fG z=Set.map(fun n->System.String(Array.ofSeq (Seq.permute(fun g->(g+z)%3)n))) N
Set.intersectMany [N;fG 1;fG 2]|>Seq.distinctBy(Seq.sort>>Array.ofSeq>>System.String)|>Seq.iter(printfn "%s")
```
Output:
```aim
arc
asp
ate
ips
```

## Factor

```
USING: combinators.short-circuit fry grouping hash-sets
http.client kernel math prettyprint sequences sequences.extras
sets sorting splitting ;

"https://www.mit.edu/~ecprice/wordlist.10000" http-get nip
"\n" split [ { [ length 3 < ] [ all-equal? ] } 1|| ] reject
[ [ all-rotations ] map ] [ >hash-set ] bi
'[ [ _ in? ] all? ] filter [ natural-sort ] map members .
```
Output:
```{
{ "aim" "ima" "mai" }
{ "arc" "car" "rca" }
{ "asp" "pas" "spa" }
{ "ate" "eat" "tea" }
{ "ips" "psi" "sip" }
}
```

## Go

```
package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "sort"
    "strings"
)

func check(err error) {
    if err != nil {
        log.Fatal(err)
    }
}

func readWords(fileName string) []string {
    file, err := os.Open(fileName)
    check(err)
    defer file.Close()
    var words []string
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        word := strings.ToLower(strings.TrimSpace(scanner.Text()))
        if len(word) >= 3 {
            words = append(words, word)
        }
    }
    check(scanner.Err())
    return words
}

func rotate(runes []rune) {
    first := runes[0]
    copy(runes, runes[1:])
    runes[len(runes)-1] = first
}

func main() {
    dicts := []string{"mit_10000.txt", "unixdict.txt"} // local copies
    for _, dict := range dicts {
        fmt.Printf("Using %s:\n\n", dict)
        words := readWords(dict)
        n := len(words)
        used := make(map[string]bool)
    outer:
        for _, word := range words {
            runes := []rune(word)
            variants := []string{word}
            for i := 0; i < len(runes)-1; i++ {
                rotate(runes)
                word2 := string(runes)
                if word == word2 || used[word2] {
                    continue outer
                }
                ix := sort.SearchStrings(words, word2)
                if ix == n || words[ix] != word2 {
                    continue outer
                }
                variants = append(variants, word2)
            }
            for _, variant := range variants {
                used[variant] = true
            }
            fmt.Println(variants)
        }
        fmt.Println()
    }
}
```
Output:
```Using mit_10000.txt:

[aim ima mai]
[arc rca car]
[asp spa pas]
[ate tea eat]
[ips psi sip]

Using unixdict.txt:

[apt pta tap]
[arc rca car]
[ate tea eat]
```

## Haskell

### Using Data.Set

Circular words of more than 2 characters in a local copy of a word list.

```
import Data.List (groupBy, intercalate, sort, sortBy)
import qualified Data.Set as S
import Data.Ord (comparing)
import Data.Function (on)

main :: IO ()
main =
  readFile "mitWords.txt" >>= (putStrLn . showGroups . circularWords . lines)

circularWords :: [String] -> [String]
circularWords ws =
  let lexicon = S.fromList ws
  in filter (isCircular lexicon) ws

isCircular :: S.Set String -> String -> Bool
isCircular lex w = 2 < length w && all (`S.member` lex) (rotations w)

rotations :: [a] -> [[a]]
rotations = fmap <$> rotated <*> (enumFromTo 0 . pred . length)

rotated :: [a] -> Int -> [a]
rotated [] _ = []
rotated xs n = zipWith const (drop n (cycle xs)) xs

showGroups :: [String] -> String
showGroups xs =
  unlines $
  intercalate " -> " . fmap snd <$>
  filter
    ((1 <) . length)
    (groupBy (on (==) fst) (sortBy (comparing fst) (((,) =<< sort) <$> xs)))
```
Output:
```arc -> car -> rca
ate -> eat -> tea
aim -> ima -> mai
asp -> pas -> spa
ips -> psi -> sip```

### Filtering anagrams

Or taking a different approach, we can avoid the use of Data.Set by obtaining the groups of anagrams (of more than two characters) in the lexicon, and filtering out a circular subset of these:

```
import Data.Function (on)
import Data.List (groupBy, intercalate, sort, sortOn)
import Data.Ord (comparing)

main :: IO ()
main =
  readFile "mitWords.txt"
    >>= ( putStrLn
            . unlines
            . fmap (intercalate " -> ")
            . (circularOnly =<<)
            . anagrams
            . lines
        )

anagrams :: [String] -> [[String]]
anagrams ws =
  let harvest group px
        | px = [fmap snd group]
        | otherwise = []
   in groupBy
        (on (==) fst)
        (sortOn fst (((,) =<< sort) <$> ws))
        >>= (harvest <*> ((> 2) . length))

circularOnly :: [String] -> [[String]]
circularOnly ws
  | (length h - 1) > length rs = []
  | otherwise = [h : rs]
  where
    h = head ws
    rs = filter (isRotation h) (tail ws)

isRotation :: String -> String -> Bool
isRotation xs ys =
  xs
    /= until
      ( (||)
          . (ys ==)
          <*> (xs ==)
      )
      rotated
      (rotated xs)

rotated :: [a] -> [a]
rotated [] = []
rotated (x : xs) = xs <> [x]
```
Output:
```arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip```

## J

Definitions, which are developed following the solution

```
read=: CR -.~ 1!:1@boxopen  NB. dang that line end!
Filter=: (#~`)(`:6)

prep=: (;~ /:~);._2
gba=: <@:([: ,/ (>@}."1))/.~ 0&{"1
ew=: (>:&# {.)S:_1 Filter
le=: (2 < #@{.)S:_1 Filter
ra=: a: -.~ rotations&>
NB. prep was separated for fun, not necessity
teacup=: ra@:le@:ew@:gba

rotations=: 3 :0
 subset=: 0 = #@:-.
 assert. 0 1 -: 'ab'(subset~ , subset)'cabag'
 N=. # {. y
 for_word. y do.
  a=. N ]\ (, (<: N)&{.) word
  if. a subset y do. word return. end.
 end.
 ''
)
```

Solution finds "apt", "arc", and "ate".

```
   NB. D includes the ordered anagram
   D=: prep read'd:\tmp\dict'
   NB. transposed samples of the input to teacup
   |: ({~ (7 ?@$ #)) D
┌──────────┬────────────┬───────┬──────────┬──────┬────────┬────────┐
│ aaegnprty│ aadeiloprtz│ gmpsuy│ eimnnptuu│ aipst│ agggint│ effoprr│
├──────────┼────────────┼───────┼──────────┼──────┼────────┼────────┤
│pageantry │trapezoidal │gypsum │neptunium │tapis │tagging │proffer │
└──────────┴────────────┴───────┴──────────┴──────┴────────┴────────┘
   teacup D
┌───┬───┬───┐
│apt│arc│ate│
└───┴───┴───┘
```

The action of the individual verbs, applied to intermediate pronouns whose descriptive names mark each stage, demonstrates the construction of teacup.

```
   TEST_DICTIONARY=: 'abc ah ate bac bca blort cab eat ha rat tar tea tra '
   TEST=: prep TEST_DICTIONARY

   ] GROUPS_BY_ANAGRAM=: gba TEST
┌───┬──┬───┬─────┬───┐
│abc│ah│ate│blort│rat│
│bac│ha│eat│     │tar│
│bca│  │tea│     │tra│
│cab│  │   │     │   │
└───┴──┴───┴─────┴───┘
   ] ENOUGH_WORDS=: ew GROUPS_BY_ANAGRAM
┌───┬──┬───┬───┐
│abc│ah│ate│rat│
│bac│ha│eat│tar│
│bca│  │tea│tra│
│cab│  │   │   │
└───┴──┴───┴───┘
   ] LONG_ENOUGH=: le ENOUGH_WORDS
┌───┬───┬───┐
│abc│ate│rat│
│bac│eat│tar│
│bca│tea│tra│
│cab│   │   │
└───┴───┴───┘
   ] SOLUTION=: ROTATIONS_ACCEPTABLE=: ra LONG_ENOUGH
┌───┬───┐
│abc│ate│
└───┴───┘
```

## Java

Translation of: C++
```
import java.io.*;
import java.util.*;

public class Teacup {
    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("usage: java Teacup dictionary");
            System.exit(1);
        }
        try {
            findTeacupWords(loadDictionary(args[0]));
        } catch (Exception ex) {
            System.err.println(ex.getMessage());
        }
    }

    // The file is expected to contain one lowercase word per line
    private static Set<String> loadDictionary(String fileName) throws IOException {
        Set<String> words = new TreeSet<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
            String word;
            while ((word = reader.readLine()) != null)
                words.add(word);
            return words;
        }
    }

    private static void findTeacupWords(Set<String> words) {
        List<String> teacupWords = new ArrayList<>();
        Set<String> found = new HashSet<>();
        for (String word : words) {
            int len = word.length();
            if (len < 3 || found.contains(word))
                continue;
            teacupWords.clear();
            teacupWords.add(word);
            char[] chars = word.toCharArray();
            for (int i = 0; i < len - 1; ++i) {
                String rotated = new String(rotate(chars));
                if (rotated.equals(word) || !words.contains(rotated))
                    break;
                teacupWords.add(rotated);
            }
            if (teacupWords.size() == len) {
                found.addAll(teacupWords);
                System.out.print(word);
                for (int i = 1; i < len; ++i)
                    System.out.print(" " + teacupWords.get(i));
                System.out.println();
            }
        }
    }

    private static char[] rotate(char[] ch) {
        char c = ch[0];
        System.arraycopy(ch, 1, ch, 0, ch.length - 1);
        ch[ch.length - 1] = c;
        return ch;
    }
}
```
Output:

With unixdict.txt:

```apt pta tap
arc rca car
ate tea eat
```

With wordlist.10000:

```aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
```

## JavaScript

### Set() objects

Reading a local dictionary with the macOS JS for Automation library:

Works with: JXA
```
(() => {
    'use strict';

    // main :: IO ()
    const main = () =>
        showGroups(
            circularWords(
                // Local copy of:
                // https://www.mit.edu/~ecprice/wordlist.10000
                lines(readFile('~/mitWords.txt'))
            )
        );

    // circularWords :: [String] -> [String]
    const circularWords = ws =>
        ws.filter(isCircular(new Set(ws)), ws);

    // isCircular :: Set String -> String -> Bool
    const isCircular = lexicon => w => {
        const iLast = w.length - 1;
        return 1 < iLast && until(
            ([i, bln, s]) => iLast < i || !bln,
            ([i, bln, s]) => [1 + i, lexicon.has(s), rotated(s)],
            [0, true, rotated(w)]
        )[1];
    };

    // DISPLAY --------------------------------------------

    // showGroups :: [String] -> String
    const showGroups = xs =>
        unlines(map(
            gp => map(snd, gp).join(' -> '),
            groupBy(
                (a, b) => fst(a) === fst(b),
                sortBy(
                    comparing(fst),
                    map(x => Tuple(concat(sort(chars(x))), x),
                        xs
                    )
                )
            ).filter(gp => 1 < gp.length)
        ));

    // MAC OS JS FOR AUTOMATION ---------------------------

    // readFile :: FilePath -> IO String
    const readFile = fp => {
        const
            e = $(),
            uw = ObjC.unwrap,
            s = uw(
                $.NSString.stringWithContentsOfFileEncodingError(
                    $(fp)
                    .stringByStandardizingPath,
                    $.NSUTF8StringEncoding,
                    e
                )
            );
        return undefined !== s ? (
            s
        ) : uw(e.localizedDescription);
    };

    // GENERIC FUNCTIONS ----------------------------------

    // Tuple (,) :: a -> b -> (a, b)
    const Tuple = (a, b) => ({
        type: 'Tuple',
        '0': a,
        '1': b,
        length: 2
    });

    // chars :: String -> [Char]
    const chars = s => s.split('');

    // comparing :: (a -> b) -> (a -> a -> Ordering)
    const comparing = f =>
        (x, y) => {
            const
                a = f(x),
                b = f(y);
            return a < b ? -1 : (a > b ? 1 : 0);
        };

    // concat :: [[a]] -> [a]
    // concat :: [String] -> String
    const concat = xs =>
        0 < xs.length ? (() => {
            const unit = 'string' !== typeof xs[0] ? (
                []
            ) : '';
            return unit.concat.apply(unit, xs);
        })() : [];

    // fst :: (a, b) -> a
    const fst = tpl => tpl[0];

    // groupBy :: (a -> a -> Bool) -> [a] -> [[a]]
    const groupBy = (f, xs) => {
        const tpl = xs.slice(1)
            .reduce((a, x) => {
                const h = a[1].length > 0 ? a[1][0] : undefined;
                return (undefined !== h) && f(h, x) ? (
                    Tuple(a[0], a[1].concat([x]))
                ) : Tuple(a[0].concat([a[1]]), [x]);
            }, Tuple([], 0 < xs.length ? [xs[0]] : []));
        return tpl[0].concat([tpl[1]]);
    };

    // lines :: String -> [String]
    const lines = s => s.split(/[\r\n]/);

    // map :: (a -> b) -> [a] -> [b]
    const map = (f, xs) =>
        (Array.isArray(xs) ? (
            xs
        ) : xs.split('')).map(f);

    // rotated :: String -> String
    const rotated = xs =>
        xs.slice(1) + xs[0];

    // showLog :: a -> IO ()
    const showLog = (...args) =>
        console.log(
            args
            .map(JSON.stringify)
            .join(' -> ')
        );

    // snd :: (a, b) -> b
    const snd = tpl => tpl[1];

    // sort :: Ord a => [a] -> [a]
    const sort = xs => xs.slice()
        .sort((a, b) => a < b ? -1 : (a > b ? 1 : 0));

    // sortBy :: (a -> a -> Ordering) -> [a] -> [a]
    const sortBy = (f, xs) =>
        xs.slice()
        .sort(f);

    // unlines :: [String] -> String
    const unlines = xs => xs.join('\n');

    // until :: (a -> Bool) -> (a -> a) -> a -> a
    const until = (p, f, x) => {
        let v = x;
        while (!p(v)) v = f(v);
        return v;
    };

    // MAIN ---
    return main();
})();
```
Output:
```arc -> car -> rca
ate -> eat -> tea
aim -> ima -> mai
asp -> pas -> spa
ips -> psi -> sip```

### Anagram filtering

Reading a local dictionary with the macOS JS for Automation library:

Works with: JXA
```
(() => {
    'use strict';

    // main :: IO ()
    const main = () =>
        anagrams(lines(readFile('~/mitWords.txt')))
        .flatMap(circularOnly)
        .map(xs => xs.join(' -> '))
        .join('\n')

    // anagrams :: [String] -> [[String]]
    const anagrams = ws =>
        groupBy(
            on(eq, fst),
            sortBy(
                comparing(fst),
                ws.map(w => Tuple(sort(chars(w)).join(''), w))
            )
        ).flatMap(
            gp => 2 < gp.length ? [
                gp.map(snd)
            ] : []
        )

    // circularOnly :: [String] -> [[String]]
    const circularOnly = ws => {
        const h = ws[0];
        return ws.length < h.length ? (
            []
        ) : (() => {
            const rs = rotations(h);
            return rs.every(r => ws.includes(r)) ? (
                [rs]
            ) : [];
        })();
    };

    // rotations :: String -> [String]
    const rotations = s =>
        takeIterate(s.length, rotated, s)

    // rotated :: [a] -> [a]
    const rotated = xs => xs.slice(1).concat(xs[0]);

    // GENERIC FUNCTIONS ----------------------------

    // Tuple (,) :: a -> b -> (a, b)
    const Tuple = (a, b) => ({
        type: 'Tuple',
        '0': a,
        '1': b,
        length: 2
    });

    // chars :: String -> [Char]
    const chars = s => s.split('');

    // comparing :: (a -> b) -> (a -> a -> Ordering)
    const comparing = f =>
        (x, y) => {
            const
                a = f(x),
                b = f(y);
            return a < b ? -1 : (a > b ? 1 : 0);
        };

    // eq (==) :: Eq a => a -> a -> Bool
    const eq = (a, b) => a === b

    // fst :: (a, b) -> a
    const fst = tpl => tpl[0];

    // groupBy :: (a -> a -> Bool) -> [a] -> [[a]]
    const groupBy = (f, xs) => {
        const tpl = xs.slice(1)
            .reduce((a, x) => {
                const h = a[1].length > 0 ? a[1][0] : undefined;
                return (undefined !== h) && f(h, x) ? (
                    Tuple(a[0], a[1].concat([x]))
                ) : Tuple(a[0].concat([a[1]]), [x]);
            }, Tuple([], 0 < xs.length ? [xs[0]] : []));
        return tpl[0].concat([tpl[1]]);
    };

    // lines :: String -> [String]
    const lines = s => s.split(/[\r\n]/);

    // mapAccumL :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
    const mapAccumL = (f, acc, xs) =>
        xs.reduce((a, x, i) => {
            const pair = f(a[0], x, i);
            return Tuple(pair[0], a[1].concat([pair[1]]));
        }, Tuple(acc, []));

    // on :: (b -> b -> c) -> (a -> b) -> a -> a -> c
    const on = (f, g) => (a, b) => f(g(a), g(b));

    // readFile :: FilePath -> IO String
    const readFile = fp => {
        const
            e = $(),
            uw = ObjC.unwrap,
            s = uw(
                $.NSString.stringWithContentsOfFileEncodingError(
                    $(fp)
                    .stringByStandardizingPath,
                    $.NSUTF8StringEncoding,
                    e
                )
            );
        return undefined !== s ? (
            s
        ) : uw(e.localizedDescription);
    };

    // snd :: (a, b) -> b
    const snd = tpl => tpl[1];

    // sort :: Ord a => [a] -> [a]
    const sort = xs => xs.slice()
        .sort((a, b) => a < b ? -1 : (a > b ? 1 : 0));

    // sortBy :: (a -> a -> Ordering) -> [a] -> [a]
    const sortBy = (f, xs) =>
        xs.slice()
        .sort(f);

    // takeIterate :: Int -> (a -> a) -> a -> [a]
    const takeIterate = (n, f, x) =>
        snd(mapAccumL((a, _, i) => {
            const v = 0 !== i ? f(a) : x;
            return [v, v];
        }, x, Array.from({
            length: n
        })));

    // MAIN ---
    return main();
})();
```
Output:
```arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip```

## jq

Works with: jq

Works with gojq, the Go implementation of jq (*)

(*) To run the program below using gojq, change `keys_unsorted` to `keys`; this slows it down a lot.

```
# Output: an array of the words when read around the rim
def read_teacup:
  . as $in
  | [range(0; length) | $in[.:] + $in[:.] ];

# Boolean
def is_teacup_word($dict):
  . as $in
  | all( range(1; length); . as $i | $dict[ $in[$i:] + $in[:$i] ]) ;

# Output: a stream of the eligible teacup words
def teacup_words:
  def same_letters:
     explode
     | .[0] as $first
     | all( .[1:][]; . == $first);

  # Only consider one word in a teacup cycle
  def consider: explode | .[0] == min;

  # Create the dictionary
  reduce (inputs
          | select(length>2 and (same_letters|not))) as $w ( {};
     .[$w]=true )
  | . as $dict
  | keys_unsorted[]
  | select(consider and is_teacup_word($dict)) ;

# The task:
teacup_words
| read_teacup
```
Output:

Invocation example: jq -nRc -f teacup-rim.jq unixdict.txt

```["apt","pta","tap"]
["arc","rca","car"]
["ate","tea","eat"]
```

## Julia

Using the MIT 10000 word list, and excluding words of less than three letters, to reduce output length.

```
using HTTP

rotate(s, n) = String(circshift(Vector{UInt8}(s), n))

isliketea(w, d) = (n = length(w); n > 2 && any(c -> c != w[1], w) &&
    all(i -> haskey(d, rotate(w, i)), 1:n-1))

function getteawords(listuri)
    req = HTTP.request("GET", listuri)
    wdict = Dict{String, Int}((lowercase(string(x)), 1) for x in split(String(req.body), r"\s+"))
    sort(unique([sort([rotate(word, i) for i in 1:length(word)])
         for word in collect(keys(wdict)) if isliketea(word, wdict)]))
end

foreach(println, getteawords("https://www.mit.edu/~ecprice/wordlist.10000"))
```
Output:
```["aim", "ima", "mai"]
["arc", "car", "rca"]
["asp", "pas", "spa"]
["ate", "eat", "tea"]
["ips", "psi", "sip"]
```

## Lychen

Lychen is V8 JavaScript wrapped in C#, exposing C# into JavaScript.

Structured like the Julia example, but reading http://wiki.puzzlers.org/pub/wordlists/unixdict.txt, as the output below shows.

```
const wc = new CS.System.Net.WebClient();
const lines = wc.DownloadString("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt");
const words = lines.split(/\n/g);
const collection = {};
words.filter(word => word.length > 2).forEach(word => {
  let allok = true;
  let newword = word;
  for (let i = 0; i < word.length - 1; i++) {
    newword = newword.substr(1) + newword.substr(0, 1);
    if (!words.includes(newword)) {
      allok = false;
      break;
    }
  }
  if (allok) {
    const key = word.split("").sort().join("");
    if (!collection[key]) {
      collection[key] = [word];
    } else {
      if (!collection[key].includes(word)) {
        collection[key].push(word);
      }
    }
  }
});
Object.keys(collection).filter(key => collection[key].length > 1).forEach(key => console.log("%s", collection[key].join(", ")));
```

Output:
```apt, pta, tap
arc, car, rca
ate, eat, tea
```

## Mathematica/Wolfram Language

```
ClearAll[Teacuppable]
TeacuppableHelper[set_List] := Module[{f, s},
  f = First[set];
  s = StringRotateLeft[f, #] & /@ Range[Length[set]];
  Sort[s] == Sort[set]
  ]
Teacuppable[set_List] := Module[{ss, l},
  l = StringLength[First[set]];
  ss = Subsets[set, {l}];
  Select[ss, TeacuppableHelper]
  ]
s = Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt", "String"];
s //= StringSplit[#, "\n"] &;
s //= Select[StringLength /* GreaterThan[2]];
s //= Map[ToLowerCase];
s //= Map[{#, Sort[Characters[#]]} &];
s //= GatherBy[#, Last] &;
s //= Select[Length /* GreaterEqualThan[3]];
s = s[[All, All, 1]];
s //= Select[StringLength[First[#]] <= Length[#] &];
Flatten[Teacuppable /@ s, 1]
```
Output:
`{{"apt", "pta", "tap"}, {"arc", "car", "rca"}, {"ate", "eat", "tea"}}`

## Nim

```
import sequtils, sets, sugar

let words = collect(initHashSet, for word in "unixdict.txt".lines: {word})

proc rotate(s: var string) =
  let first = s[0]
  for i in 1..s.high: s[i - 1] = s[i]
  s[^1] = first

var result: seq[string]
for word in "unixdict.txt".lines:
  if word.len >= 3:
    block checkWord:
      var w = word
      for _ in 1..w.len:
        w.rotate()
        if w notin words or w in result:
          # Not present in dictionary or already encountered.
          break checkWord
      if word.anyIt(it != word[0]):
        # More than one letter.
        result.add word

for word in result:
  var w = word
  stdout.write w
  for _ in 2..w.len:
    w.rotate()
    stdout.write " → ", w
  echo()
```
Output:
```
apt → pta → tap
arc → rca → car
ate → tea → eat
```

## Perl

Translation of: Raku
```
use strict;
use warnings;
use feature 'say';
use List::Util qw(uniqstr any);

my(%words, @teacups, %seen);

open my $fh, '<', 'ref/wordlist.10000';
while (<$fh>) {
    chomp(my $w = uc $_);
    next if length $w < 3;
    push @{$words{join '', sort split //, $w}}, $w;
}

for my $these (values %words) {
    next if @$these < 3;
    MAYBE: for (@$these) {
        my $maybe = $_;
        next if $seen{$_};
        my @print;
        for my $i (0 .. length $maybe) {
            if (any { $maybe eq $_ } @$these) {
                push @print, $maybe;
                $maybe = substr($maybe,1) . substr($maybe,0,1)
            } else {
                @print = () and next MAYBE
            }
        }
        if (@print) {
            push @teacups, [@print];
            $seen{$_}++ for @print;
        }
    }
}

say join ', ', uniqstr @$_ for sort @teacups;
```
Output:
```
ARC, RCA, CAR
ATE, TEA, EAT
AIM, IMA, MAI
ASP, SPA, PAS
IPS, PSI, SIP
```

## Phix

Filters anagram lists

```
procedure filter_set(sequence anagrams)
-- anagrams is a (small) set of words that are all anagrams of each other
--  for example: {"angel","angle","galen","glean","lange"}
-- print any set(s) for which every rotation is also present (marking as
-- you go to prevent the same set appearing with each word being first)
    sequence used = repeat(false,length(anagrams))
    for i=1 to length(anagrams) do
        if not used[i] then
            used[i] = true
            string word = anagrams[i]
            sequence res = {word}
            for r=2 to length(word) do
                word = word[2..$]&word[1]
                integer k = find(word,anagrams)
                if k=0 then res = {} exit end if
                if not find(word,res) then
                    res = append(res,word)
                end if
                used[k] = true
            end for
            if length(res) then ?res end if
        end if
    end for
end procedure

procedure teacup(string filename, integer minlen=3, bool allow_mono=false)
    sequence letters,       -- a sorted word, eg "ate" -> "aet".
             words = {},    -- in eg {{"aet","ate"},...} form
             anagrams = {}, -- a set with same letters
             last = ""      -- (for building such sets)
    object word

    integer fn = open(filename,"r")
    while 1 do
        word = lower(trim(gets(fn)))
        if atom(word) then exit end if
        if length(word)>=minlen then
            letters = sort(word)
            words = append(words, {letters, word})
        end if
    end while
    close(fn)
    printf(1,"using %s, %d words read\n",{filename,length(words)})
    if length(words)!=0 then
        words = sort(words) -- group by anagram
        for i=1 to length(words) do
            {letters,word} = words[i]
            if letters=last then
                anagrams = append(anagrams,word)
            else
                if allow_mono or length(anagrams)>=length(last) then
                    filter_set(anagrams)
                end if
                last = letters
                anagrams = {word}
            end if
        end for
        if allow_mono or length(anagrams)>=length(last) then
            filter_set(anagrams)
        end if
    end if
end procedure

teacup(join_path({"demo","unixdict.txt"}))
-- These match output from other entries:
--teacup(join_path({"demo","unixdict.txt"}),allow_mono:=true)
--teacup(join_path({"demo","rosetta","mit.wordlist.10000.txt"}))
--teacup(join_path({"demo","rosetta","words.txt"}),4,true)
-- Note that allow_mono is needed to display eg {"agag","gaga"}
```
Output:
```
using demo\unixdict.txt, 24948 words read
{"arc","rca","car"}
{"ate","tea","eat"}
{"apt","pta","tap"}
```

## PicoLisp

```
(de rotw (W)
   (let W (chop W)
      (unless (or (apply = W) (not (cddr W)))
         (make
            (do (length W)
               (link (pack (copy W)))
               (rot W) ) ) ) ) )
(off D)
(put 'D 'v (cons))
(mapc
   '((W)
      (idx 'D (cons (hash W) W) T) )
   (setq Words
      (make (in "wordlist.10000" (while (line T) (link @)))) ) )
(mapc
   println
   (extract
      '((W)
         (let? Lst (rotw W)
            (when
               (and
                  (fully
                     '((L) (idx 'D (cons (hash L) L)))
                     Lst )
                  (not
                     (member (car Lst) (car (get 'D 'v))) ) )
               (mapc
                  '((L) (push (get 'D 'v) L))
                  Lst )
               Lst ) ) )
      Words ) )
```
Output:
```
("aim" "mai" "ima")
("arc" "car" "rca")
("asp" "pas" "spa")
("ate" "eat" "tea")
("ips" "sip" "psi")
```

## PureBasic

```
DataSection
  dname:
  Data.s "./Data/unixdict.txt"
  Data.s "./Data/wordlist.10000.txt"
  Data.s ""
EndDataSection

EnableExplicit
Dim c.s{1}(2)
Define.s txt, bset, res, dn
Define.i i, q, cw
Restore dname : Read.s dn
While OpenConsole() And ReadFile(0,dn)
  While Not Eof(0)
    cw+1
    txt=ReadString(0)
    If Len(txt)=3 : bset+txt+";" : EndIf
  Wend
  CloseFile(0)
  For i=1 To CountString(bset,";")
    PokeS(c(),StringField(bset,i,";"))
    If FindString(res,c(0)+c(1)+c(2)) : Continue : EndIf
    If c(0)=c(1) Or c(1)=c(2) Or c(0)=c(2) : Continue : EndIf
    If FindString(bset,c(1)+c(2)+c(0)) And FindString(bset,c(2)+c(0)+c(1))
      res+c(0)+c(1)+c(2)+~"\t"+c(1)+c(2)+c(0)+~"\t"+c(2)+c(0)+c(1)+~"\n"
    EndIf
  Next
  PrintN(res+Str(cw)+" words, "+Str(CountString(res,~"\n"))+" circular") : Input()
  bset="" : res="" : cw=0
  Read.s dn
Wend
```
Output:
```
apt	pta	tap
arc	rca	car
ate	tea	eat
25104 words, 3 circular

aim	ima	mai
arc	rca	car
asp	spa	pas
ate	tea	eat
ips	psi	sip
10000 words, 5 circular
```

## Python

### Functional

Composing generic functions, and considering only anagram groups.

```
'''Teacup rim text'''

from itertools import chain, groupby
from os.path import expanduser
from functools import reduce


# main :: IO ()
def main():
    '''Circular anagram groups, of more than one word,
       and containing words of length > 2, found in:
       https://www.mit.edu/~ecprice/wordlist.10000
    '''
    print('\n'.join(
        concatMap(circularGroup)(
            anagrams(3)(
                # Reading from a local copy.
                lines(readFile('~/mitWords.txt'))
            )
        )
    ))


# anagrams :: Int -> [String] -> [[String]]
def anagrams(n):
    '''Groups of anagrams, of minimum group size n,
       found in the given word list.
    '''
    def go(ws):
        def f(xs):
            return [
                [snd(x) for x in xs]
            ] if n <= len(xs) >= len(xs[0][0]) else []
        return concatMap(f)(groupBy(fst)(sorted(
            [(''.join(sorted(w)), w) for w in ws],
            key=fst
        )))
    return go


# circularGroup :: [String] -> [String]
def circularGroup(ws):
    '''Either an empty list, or a list containing
       a string showing any circular subset found in ws.
    '''
    lex = set(ws)
    iLast = len(ws) - 1
    # If the set contains one word that is circular,
    # then it must contain all of them.
    (i, blnCircular) = until(
        lambda tpl: tpl[1] or (tpl[0] > iLast)
    )(
        lambda tpl: (1 + tpl[0], isCircular(lex)(ws[tpl[0]]))
    )(
        (0, False)
    )
    return [' -> '.join(allRotations(ws[i]))] if blnCircular else []


# isCircular :: Set String -> String -> Bool
def isCircular(lexicon):
    '''True if all of a word's rotations
       are found in the given lexicon.
    '''
    def go(w):
        def f(tpl):
            (i, _, x) = tpl
            return (1 + i, x in lexicon, rotated(x))

        iLast = len(w) - 1
        return until(
            lambda tpl: iLast < tpl[0] or (not tpl[1])
        )(f)(
            (0, True, rotated(w))
        )[1]
    return go


# allRotations :: String -> [String]
def allRotations(w):
    '''All rotations of the string w.'''
    return takeIterate(len(w) - 1)(
        rotated
    )(w)


# GENERIC -------------------------------------------------

# concatMap :: (a -> [b]) -> [a] -> [b]
def concatMap(f):
    '''A concatenated list over which a function has been mapped.
       The list monad can be derived by using a function f which
       wraps its output in a list,
       (using an empty list to represent computational failure).
    '''
    def go(xs):
        return chain.from_iterable(map(f, xs))
    return go


# fst :: (a, b) -> a
def fst(tpl):
    '''First member of a pair.'''
    return tpl[0]


# groupBy :: (a -> b) -> [a] -> [[a]]
def groupBy(f):
    '''The elements of xs grouped,
       preserving order, by equality
       in terms of the key function f.
    '''
    def go(xs):
        return [
            list(x[1]) for x in groupby(xs, key=f)
        ]
    return go


# lines :: String -> [String]
def lines(s):
    '''A list of strings,
       (containing no newline characters)
       derived from a single new-line delimited string.
    '''
    return s.splitlines()


# mapAccumL :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
def mapAccumL(f):
    '''A tuple of an accumulation and a list derived by a
       combined map and fold,
       with accumulation from left to right.
    '''
    def go(a, x):
        tpl = f(a[0], x)
        return (tpl[0], a[1] + [tpl[1]])
    return lambda acc: lambda xs: (
        reduce(go, xs, (acc, []))
    )


# readFile :: FilePath -> IO String
def readFile(fp):
    '''The contents of any file at the path
       derived by expanding any ~ in fp.
    '''
    with open(expanduser(fp), 'r', encoding='utf-8') as f:
        return f.read()


# rotated :: String -> String
def rotated(s):
    '''A string rotated 1 character to the right.'''
    return s[1:] + s[0]


# snd :: (a, b) -> b
def snd(tpl):
    '''Second member of a pair.'''
    return tpl[1]


# takeIterate :: Int -> (a -> a) -> a -> [a]
def takeIterate(n):
    '''Each value of n iterations of f
       over a start value of x.
    '''
    def go(f):
        def g(x):
            def h(a, i):
                v = f(a) if i else x
                return (v, v)
            return mapAccumL(h)(x)(
                range(0, 1 + n)
            )[1]
        return g
    return go


# until :: (a -> Bool) -> (a -> a) -> a -> a
def until(p):
    '''The result of repeatedly applying f until p holds.
       The initial seed value is x.
    '''
    def go(f):
        def g(x):
            v = x
            while not p(v):
                v = f(v)
            return v
        return g
    return go


# MAIN ---
if __name__ == '__main__':
    main()
```
Output:
```
arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip
```

## Raku

(formerly Perl 6)

Works with: Rakudo version 2019.07.1

There doesn't seem to be any restriction that the word needs to consist only of lowercase letters, so words of any case are included. Since the example code specifically shows the example words (TEA, EAT, ATE) in uppercase, I elected to uppercase the found words.

As the specs keep changing, this version will accept ANY text file as its dictionary and accepts parameters to configure the minimum number of characters in a word to consider and whether to allow mono-character words.

Defaults to unixdict.txt, minimum 3 characters and mono-character 'words' disallowed. Feed a file name to use a different word list, an integer to --min-chars and/or a truthy value to --mono to allow mono-chars.

```
my %*SUB-MAIN-OPTS = :named-anywhere;

unit sub MAIN ( $dict = 'unixdict.txt', :$min-chars = 3, :$mono = False );

my %words;
$dict.IO.slurp.words.map: { .chars < $min-chars ?? (next) !! %words{.uc.comb.sort.join}.push: .uc };

my @teacups;
my %seen;

for %words.values -> @these {
    next if !$mono && @these < 2;
    MAYBE: for @these {
        my $maybe = $_;
        next if %seen{$_};
        my @print;
        for ^$maybe.chars {
            if $maybe ∈ @these {
                @print.push: $maybe;
                $maybe = $maybe.comb.list.rotate.join;
            } else {
                @print = ();
                next MAYBE
            }
        }
        if @print.elems {
            @teacups.push: @print;
            %seen{$_}++ for @print;
        }
    }
}

say .unique.join(", ") for sort @teacups;
```
Defaults:

Command line: raku teacup.p6

```
APT, PTA, TAP
ARC, RCA, CAR
ATE, TEA, EAT
```
Allow mono-chars:

Command line: raku teacup.p6 --mono=1

```
AAA
APT, PTA, TAP
ARC, RCA, CAR
ATE, TEA, EAT
III
```
Using a larger dictionary:

words.txt file from https://github.com/dwyl/english-words

Command line: raku teacup.p6 words.txt --min-chars=4 --mono=Allow

```
AAAA
AAAAAA
AGAG, GAGA
ALIT, LITA, ITAL, TALI
AMAN, MANA, ANAM, NAMA
AMAR, MARA, ARAM, RAMA
AMEL, MELA, ELAM, LAME
AMEN, MENA, ENAM, NAME
AMOR, MORA, ORAM, RAMO
ANAN, NANA
ANIL, NILA, ILAN, LANI
ARAR, RARA
ARAS, RASA, ASAR, SARA
ARIS, RISA, ISAR, SARI
ASEL, SELA, ELAS, LASE
ASER, SERA, ERAS, RASE
DENI, ENID, NIDE, IDEN
DOLI, OLID, LIDO, IDOL
EGOR, GORE, OREG, REGO
ENOL, NOLE, OLEN, LENO
ESOP, SOPE, OPES, PESO
ISIS, SISI
MMMM
MORO, OROM, ROMO, OMOR
OOOO
```

## REXX

All words that contained non─letter (Latin) characters   (periods, decimal digits, minus signs, underbars, or embedded blanks)
weren't considered as candidates for circular words.

Duplicated words (such as   sop   and   SOP)   are ignored   (just the 2nd and subsequent duplicated words are deleted).

All words in the dictionary are treated as caseless.

The dictionary wasn't assumed to be sorted in any way.

```
/*REXX pgm finds circular words (length>2),  using a dictionary,  suppress permutations.*/
parse arg iFID L .                               /*obtain optional arguments from the CL*/
if iFID==''|iFID==","  then iFID= 'wordlist.10k' /*Not specified?  Then use the default.*/
if    L==''|   L==","  then    L= 3              /* "      "         "   "   "     "    */
#= 0                                             /*number of words in dictionary, Len>L.*/
@.=                                              /*stemmed array of non─duplicated words*/
       do r=0  while lines(iFID) \== 0           /*read all lines (words) in dictionary.*/
       parse upper value  linein(iFID)  with z . /*obtain a word from the dictionary.   */
       if length(z)<L | @.z\==''  then iterate   /*length must be  L  or more,  no dups.*/
       if \datatype(z, 'U')       then iterate   /*Word contains non-letters?  Then skip*/
       @.z = z                                   /*assign a word from the dictionary.   */
       #= # + 1;     $.#= z                      /*bump word count; append word to list.*/
       end   /*r*/                               /* [↑]  dictionary need not be sorted. */
cw= 0                                            /*the number of circular words (so far)*/
say "There're "    r    ' entries in the dictionary (of all types):  '      iFID
say "There're "    #    ' words in the dictionary of at least length '      L
say
       do j=1  for #;      x= $.j;      y= x     /*obtain the  Jth  word in the list.   */
       if x==''  then iterate                    /*if a null, don't show variants.      */
       yy= y                                     /*the start of a list of the variants. */
                     do k=1  for length(x)-1     /*"circulate" the letters in the word. */
                     y= substr(y, 2)left(y, 1)   /*add the left letter to the right end.*/
                     if @.y==''  then iterate j  /*if not a word,  then skip this word. */
                     yy= yy','   y               /*append to the list of the variants.  */
                     @.y=                        /*nullify word to suppress permutations*/
                     end   /*k*/
       cw= cw + 1                                /*bump counter of circular words found.*/
       say 'circular word: '     yy              /*display a circular word and variants.*/
       end   /*j*/
say
say cw     ' circular words were found.'         /*stick a fork in it,  we're all done. */
```
output   when using the default inputs:
```
There're  10000  entries in the dictionary (of all types):   wordlist.10k
There're  9578  words in the dictionary of at least length  3

circular word:  AIM, IMA, MAI
circular word:  ARC, RCA, CAR
circular word:  ASP, SPA, PAS
circular word:  ATE, TEA, EAT
circular word:  IPS, PSI, SIP

5  circular words were found.
```
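The filtering rules the REXX entry describes (caseless comparison, letters-only words, a minimum length, dropping duplicates, and suppressing permutations of an already-printed set) can be cross-checked with a short sketch. This is an illustration only, not part of any entry; `circular_sets` and its inline word list are hypothetical names invented here.

```python
def circular_sets(words, min_len=3):
    """Return circular-word sets via first-to-last rotation.

    Applies the rules described above: caseless, letters only,
    minimum length, duplicates and permutations suppressed.
    """
    seen, lex = set(), []
    for w in words:
        w = w.upper()                      # treat words as caseless
        if len(w) >= min_len and w.isalpha() and w not in seen:
            seen.add(w)
            lex.append(w)
    lexset, done, result = set(lex), set(), []
    for w in lex:
        if w in done or len(set(w)) == 1:  # skip seen sets and e.g. III
            continue
        rots = [w[i:] + w[:i] for i in range(len(w))]
        if all(r in lexset for r in rots): # every rotation must be a word
            result.append(list(dict.fromkeys(rots)))
            done.update(rots)              # suppress permutations later
    return result

print(circular_sets(["tea", "eat", "ate", "eta", "iii", "ah", "ha"]))
# → [['TEA', 'EAT', 'ATE']]
```

Note that ETA is rejected, as the task requires: its first-to-last rotations (TAE, AET) are not words.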

## Rust

```
use std::collections::BTreeSet;
use std::collections::HashSet;
use std::fs::File;
use std::io::{self, BufRead};
use std::iter::FromIterator;

fn load_dictionary(filename: &str) -> std::io::Result<BTreeSet<String>> {
    let file = File::open(filename)?;
    let mut dict = BTreeSet::new();
    for line in io::BufReader::new(file).lines() {
        let word = line?;
        dict.insert(word);
    }
    Ok(dict)
}

fn find_teacup_words(dict: &BTreeSet<String>) {
    let mut teacup_words: Vec<&String> = Vec::new();
    let mut found: HashSet<&String> = HashSet::new();
    for word in dict {
        let len = word.len();
        if len < 3 || found.contains(word) {
            continue;
        }
        teacup_words.clear();
        let mut is_teacup_word = true;
        let mut chars: Vec<char> = word.chars().collect();
        for _ in 1..len {
            chars.rotate_left(1);
            if let Some(w) = dict.get(&String::from_iter(&chars)) {
                if !w.eq(word) && !teacup_words.contains(&w) {
                    teacup_words.push(w);
                }
            } else {
                is_teacup_word = false;
                break;
            }
        }
        if !is_teacup_word || teacup_words.is_empty() {
            continue;
        }
        print!("{}", word);
        found.insert(word);
        for w in &teacup_words {
            found.insert(w);
            print!(" {}", w);
        }
        println!();
    }
}

fn main() {
    let args: Vec<String> = std::env::args().collect();
    if args.len() != 2 {
        eprintln!("Usage: teacup dictionary");
        std::process::exit(1);
    }
    let dict = load_dictionary(&args[1]);
    match dict {
        Ok(dict) => find_teacup_words(&dict),
        Err(error) => eprintln!("Cannot open file {}: {}", &args[1], error),
    }
}
```
Output:

With unixdict.txt:

```
apt pta tap
arc rca car
ate tea eat
```

With wordlist.10000:

```
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
```

## Swift

```
import Foundation

func loadDictionary(_ path: String) throws -> Set<String> {
    let contents = try String(contentsOfFile: path, encoding: String.Encoding.ascii)
    return Set<String>(contents.components(separatedBy: "\n").filter{!$0.isEmpty})
}

func rotate<T>(_ array: inout [T]) {
    guard array.count > 1 else {
        return
    }
    let first = array[0]
    array.replaceSubrange(0..<array.count-1, with: array[1...])
    array[array.count - 1] = first
}

func findTeacupWords(_ dictionary: Set<String>) {
    var teacupWords: [String] = []
    var found = Set<String>()
    for word in dictionary {
        if word.count < 3 || found.contains(word) {
            continue
        }
        teacupWords.removeAll()
        var isTeacupWord = true
        var chars = Array(word)
        for _ in 1..<word.count {
            rotate(&chars)
            let w = String(chars)
            if (!dictionary.contains(w)) {
                isTeacupWord = false
                break
            }
            if w != word && !teacupWords.contains(w) {
                teacupWords.append(w)
            }
        }
        if !isTeacupWord || teacupWords.isEmpty {
            continue
        }
        print(word, terminator: "")
        found.insert(word)
        for w in teacupWords {
            found.insert(w)
            print(" \(w)", terminator: "")
        }
        print()
    }
}

do {
    let dictionary = try loadDictionary("unixdict.txt")
    findTeacupWords(dictionary)
} catch {
    print(error)
}
```
Output:
```
car arc rca
eat ate tea
pta tap apt
```

## Wren

Translation of: Go
Library: Wren-str
Library: Wren-sort
```
import "io" for File
import "/str" for Str
import "/sort" for Find

var readWords = Fn.new { |fileName|
    var dict = File.read(fileName).split("\n")
    return dict.where { |w| w.count >= 3 }.toList
}

var dicts = ["mit10000.txt", "unixdict.txt"]
for (dict in dicts) {
    System.print("Using %(dict):\n")
    var words = readWords.call(dict)
    var n = words.count
    var used = {}
    for (word in words) {
        var outer = false
        var variants = [word]
        var word2 = word
        for (i in 0...word.count-1) {
            word2 = Str.lshift(word2)
            if (word == word2 || used[word2]) {
                outer = true
                break
            }
            var ix = Find.first(words, word2)
            if (ix == n || words[ix] != word2) {
                outer = true
                break
            }
            variants.add(word2)
        }
        if (!outer) {
            for (variant in variants) used[variant] = true
            System.print(variants)
        }
    }
    System.print()
}
```
Output:
```
Using mit10000.txt:

[aim, ima, mai]
[arc, rca, car]
[asp, spa, pas]
[ate, tea, eat]
[ips, psi, sip]

Using unixdict.txt:

[apt, pta, tap]
[arc, rca, car]
[ate, tea, eat]
```

## zkl

```
// Limited to ASCII
// This is limited to the max items a Dictionary can hold
fcn teacut(wordFile){
   words:=File(wordFile).pump(Dictionary().add.fp1(True),"strip");
   seen :=Dictionary();
   foreach word in (words.keys){
      rots,w,sz := List(), word, word.len();
      if(sz>2 and word.unique().len()>2 and not seen.holds(word)){
         do(sz-1){
            w=String(w[-1],w[0,-1]);            // rotate one character
            if(not words.holds(w)) continue(2); // not a word, skip these
            rots.append(w);                     // I'd like to see all the rotations
         }
         println(rots.append(word).sort().concat(" "));
         rots.pump(seen.add.fp1(True));         // we've seen these rotations
      }
   }
}
```
```
println("\nunixdict:");           teacut("unixdict.txt");
println("\nmit_wordlist_10000:"); teacut("mit_wordlist_10000.txt");
```
Output:
```
unixdict:
apt pta tap
ate eat tea
arc car rca

mit_wordlist_10000:
asp pas spa
ips psi sip
ate eat tea
aim ima mai
arc car rca
```