Teacup rim text

You are encouraged to solve this task according to the task description, using any language you may know.
On a set of coasters we have, there's a picture of a teacup. On the rim of the teacup the word TEA appears a number of times separated by bullet characters (•).
It occurred to me that if the bullet were removed and the words run together, you could start at any letter and still end up with a meaningful three-letter word.
So start at the T and read TEA. Start at the E and read EAT, or start at the A and read ATE.
That got me thinking that maybe there are other words that could be used rather than TEA. And that's just English. What about Italian or Greek or ... um ... Telugu?
For English, we will use the unixdict located at: unixdict.txt.
(This will maintain continuity with other Rosetta Code tasks that also use it.)
- Task
Search for a set of words that could be printed around the edge of a teacup. The words in each set are to be of the same length, that length being greater than two (thus precluding AH and HA, for example.)
Having listed a set, for example [ate tea eat], refrain from displaying permutations of that set, e.g.: [eat tea ate] etc.
The words should also be made of more than one distinct letter (thus precluding III and OOO, etc.)
The relationship between these words is (using ATE as an example) that the first letter of the first becomes the last letter of the second. The first letter of the second becomes the last letter of the third. So ATE becomes TEA and TEA becomes EAT.
All of the possible permutations, using this particular permutation technique, must be words in the list.
The set you generate for ATE will never include the word ETA, as that word cannot be reached via the first-to-last movement method.
Display one line for each set of teacup rim words.
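The rotation check described above is small enough to sketch directly. The following Python sketch is for illustration only; `teacup_sets` and its in-memory word list are hypothetical stand-ins for reading unixdict.txt. It rotates each candidate first-letter-to-last and keeps a set only when every rotation is in the word list:

```python
def rotations(word):
    # All first-to-last rotations, starting with the word itself.
    return [word[i:] + word[:i] for i in range(len(word))]

def teacup_sets(words):
    # Keep words of length > 2 with more than one distinct letter
    # whose every rotation is also a word; report each set once.
    lexicon = set(words)
    seen = set()
    result = []
    for word in sorted(lexicon):
        if len(word) < 3 or len(set(word)) < 2 or word in seen:
            continue
        rots = rotations(word)
        if all(r in lexicon for r in rots):
            seen.update(rots)   # suppress permutations of this set
            result.append(rots)
    return result

# Tiny in-memory stand-in for unixdict.txt:
print(teacup_sets(["ate", "eat", "tea", "eta", "ooo"]))
# -> [['ate', 'tea', 'eat']]
```

Note that ETA is rejected even though it is an anagram of ATE: its rotations (ETA, TAE, AET) are not all words.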
11l
F rotated(String s)
R s[1..]‘’s[0]
V s = Set(File(‘unixdict.txt’).read().rtrim("\n").split("\n"))
L !s.empty
L(=word) s // `=` is needed here because otherwise after `s.remove(word)` `word` becomes invalid
s.remove(word)
I word.len < 3
L.break
V w = word
L 0 .< word.len - 1
w = rotated(w)
I w C s
s.remove(w)
E
L.break
L.was_no_break
print(word, end' ‘’)
w = word
L 0 .< word.len - 1
w = rotated(w)
print(‘ -> ’w, end' ‘’)
print()
L.break
- Output:
apt -> pta -> tap
arc -> rca -> car
ate -> tea -> eat
Arturo
wordset: map read.lines relative "unixdict.txt" => strip
rotateable?: function [w][
loop 1..dec size w 'i [
rotated: rotate w i
if or? [rotated = w][not? contains? wordset rotated] ->
return false
]
return true
]
results: new []
loop select wordset 'word [3 =< size word] 'word [
if rotateable? word ->
'results ++ @[ sort map 1..size word 'i [ rotate word i ]]
]
loop sort unique results 'result [
root: first result
print join.with: " -> " map 1..size root 'i [ rotate.left root i]
]
- Output:
tea -> eat -> ate
rca -> car -> arc
pta -> tap -> apt
AutoHotkey
Teacup_rim_text(wList){
oWord := [], oRes := [], n := 0
for i, w in StrSplit(wList, "`n", "`r")
if StrLen(w) >= 3
oWord[StrLen(w), w] := true
for l, obj in oWord
{
for w, bool in obj
{
loop % l
if oWord[l, rotate(w)]
{
oWord[l, w] := 0
if (A_Index = 1)
n++, oRes[n] := w
if (A_Index < l)
oRes[n] := oRes[n] "," (w := rotate(w))
}
if (StrSplit(oRes[n], ",").Count() <> l)
oRes.RemoveAt(n)
}
}
return oRes
}
rotate(w){
return SubStr(w, 2) . SubStr(w, 1, 1)
}
Examples:
FileRead, wList, % A_Desktop "\unixdict.txt"
result := ""
for i, v in Teacup_rim_text(wList)
result .= v "`n"
MsgBox % result
return
- Output:
apt,pta,tap
arc,rca,car
ate,tea,eat
AWK
# syntax: GAWK -f TEACUP_RIM_TEXT.AWK UNIXDICT.TXT
#
# sorting:
# PROCINFO["sorted_in"] is used by GAWK
# SORTTYPE is used by Thompson Automation's TAWK
#
{ for (i=1; i<=NF; i++) {
arr[tolower($i)] = 0
}
}
END {
PROCINFO["sorted_in"] = "@ind_str_asc" ; SORTTYPE = 1
for (i in arr) {
leng = length(i)
if (leng > 2) {
delete tmp_arr
words = str = i
tmp_arr[i] = ""
for (j=2; j<=leng; j++) {
str = substr(str,2) substr(str,1,1)
if (str in arr) {
words = words " " str
tmp_arr[str] = ""
}
}
if (length(tmp_arr) == leng) {
count = 0
for (j in tmp_arr) {
(arr[j] == 0) ? arr[j]++ : count++
}
if (count == 0) {
printf("%s\n",words)
circular++
}
}
}
}
printf("%d words, %d circular\n",length(arr),circular)
exit(0)
}
- Output:
using UNIXDICT.TXT
apt pta tap
arc rca car
ate tea eat
25104 words, 3 circular
using MIT10000.TXT
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
10000 words, 5 circular
BaCon
OPTION COLLAPSE TRUE
dict$ = LOAD$(DIRNAME$(ME$) & "/unixdict.txt")
FOR word$ IN dict$ STEP NL$
IF LEN(word$) = 3 AND AMOUNT(UNIQ$(EXPLODE$(word$, 1))) = 3 THEN domain$ = APPEND$(domain$, 0, word$)
NEXT
FOR w1$ IN domain$
w2$ = RIGHT$(w1$, 2) & LEFT$(w1$, 1)
w3$ = RIGHT$(w2$, 2) & LEFT$(w2$, 1)
IF TALLY(domain$, w2$) AND TALLY(domain$, w3$) AND NOT(TALLY(result$, w1$)) THEN
result$ = APPEND$(result$, 0, w1$ & " " & w2$ & " " & w3$, NL$)
ENDIF
NEXT
PRINT result$
PRINT "Total words: ", AMOUNT(dict$, NL$), ", and ", AMOUNT(result$, NL$), " are circular."
- Output:
Using 'unixdict.txt':
apt pta tap
arc rca car
ate tea eat
Total words: 25104, and 3 are circular.
Using 'wordlist.10000':
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
Total words: 10000, and 5 are circular.
C
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <glib.h>
int string_compare(gconstpointer p1, gconstpointer p2) {
const char* const* s1 = p1;
const char* const* s2 = p2;
return strcmp(*s1, *s2);
}
GPtrArray* load_dictionary(const char* file, GError** error_ptr) {
GError* error = NULL;
GIOChannel* channel = g_io_channel_new_file(file, "r", &error);
if (channel == NULL) {
g_propagate_error(error_ptr, error);
return NULL;
}
GPtrArray* dict = g_ptr_array_new_full(1024, g_free);
GString* line = g_string_sized_new(64);
gsize term_pos;
while (g_io_channel_read_line_string(channel, line, &term_pos,
&error) == G_IO_STATUS_NORMAL) {
char* word = g_strdup(line->str);
word[term_pos] = '\0';
g_ptr_array_add(dict, word);
}
g_string_free(line, TRUE);
g_io_channel_unref(channel);
if (error != NULL) {
g_propagate_error(error_ptr, error);
g_ptr_array_free(dict, TRUE);
return NULL;
}
g_ptr_array_sort(dict, string_compare);
return dict;
}
void rotate(char* str, size_t len) {
char c = str[0];
memmove(str, str + 1, len - 1);
str[len - 1] = c;
}
char* dictionary_search(const GPtrArray* dictionary, const char* word) {
char** result = bsearch(&word, dictionary->pdata, dictionary->len,
sizeof(char*), string_compare);
return result != NULL ? *result : NULL;
}
void find_teacup_words(GPtrArray* dictionary) {
GHashTable* found = g_hash_table_new(g_str_hash, g_str_equal);
GPtrArray* teacup_words = g_ptr_array_new();
GString* temp = g_string_sized_new(8);
for (size_t i = 0, n = dictionary->len; i < n; ++i) {
char* word = g_ptr_array_index(dictionary, i);
size_t len = strlen(word);
if (len < 3 || g_hash_table_contains(found, word))
continue;
g_ptr_array_set_size(teacup_words, 0);
g_string_assign(temp, word);
bool is_teacup_word = true;
for (size_t i = 0; i < len - 1; ++i) {
rotate(temp->str, len);
char* w = dictionary_search(dictionary, temp->str);
if (w == NULL) {
is_teacup_word = false;
break;
}
if (strcmp(word, w) != 0 && !g_ptr_array_find(teacup_words, w, NULL))
g_ptr_array_add(teacup_words, w);
}
if (is_teacup_word && teacup_words->len > 0) {
printf("%s", word);
g_hash_table_add(found, word);
for (size_t i = 0; i < teacup_words->len; ++i) {
char* teacup_word = g_ptr_array_index(teacup_words, i);
printf(" %s", teacup_word);
g_hash_table_add(found, teacup_word);
}
printf("\n");
}
}
g_string_free(temp, TRUE);
g_ptr_array_free(teacup_words, TRUE);
g_hash_table_destroy(found);
}
int main(int argc, char** argv) {
if (argc != 2) {
fprintf(stderr, "usage: %s dictionary\n", argv[0]);
return EXIT_FAILURE;
}
GError* error = NULL;
GPtrArray* dictionary = load_dictionary(argv[1], &error);
if (dictionary == NULL) {
if (error != NULL) {
fprintf(stderr, "Cannot load dictionary file '%s': %s\n",
argv[1], error->message);
g_error_free(error);
}
return EXIT_FAILURE;
}
find_teacup_words(dictionary);
g_ptr_array_free(dictionary, TRUE);
return EXIT_SUCCESS;
}
- Output:
With unixdict.txt:
apt pta tap
arc rca car
ate tea eat
With wordlist.10000:
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
C++
#include <algorithm>
#include <fstream>
#include <iostream>
#include <set>
#include <string>
#include <vector>
// filename is expected to contain one lowercase word per line
std::set<std::string> load_dictionary(const std::string& filename) {
std::ifstream in(filename);
if (!in)
throw std::runtime_error("Cannot open file " + filename);
std::set<std::string> words;
std::string word;
while (getline(in, word))
words.insert(word);
return words;
}
void find_teacup_words(const std::set<std::string>& words) {
std::vector<std::string> teacup_words;
std::set<std::string> found;
for (auto w = words.begin(); w != words.end(); ++w) {
std::string word = *w;
size_t len = word.size();
if (len < 3 || found.find(word) != found.end())
continue;
teacup_words.clear();
teacup_words.push_back(word);
for (size_t i = 0; i + 1 < len; ++i) {
std::rotate(word.begin(), word.begin() + 1, word.end());
if (word == *w || words.find(word) == words.end())
break;
teacup_words.push_back(word);
}
if (teacup_words.size() == len) {
found.insert(teacup_words.begin(), teacup_words.end());
std::cout << teacup_words[0];
for (size_t i = 1; i < len; ++i)
std::cout << ' ' << teacup_words[i];
std::cout << '\n';
}
}
}
int main(int argc, char** argv) {
if (argc != 2) {
std::cerr << "usage: " << argv[0] << " dictionary\n";
return EXIT_FAILURE;
}
try {
find_teacup_words(load_dictionary(argv[1]));
} catch (const std::exception& ex) {
std::cerr << ex.what() << '\n';
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
- Output:
With unixdict.txt:
apt pta tap
arc rca car
ate tea eat
With wordlist.10000:
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
EasyLang
repeat
s$ = input
until s$ = ""
if len s$ > 2
w$[] &= s$
.
.
func search s$ .
max = len w$[] + 1
while min + 1 < max
mid = min + (max - min) div 2
h = strcmp w$[mid] s$
if h = 0
return 1
elif h < 0
min = mid
else
max = mid
.
.
return 0
.
for w$ in w$[]
w0$ = w$
out$ = w$
l = len w$
for i to l - 1
w$ = substr w$ 2 (l - 1) & substr w$ 1 1
if strcmp w$ w0$ <= 0
break 1
.
if search w$ <> 1
break 1
.
out$ &= " -> " & w$
.
if i = l
print out$
.
.
# the content of unixdict.txt
input_data
10th
ate
eat
tea
F#
// Teacup rim text. Nigel Galloway: August 7th., 2019
let N=System.IO.File.ReadAllLines("dict.txt")|>Array.filter(fun n->String.length n=3 && Seq.length(Seq.distinct n)>1)|>Set.ofArray
let fG z=Set.map(fun n->System.String(Array.ofSeq (Seq.permute(fun g->(g+z)%3)n))) N
Set.intersectMany [N;fG 1;fG 2]|>Seq.distinctBy(Seq.sort>>Array.ofSeq>>System.String)|>Seq.iter(printfn "%s")
- Output:
aim
arc
asp
ate
ips
Factor
USING: combinators.short-circuit fry grouping hash-sets
http.client kernel math prettyprint sequences sequences.extras
sets sorting splitting ;
"https://www.mit.edu/~ecprice/wordlist.10000" http-get nip
"\n" split [ { [ length 3 < ] [ all-equal? ] } 1|| ] reject
[ [ all-rotations ] map ] [ >hash-set ] bi
'[ [ _ in? ] all? ] filter [ natural-sort ] map members .
- Output:
{ { "aim" "ima" "mai" } { "arc" "car" "rca" } { "asp" "pas" "spa" } { "ate" "eat" "tea" } { "ips" "psi" "sip" } }
FreeBASIC
Dim As String dict, word, domain, result, uniqueWords, w1, w2, w3
Dim As Integer i
Dim As String filename = "unixdict.txt"
'Dim As String filename = "mit.10000.words.txt"
Function REPLACE(original As String, find As String, replaceWith As String) As String
Dim As Integer posic
Dim As String result = original
Do
posic = Instr(result, find)
If posic = 0 Then Exit Do
result = Left(result, posic - 1) & replaceWith & Mid(result, posic + Len(find))
Loop
Return result
End Function
Function isUnique(word As String) As Boolean
Dim As Integer i, c, chars(255)
For i = 1 To Len(word)
c = Asc(Mid(word, i, 1))
If chars(c) Then Return False
chars(c) = 1
Next
Return True
End Function
Function TALLY (Byval s As String, Byval f As String) As Integer
Return (Len(s) - Len(REPLACE(s, f, ""))) / Len(f)
End Function
Open filename For Input As #1
dict = Input(Lof(1), #1)
Close #1
For i = 1 To Len(dict)
If Mid(dict, i, 1) = Chr(10) Then
If Len(word) = 3 And isUnique(word) Then domain &= word & Chr(10)
word = ""
Else
word &= Mid(dict, i, 1)
End If
Next i
For i = 1 To Len(domain)
If Mid(domain, i, 1) = Chr(10) Then
w2 = Right(w1, 2) & Left(w1, 1)
w3 = Right(w2, 2) & Left(w2, 1)
If TALLY(domain, w2 & Chr(10)) > 0 _
And TALLY(domain, w3 & Chr(10)) > 0 _
And TALLY(uniqueWords, w1 & Chr(10)) = 0 Then
result &= w1 & " " & w2 & " " & w3 & Chr(10)
uniqueWords &= w1 & Chr(10) & w2 & Chr(10) & w3 & Chr(10)
End If
w1 = ""
Else
w1 &= Mid(domain, i, 1)
End If
Next i
Print result
Print "Total words:"; TALLY(dict, Chr(10)); ", and"; TALLY(result, Chr(10)); " are circular."
Sleep
- Output:
using unixdict.txt
apt pta tap
arc rca car
ate tea eat
Total words: 25104, and 3 are circular.
using mit.10000.words.txt
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
Total words: 10000, and 5 are circular.
Go
package main
import (
"bufio"
"fmt"
"log"
"os"
"sort"
"strings"
)
func check(err error) {
if err != nil {
log.Fatal(err)
}
}
func readWords(fileName string) []string {
file, err := os.Open(fileName)
check(err)
defer file.Close()
var words []string
scanner := bufio.NewScanner(file)
for scanner.Scan() {
word := strings.ToLower(strings.TrimSpace(scanner.Text()))
if len(word) >= 3 {
words = append(words, word)
}
}
check(scanner.Err())
return words
}
func rotate(runes []rune) {
first := runes[0]
copy(runes, runes[1:])
runes[len(runes)-1] = first
}
func main() {
dicts := []string{"mit_10000.txt", "unixdict.txt"} // local copies
for _, dict := range dicts {
fmt.Printf("Using %s:\n\n", dict)
words := readWords(dict)
n := len(words)
used := make(map[string]bool)
outer:
for _, word := range words {
runes := []rune(word)
variants := []string{word}
for i := 0; i < len(runes)-1; i++ {
rotate(runes)
word2 := string(runes)
if word == word2 || used[word2] {
continue outer
}
ix := sort.SearchStrings(words, word2)
if ix == n || words[ix] != word2 {
continue outer
}
variants = append(variants, word2)
}
for _, variant := range variants {
used[variant] = true
}
fmt.Println(variants)
}
fmt.Println()
}
}
- Output:
Using mit_10000.txt:

[aim ima mai]
[arc rca car]
[asp spa pas]
[ate tea eat]
[ips psi sip]

Using unixdict.txt:

[apt pta tap]
[arc rca car]
[ate tea eat]
Haskell
Using Data.Set
Circular words of more than 2 characters in a local copy of a word list.
import Data.List (groupBy, intercalate, sort, sortBy)
import qualified Data.Set as S
import Data.Ord (comparing)
import Data.Function (on)
main :: IO ()
main =
readFile "mitWords.txt" >>= (putStrLn . showGroups . circularWords . lines)
circularWords :: [String] -> [String]
circularWords ws =
let lexicon = S.fromList ws
in filter (isCircular lexicon) ws
isCircular :: S.Set String -> String -> Bool
isCircular lex w = 2 < length w && all (`S.member` lex) (rotations w)
rotations :: [a] -> [[a]]
rotations = fmap <$> rotated <*> (enumFromTo 0 . pred . length)
rotated :: [a] -> Int -> [a]
rotated [] _ = []
rotated xs n = zipWith const (drop n (cycle xs)) xs
showGroups :: [String] -> String
showGroups xs =
unlines $
intercalate " -> " . fmap snd <$>
filter
((1 <) . length)
(groupBy (on (==) fst) (sortBy (comparing fst) (((,) =<< sort) <$> xs)))
- Output:
arc -> car -> rca
ate -> eat -> tea
aim -> ima -> mai
asp -> pas -> spa
ips -> psi -> sip
Filtering anagrams
Or taking a different approach, we can avoid the use of Data.Set by obtaining the groups of anagrams (of more than two characters) in the lexicon, and filtering out a circular subset of these:
import Data.Function (on)
import Data.List (groupBy, intercalate, sort, sortOn)
import Data.Ord (comparing)
main :: IO ()
main =
readFile "mitWords.txt"
>>= ( putStrLn
. unlines
. fmap (intercalate " -> ")
. (circularOnly =<<)
. anagrams
. lines
)
anagrams :: [String] -> [[String]]
anagrams ws =
let harvest group px
| px = [fmap snd group]
| otherwise = []
in groupBy
(on (==) fst)
(sortOn fst (((,) =<< sort) <$> ws))
>>= (harvest <*> ((> 2) . length))
circularOnly :: [String] -> [[String]]
circularOnly ws
| (length h - 1) > length rs = []
| otherwise = [h : rs]
where
h = head ws
rs = filter (isRotation h) (tail ws)
isRotation :: String -> String -> Bool
isRotation xs ys =
xs
/= until
( (||)
. (ys ==)
<*> (xs ==)
)
rotated
(rotated xs)
rotated :: [a] -> [a]
rotated [] = []
rotated (x : xs) = xs <> [x]
- Output:
arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip
J
>@{.@> (#~ (=&#>@{.)@> * 2 < #@>)(</.~ {.@/:~@(|."0 1~ i.@#)L:0)cutLF fread'unixdict.txt'
apt
arc
ate
In other words, group words by their canonical rotation (from all rotations: the earliest, alphabetically), select groups with at least three different words, where the word count matches the letter count, then extract the first word from each group.
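That grouping idea can be sketched in Python (a hypothetical sketch of the same technique, not a translation of the J): key each word on its alphabetically smallest rotation, then keep groups whose distinct-word count equals the word length.

```python
from collections import defaultdict

def canonical(word):
    # Alphabetically smallest rotation; shared by all rotations of a word.
    return min(word[i:] + word[:i] for i in range(len(word)))

def teacup_groups(words):
    # Group words of length > 2 by canonical rotation, then keep groups
    # whose distinct-word count equals the word length, i.e. every
    # rotation is a different word present in the list.
    groups = defaultdict(set)
    for w in words:
        if len(w) > 2:
            groups[canonical(w)].add(w)
    return sorted(sorted(g) for g in groups.values()
                  if len(g) == len(next(iter(g))))

print(teacup_groups(["ate", "eat", "tea", "eta", "tae", "aet", "ooo"]))
# -> [['aet', 'eta', 'tae'], ['ate', 'eat', 'tea']]
```

Because every rotation of a word has the same canonical rotation, a single dictionary pass collects each candidate set exactly once, with no need to suppress permutations afterwards.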
Java
import java.io.*;
import java.util.*;
public class Teacup {
public static void main(String[] args) {
if (args.length != 1) {
System.err.println("usage: java Teacup dictionary");
System.exit(1);
}
try {
findTeacupWords(loadDictionary(args[0]));
} catch (Exception ex) {
System.err.println(ex.getMessage());
}
}
// The file is expected to contain one lowercase word per line
private static Set<String> loadDictionary(String fileName) throws IOException {
Set<String> words = new TreeSet<>();
try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
String word;
while ((word = reader.readLine()) != null)
words.add(word);
return words;
}
}
private static void findTeacupWords(Set<String> words) {
List<String> teacupWords = new ArrayList<>();
Set<String> found = new HashSet<>();
for (String word : words) {
int len = word.length();
if (len < 3 || found.contains(word))
continue;
teacupWords.clear();
teacupWords.add(word);
char[] chars = word.toCharArray();
for (int i = 0; i < len - 1; ++i) {
String rotated = new String(rotate(chars));
if (rotated.equals(word) || !words.contains(rotated))
break;
teacupWords.add(rotated);
}
if (teacupWords.size() == len) {
found.addAll(teacupWords);
System.out.print(word);
for (int i = 1; i < len; ++i)
System.out.print(" " + teacupWords.get(i));
System.out.println();
}
}
}
private static char[] rotate(char[] ch) {
char c = ch[0];
System.arraycopy(ch, 1, ch, 0, ch.length - 1);
ch[ch.length - 1] = c;
return ch;
}
}
- Output:
With unixdict.txt:
apt pta tap
arc rca car
ate tea eat
With wordlist.10000:
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
JavaScript
Set() objects
Reading a local dictionary with the macOS JS for Automation library:
(() => {
'use strict';
// main :: IO ()
const main = () =>
showGroups(
circularWords(
// Local copy of:
// https://www.mit.edu/~ecprice/wordlist.10000
lines(readFile('~/mitWords.txt'))
)
);
// circularWords :: [String] -> [String]
const circularWords = ws =>
ws.filter(isCircular(new Set(ws)), ws);
// isCircular :: Set String -> String -> Bool
const isCircular = lexicon => w => {
const iLast = w.length - 1;
return 1 < iLast && until(
([i, bln, s]) => iLast < i || !bln,
([i, bln, s]) => [1 + i, lexicon.has(s), rotated(s)],
[0, true, rotated(w)]
)[1];
};
// DISPLAY --------------------------------------------
// showGroups :: [String] -> String
const showGroups = xs =>
unlines(map(
gp => map(snd, gp).join(' -> '),
groupBy(
(a, b) => fst(a) === fst(b),
sortBy(
comparing(fst),
map(x => Tuple(concat(sort(chars(x))), x),
xs
)
)
).filter(gp => 1 < gp.length)
));
// MAC OS JS FOR AUTOMATION ---------------------------
// readFile :: FilePath -> IO String
const readFile = fp => {
const
e = $(),
uw = ObjC.unwrap,
s = uw(
$.NSString.stringWithContentsOfFileEncodingError(
$(fp)
.stringByStandardizingPath,
$.NSUTF8StringEncoding,
e
)
);
return undefined !== s ? (
s
) : uw(e.localizedDescription);
};
// GENERIC FUNCTIONS ----------------------------------
// Tuple (,) :: a -> b -> (a, b)
const Tuple = (a, b) => ({
type: 'Tuple',
'0': a,
'1': b,
length: 2
});
// chars :: String -> [Char]
const chars = s => s.split('');
// comparing :: (a -> b) -> (a -> a -> Ordering)
const comparing = f =>
(x, y) => {
const
a = f(x),
b = f(y);
return a < b ? -1 : (a > b ? 1 : 0);
};
// concat :: [[a]] -> [a]
// concat :: [String] -> String
const concat = xs =>
0 < xs.length ? (() => {
const unit = 'string' !== typeof xs[0] ? (
[]
) : '';
return unit.concat.apply(unit, xs);
})() : [];
// fst :: (a, b) -> a
const fst = tpl => tpl[0];
// groupBy :: (a -> a -> Bool) -> [a] -> [[a]]
const groupBy = (f, xs) => {
const tpl = xs.slice(1)
.reduce((a, x) => {
const h = a[1].length > 0 ? a[1][0] : undefined;
return (undefined !== h) && f(h, x) ? (
Tuple(a[0], a[1].concat([x]))
) : Tuple(a[0].concat([a[1]]), [x]);
}, Tuple([], 0 < xs.length ? [xs[0]] : []));
return tpl[0].concat([tpl[1]]);
};
// lines :: String -> [String]
const lines = s => s.split(/[\r\n]/);
// map :: (a -> b) -> [a] -> [b]
const map = (f, xs) =>
(Array.isArray(xs) ? (
xs
) : xs.split('')).map(f);
// rotated :: String -> String
const rotated = xs =>
xs.slice(1) + xs[0];
// showLog :: a -> IO ()
const showLog = (...args) =>
console.log(
args
.map(JSON.stringify)
.join(' -> ')
);
// snd :: (a, b) -> b
const snd = tpl => tpl[1];
// sort :: Ord a => [a] -> [a]
const sort = xs => xs.slice()
.sort((a, b) => a < b ? -1 : (a > b ? 1 : 0));
// sortBy :: (a -> a -> Ordering) -> [a] -> [a]
const sortBy = (f, xs) =>
xs.slice()
.sort(f);
// unlines :: [String] -> String
const unlines = xs => xs.join('\n');
// until :: (a -> Bool) -> (a -> a) -> a -> a
const until = (p, f, x) => {
let v = x;
while (!p(v)) v = f(v);
return v;
};
// MAIN ---
return main();
})();
- Output:
arc -> car -> rca
ate -> eat -> tea
aim -> ima -> mai
asp -> pas -> spa
ips -> psi -> sip
Anagram filtering
Reading a local dictionary with the macOS JS for Automation library:
(() => {
'use strict';
// main :: IO ()
const main = () =>
anagrams(lines(readFile('~/mitWords.txt')))
.flatMap(circularOnly)
.map(xs => xs.join(' -> '))
.join('\n')
// anagrams :: [String] -> [[String]]
const anagrams = ws =>
groupBy(
on(eq, fst),
sortBy(
comparing(fst),
ws.map(w => Tuple(sort(chars(w)).join(''), w))
)
).flatMap(
gp => 2 < gp.length ? [
gp.map(snd)
] : []
)
// circularOnly :: [String] -> [[String]]
const circularOnly = ws => {
const h = ws[0];
return ws.length < h.length ? (
[]
) : (() => {
const rs = rotations(h);
return rs.every(r => ws.includes(r)) ? (
[rs]
) : [];
})();
};
// rotations :: String -> [String]
const rotations = s =>
takeIterate(s.length, rotated, s)
// rotated :: [a] -> [a]
const rotated = xs => xs.slice(1).concat(xs[0]);
// GENERIC FUNCTIONS ----------------------------
// Tuple (,) :: a -> b -> (a, b)
const Tuple = (a, b) => ({
type: 'Tuple',
'0': a,
'1': b,
length: 2
});
// chars :: String -> [Char]
const chars = s => s.split('');
// comparing :: (a -> b) -> (a -> a -> Ordering)
const comparing = f =>
(x, y) => {
const
a = f(x),
b = f(y);
return a < b ? -1 : (a > b ? 1 : 0);
};
// eq (==) :: Eq a => a -> a -> Bool
const eq = (a, b) => a === b
// fst :: (a, b) -> a
const fst = tpl => tpl[0];
// groupBy :: (a -> a -> Bool) -> [a] -> [[a]]
const groupBy = (f, xs) => {
const tpl = xs.slice(1)
.reduce((a, x) => {
const h = a[1].length > 0 ? a[1][0] : undefined;
return (undefined !== h) && f(h, x) ? (
Tuple(a[0], a[1].concat([x]))
) : Tuple(a[0].concat([a[1]]), [x]);
}, Tuple([], 0 < xs.length ? [xs[0]] : []));
return tpl[0].concat([tpl[1]]);
};
// lines :: String -> [String]
const lines = s => s.split(/[\r\n]/);
// mapAccumL :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
const mapAccumL = (f, acc, xs) =>
xs.reduce((a, x, i) => {
const pair = f(a[0], x, i);
return Tuple(pair[0], a[1].concat(pair[1]));
}, Tuple(acc, []));
// on :: (b -> b -> c) -> (a -> b) -> a -> a -> c
const on = (f, g) => (a, b) => f(g(a), g(b));
// readFile :: FilePath -> IO String
const readFile = fp => {
const
e = $(),
uw = ObjC.unwrap,
s = uw(
$.NSString.stringWithContentsOfFileEncodingError(
$(fp)
.stringByStandardizingPath,
$.NSUTF8StringEncoding,
e
)
);
return undefined !== s ? (
s
) : uw(e.localizedDescription);
};
// snd :: (a, b) -> b
const snd = tpl => tpl[1];
// sort :: Ord a => [a] -> [a]
const sort = xs => xs.slice()
.sort((a, b) => a < b ? -1 : (a > b ? 1 : 0));
// sortBy :: (a -> a -> Ordering) -> [a] -> [a]
const sortBy = (f, xs) =>
xs.slice()
.sort(f);
// takeIterate :: Int -> (a -> a) -> a -> [a]
const takeIterate = (n, f, x) =>
snd(mapAccumL((a, _, i) => {
const v = 0 !== i ? f(a) : x;
return [v, v];
}, x, Array.from({
length: n
})));
// MAIN ---
return main();
})();
- Output:
arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip
jq
Works with gojq, the Go implementation of jq (*)
(*) To run the program below using gojq, change `keys_unsorted` to `keys`; this slows it down a lot.
# Output: an array of the words when read around the rim
def read_teacup:
. as $in
| [range(0; length) | $in[.:] + $in[:.] ];
# Boolean
def is_teacup_word($dict):
. as $in
| all( range(1; length); . as $i | $dict[ $in[$i:] + $in[:$i] ]) ;
# Output: a stream of the eligible teacup words
def teacup_words:
def same_letters:
explode
| .[0] as $first
| all( .[1:][]; . == $first);
# Only consider one word in a teacup cycle
def consider: explode | .[0] == min;
# Create the dictionary
reduce (inputs
| select(length>2 and (same_letters|not))) as $w ( {};
.[$w]=true )
| . as $dict
| keys[]
| select(consider and is_teacup_word($dict)) ;
# The task:
teacup_words
| read_teacup
- Output:
Invocation example: jq -nRc -f teacup-rim.jq unixdict.txt
["apt","pta","tap"]
["arc","rca","car"]
["ate","tea","eat"]
Julia
Using the MIT 10000 word list, and excluding words of fewer than three letters, to reduce output length.
using HTTP
rotate(s, n) = String(circshift(Vector{UInt8}(s), n))
isliketea(w, d) = (n = length(w); n > 2 && any(c -> c != w[1], w) &&
all(i -> haskey(d, rotate(w, i)), 1:n-1))
function getteawords(listuri)
req = HTTP.request("GET", listuri)
wdict = Dict{String, Int}((lowercase(string(x)), 1) for x in split(String(req.body), r"\s+"))
sort(unique([sort([rotate(word, i) for i in 1:length(word)])
for word in collect(keys(wdict)) if isliketea(word, wdict)]))
end
foreach(println, getteawords("https://www.mit.edu/~ecprice/wordlist.10000"))
- Output:
["aim", "ima", "mai"]
["arc", "car", "rca"]
["asp", "pas", "spa"]
["ate", "eat", "tea"]
["ips", "psi", "sip"]
Lychen
Lychen is V8 JavaScript wrapped in C#, exposing C# into JavaScript.
The code below downloads unixdict.txt; the MIT word list https://www.mit.edu/~ecprice/wordlist.10000 (as in the Julia example) can be substituted.
const wc = new CS.System.Net.WebClient();
const lines = wc.DownloadString("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt");
const words = lines.split(/\n/g);
const collection = {};
words.filter(word => word.length > 2).forEach(word => {
let allok = true;
let newword = word;
for (let i = 0; i < word.length - 1; i++) {
newword = newword.substr(1) + newword.substr(0, 1);
if (!words.includes(newword)) {
allok = false;
break;
}
}
if (allok) {
const key = word.split("").sort().join("");
if (!collection[key]) {
collection[key] = [word];
} else {
if (!collection[key].includes(word)) {
collection[key].push(word);
}
}
}
});
Object.keys(collection)
.filter(key => collection[key].length > 1)
.forEach(key => console.log("%s", collection[key].join(", ")));
- Output:
apt, pta, tap
arc, car, rca
ate, eat, tea
Mathematica /Wolfram Language
ClearAll[Teacuppable]
TeacuppableHelper[set_List] := Module[{f, s},
f = First[set];
s = StringRotateLeft[f, #] & /@ Range[Length[set]];
Sort[s] == Sort[set]
]
Teacuppable[set_List] := Module[{ss, l},
l = StringLength[First[set]];
ss = Subsets[set, {l}];
Select[ss, TeacuppableHelper]
]
s = Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt", "String"];
s //= StringSplit[#, "\n"] &;
s //= Select[StringLength /* GreaterThan[2]];
s //= Map[ToLowerCase];
s //= Map[{#, Sort[Characters[#]]} &];
s //= GatherBy[#, Last] &;
s //= Select[Length /* GreaterEqualThan[2]];
s = s[[All, All, 1]];
s //= Select[StringLength[First[#]] <= Length[#] &];
Flatten[Teacuppable /@ s, 1]
- Output:
{{"apt", "pta", "tap"}, {"arc", "car", "rca"}, {"ate", "eat", "tea"}}
Nim
import sequtils, sets, sugar
let words = collect(initHashSet, for word in "unixdict.txt".lines: {word})
proc rotate(s: var string) =
let first = s[0]
for i in 1..s.high: s[i - 1] = s[i]
s[^1] = first
var result: seq[string]
for word in "unixdict.txt".lines:
if word.len >= 3:
block checkWord:
var w = word
for _ in 1..w.len:
w.rotate()
if w notin words or w in result:
# Not present in dictionary or already encountered.
break checkWord
if word.anyIt(it != word[0]):
# More then one letter.
result.add word
for word in result:
var w = word
stdout.write w
for _ in 2..w.len:
w.rotate()
stdout.write " → ", w
echo()
- Output:
apt → pta → tap
arc → rca → car
ate → tea → eat
PascalABC.NET
// Teacup rim text. Nigel Galloway: September 26th., 2024
##
function fG(n:string;g:integer):sequence of string;
begin
yield n;
foreach var i in 1..g-1 do begin
n:=n.Cycle.Take(g+1).Skip(1).JoinIntoString;
yield n;
end;
end;
function fN(g:string):sequence of string;
begin
foreach var n in System.IO.File.ReadLines(g) do if n.Length>2 then yield n;
end;
function fI(n:HashSet<string>;g:integer):sequence of sequence of string;
begin
while n.Count>0 do begin
var c:=fG(n.First,g);
if (c.ElementAt(0)<>c.ElementAt(1)) and c.All(g->n.Contains(g)) then yield c;
n:=n-c.ToHashSet;
end;
end;
foreach var n in fN('unixdict.txt').GroupBy(n->n.length) do begin
var g:= fI(n.ToHashSet,n.First.Length);
if g.Count>0 then println(g);
end;
- Output:
[[apt,pta,tap],[arc,rca,car],[ate,tea,eat]]
And using words_alpha.txt
foreach var n in fN('words_alpha.txt').GroupBy(n->n.length) do begin
var g:= fI(n.ToHashSet,n.First.Length);
if g.Count>0 then println(g);
end;
- Output:
[[aas,asa,saa],[abr,bra,rab],[adc,dca,cad],[ade,dea,ead],[ado,doa,oad],[aet,eta,tae],[agr,gra,rag],[ail,ila,lai],[ait,ita,tai],[ako,koa,oak],[alc,lca,cal],[ame,mea,eam],[amy,mya,yam],[ane,nea,ean],[app,ppa,pap],[apt,pta,tap],[are,rea,ear],[ary,rya,yar],[ash,sha,has],[asp,spa,pas],[ast,sta,tas],[ate,tea,eat],[aus,usa,sau],[blo,lob,obl],[chi,hic,ich],[cho,hoc,och],[dzo,zod,odz],[eer,ere,ree],[eir,ire,rei],[eon,one,neo],[eth,the,het],[ety,tye,yet],[foo,oof,ofo],[fou,ouf,ufo],[hip,iph,phi],[his,ish,shi],[hoo,ooh,oho],[ipl,pli,lip],[ips,psi,sip],[its,tsi,sit],[oot,oto,too],[pts,tsp,spt]]
[[adad,dada,adad,dada],[agag,gaga,agag,gaga],[anan,nana,anan,nana],[arar,rara,arar,rara],[doli,olid,lido,idol],[isis,sisi,isis,sisi]]
Perl
use strict;
use warnings;
use feature 'say';
use List::Util qw(uniqstr any);
my(%words,@teacups,%seen);
open my $fh, '<', 'ref/wordlist.10000';
while (<$fh>) {
chomp(my $w = uc $_);
next if length $w < 3;
push @{$words{join '', sort split //, $w}}, $w;}
for my $these (values %words) {
next if @$these < 3;
MAYBE: for (@$these) {
my $maybe = $_;
next if $seen{$_};
my @print;
for my $i (0 .. length $maybe) {
if (any { $maybe eq $_ } @$these) {
push @print, $maybe;
$maybe = substr($maybe,1) . substr($maybe,0,1)
} else {
@print = () and next MAYBE
}
}
if (@print) {
push @teacups, [@print];
$seen{$_}++ for @print;
}
}
}
say join ', ', uniqstr @$_ for sort @teacups;
- Output:
ARC, RCA, CAR
ATE, TEA, EAT
AIM, IMA, MAI
ASP, SPA, PAS
IPS, PSI, SIP
Phix
Filters anagram lists
procedure filter_set(sequence anagrams)
-- anagrams is a (small) set of words that are all anagrams of each other
-- for example: {"angel","angle","galen","glean","lange"}
-- print any set(s) for which every rotation is also present (marking as
-- you go to prevent the same set appearing with each word being first)
    sequence used = repeat(false,length(anagrams))
    for i=1 to length(anagrams) do
        if not used[i] then
            used[i] = true
            string word = anagrams[i]
            sequence res = {word}
            for r=2 to length(word) do
                word = word[2..$]&word[1]
                integer k = find(word,anagrams)
                if k=0 then res = {} exit end if
                if not find(word,res) then res = append(res,word) end if
                used[k] = true
            end for
            if length(res) then ?res end if
        end if
    end for
end procedure

procedure teacup(string filename, integer minlen=3, bool allow_mono=false)
    sequence letters,       -- a sorted word, eg "ate" -> "aet".
             words = {},    -- in eg {{"aet","ate"},...} form
             anagrams = {}, -- a set with same letters
             last = ""      -- (for building such sets)
    object word
    printf(1,"using %s",filename)
    integer fn = open(filename,"r")
    if fn=-1 then crash(filename&" not found") end if
    while 1 do
        word = lower(trim(gets(fn)))
        if atom(word) then exit end if
        if length(word)>=minlen then
            letters = sort(word)
            words = append(words, {letters, word})
        end if
    end while
    close(fn)
    printf(1,", %d words read\n",length(words))
    if length(words)!=0 then
        words = sort(words) -- group by anagram
        for i=1 to length(words) do
            {letters,word} = words[i]
            if letters=last then
                anagrams = append(anagrams,word)
            else
                if allow_mono or length(anagrams)>=length(last) then
                    filter_set(anagrams)
                end if
                last = letters
                anagrams = {word}
            end if
        end for
        if allow_mono or length(anagrams)>=length(last) then
            filter_set(anagrams)
        end if
    end if
end procedure

teacup(join_path({"demo","unixdict.txt"}))
-- These match output from other entries:
--teacup(join_path({"demo","unixdict.txt"}),allow_mono:=true)
--teacup(join_path({"demo","rosetta","mit.wordlist.10000.txt"}))
--teacup(join_path({"demo","rosetta","words.txt"}),4,true)
-- Note that allow_mono is needed to display eg {"agag","gaga"}
- Output:
using demo\unixdict.txt, 24948 words read
{"arc","rca","car"}
{"ate","tea","eat"}
{"apt","pta","tap"}
PicoLisp
(de rotw (W)
(let W (chop W)
(unless (or (apply = W) (not (cddr W)))
(make
(do (length W)
(link (pack (copy W)))
(rot W) ) ) ) ) )
(off D)
(put 'D 'v (cons))
(mapc
'((W)
(idx 'D (cons (hash W) W) T) )
(setq Words
(make (in "wordlist.10000" (while (line T) (link @)))) ) )
(mapc
println
(extract
'((W)
(let? Lst (rotw W)
(when
(and
(fully
'((L) (idx 'D (cons (hash L) L)))
Lst )
(not
(member (car Lst) (car (get 'D 'v))) ) )
(mapc
'((L) (push (get 'D 'v) L))
Lst )
Lst ) ) )
Words ) )
- Output:
("aim" "mai" "ima") ("arc" "car" "rca") ("asp" "pas" "spa") ("ate" "eat" "tea") ("ips" "sip" "psi")
PureBasic
DataSection
dname:
Data.s "./Data/unixdict.txt"
Data.s "./Data/wordlist.10000.txt"
Data.s ""
EndDataSection
EnableExplicit
Dim c.s{1}(2)
Define.s txt, bset, res, dn
Define.i i,q, cw
Restore dname : Read.s dn
While OpenConsole() And ReadFile(0,dn)
While Not Eof(0)
cw+1
txt=ReadString(0)
If Len(txt)=3 : bset+txt+";" : EndIf
Wend
CloseFile(0)
For i=1 To CountString(bset,";")
PokeS(c(),StringField(bset,i,";"))
If FindString(res,c(0)+c(1)+c(2)) : Continue : EndIf
If c(0)=c(1) Or c(1)=c(2) Or c(0)=c(2) : Continue : EndIf
If FindString(bset,c(1)+c(2)+c(0)) And FindString(bset,c(2)+c(0)+c(1))
res+c(0)+c(1)+c(2)+~"\t"+c(1)+c(2)+c(0)+~"\t"+c(2)+c(0)+c(1)+~"\n"
EndIf
Next
PrintN(res+Str(cw)+" words, "+Str(CountString(res,~"\n"))+" circular") : Input()
bset="" : res="" : cw=0
Read.s dn
Wend
- Output:
apt	pta	tap
arc	rca	car
ate	tea	eat
25104 words, 3 circular
aim	ima	mai
arc	rca	car
asp	spa	pas
ate	tea	eat
ips	psi	sip
10000 words, 5 circular
Python
Functional
Composing generic functions, and considering only anagram groups.
'''Teacup rim text'''
from itertools import chain, groupby
from os.path import expanduser
from functools import reduce
# main :: IO ()
def main():
'''Circular anagram groups, of more than one word,
and containing words of length > 2, found in:
https://www.mit.edu/~ecprice/wordlist.10000
'''
print('\n'.join(
concatMap(circularGroup)(
anagrams(3)(
# Reading from a local copy.
lines(readFile('~/mitWords.txt'))
)
)
))
# anagrams :: Int -> [String] -> [[String]]
def anagrams(n):
'''Groups of anagrams, of minimum group size n,
found in the given word list.
'''
def go(ws):
def f(xs):
return [
[snd(x) for x in xs]
] if n <= len(xs) >= len(xs[0][0]) else []
return concatMap(f)(groupBy(fst)(sorted(
[(''.join(sorted(w)), w) for w in ws],
key=fst
)))
return go
# circularGroup :: [String] -> [String]
def circularGroup(ws):
'''Either an empty list, or a list containing
a string showing any circular subset found in ws.
'''
lex = set(ws)
iLast = len(ws) - 1
# If the set contains one word that is circular,
# then it must contain all of them.
(i, blnCircular) = until(
lambda tpl: tpl[1] or (tpl[0] > iLast)
)(
lambda tpl: (1 + tpl[0], isCircular(lex)(ws[tpl[0]]))
)(
(0, False)
)
return [' -> '.join(allRotations(ws[i]))] if blnCircular else []
# isCircular :: Set String -> String -> Bool
def isCircular(lexicon):
'''True if all of a word's rotations
are found in the given lexicon.
'''
def go(w):
def f(tpl):
(i, _, x) = tpl
return (1 + i, x in lexicon, rotated(x))
iLast = len(w) - 1
return until(
lambda tpl: iLast < tpl[0] or (not tpl[1])
)(f)(
(0, True, rotated(w))
)[1]
return go
# allRotations :: String -> [String]
def allRotations(w):
'''All rotations of the string w.'''
return takeIterate(len(w) - 1)(
rotated
)(w)
# GENERIC -------------------------------------------------
# concatMap :: (a -> [b]) -> [a] -> [b]
def concatMap(f):
'''A concatenated list over which a function has been mapped.
The list monad can be derived by using a function f which
wraps its output in a list,
(using an empty list to represent computational failure).
'''
def go(xs):
return chain.from_iterable(map(f, xs))
return go
# fst :: (a, b) -> a
def fst(tpl):
'''First member of a pair.'''
return tpl[0]
# groupBy :: (a -> b) -> [a] -> [[a]]
def groupBy(f):
'''The elements of xs grouped,
preserving order, by equality
in terms of the key function f.
'''
def go(xs):
return [
list(x[1]) for x in groupby(xs, key=f)
]
return go
# lines :: String -> [String]
def lines(s):
'''A list of strings,
(containing no newline characters)
derived from a single new-line delimited string.
'''
return s.splitlines()
# mapAccumL :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
def mapAccumL(f):
'''A tuple of an accumulation and a list derived by a
combined map and fold,
with accumulation from left to right.
'''
def go(a, x):
tpl = f(a[0], x)
return (tpl[0], a[1] + [tpl[1]])
return lambda acc: lambda xs: (
reduce(go, xs, (acc, []))
)
# readFile :: FilePath -> IO String
def readFile(fp):
'''The contents of any file at the path
derived by expanding any ~ in fp.
'''
with open(expanduser(fp), 'r', encoding='utf-8') as f:
return f.read()
# rotated :: String -> String
def rotated(s):
'''A string rotated 1 character to the right.'''
return s[1:] + s[0]
# snd :: (a, b) -> b
def snd(tpl):
'''Second member of a pair.'''
return tpl[1]
# takeIterate :: Int -> (a -> a) -> a -> [a]
def takeIterate(n):
'''Each value of n iterations of f
over a start value of x.
'''
def go(f):
def g(x):
def h(a, i):
v = f(a) if i else x
return (v, v)
return mapAccumL(h)(x)(
range(0, 1 + n)
)[1]
return g
return go
# until :: (a -> Bool) -> (a -> a) -> a -> a
def until(p):
'''The result of repeatedly applying f until p holds.
The initial seed value is x.
'''
def go(f):
def g(x):
v = x
while not p(v):
v = f(v)
return v
return g
return go
# MAIN ---
if __name__ == '__main__':
main()
- Output:
arc -> rca -> car
ate -> tea -> eat
aim -> ima -> mai
asp -> spa -> pas
ips -> psi -> sip
Raku
(formerly Perl 6)
There doesn't seem to be any restriction that the word needs to consist only of lowercase letters, so words of any case are included. Since the example code specifically shows the example words (TEA, EAT, ATE) in uppercase, I elected to uppercase the found words.
As the specs keep changing, this version will accept ANY text file as its dictionary and accepts parameters to configure the minimum number of characters in a word to consider and whether to allow mono-character words.
Defaults to unixdict.txt, minimum 3 characters and mono-character 'words' disallowed. Feed a file name to use a different word list, an integer to --min-chars and/or a truthy value to --mono to allow mono-chars.
my %*SUB-MAIN-OPTS = :named-anywhere;
unit sub MAIN ( $dict = 'unixdict.txt', :$min-chars = 3, :$mono = False );
my %words;
$dict.IO.slurp.words.map: { .chars < $min-chars ?? (next) !! %words{.uc.comb.sort.join}.push: .uc };
my @teacups;
my %seen;
for %words.values -> @these {
next if !$mono && @these < 2;
MAYBE: for @these {
my $maybe = $_;
next if %seen{$_};
my @print;
for ^$maybe.chars {
if $maybe ∈ @these {
@print.push: $maybe;
$maybe = $maybe.comb.list.rotate.join;
} else {
@print = ();
next MAYBE
}
}
if @print.elems {
@teacups.push: @print;
%seen{$_}++ for @print;
}
}
}
say .unique.join(", ") for sort @teacups;
- Defaults:
Command line: raku teacup.p6
APT, PTA, TAP
ARC, RCA, CAR
ATE, TEA, EAT
- Allow mono-chars:
Command line: raku teacup.p6 --mono=1
AAA
APT, PTA, TAP
ARC, RCA, CAR
ATE, TEA, EAT
III
- Using a larger dictionary:
words.txt file from https://github.com/dwyl/english-words
Command line: raku teacup.p6 words.txt --min-chars=4 --mono=Allow
AAAA
AAAAAA
ADAD, DADA
ADAR, DARA, ARAD, RADA
AGAG, GAGA
ALIT, LITA, ITAL, TALI
AMAN, MANA, ANAM, NAMA
AMAR, MARA, ARAM, RAMA
AMEL, MELA, ELAM, LAME
AMEN, MENA, ENAM, NAME
AMOR, MORA, ORAM, RAMO
ANAN, NANA
ANIL, NILA, ILAN, LANI
ARAR, RARA
ARAS, RASA, ASAR, SARA
ARIS, RISA, ISAR, SARI
ASEL, SELA, ELAS, LASE
ASER, SERA, ERAS, RASE
DENI, ENID, NIDE, IDEN
DOLI, OLID, LIDO, IDOL
EGOR, GORE, OREG, REGO
ENOL, NOLE, OLEN, LENO
ESOP, SOPE, OPES, PESO
ISIS, SISI
MMMM
MORO, OROM, ROMO, OMOR
OOOO
REXX
All words that contained non-letter (Latin) characters (periods, decimal digits, minus signs,
underbars, or embedded blanks)
weren't considered as candidates for circular words.
Duplicate words (such as sop and SOP) are ignored (only the 2nd and subsequent occurrences are deleted).
All words in the dictionary are treated as caseless.
The dictionary wasn't assumed to be sorted in any way.
/*REXX pgm finds circular words (length>2), using a dictionary, suppress permutations.*/
parse arg iFID L . /*obtain optional arguments from the CL */
if iFID==''|iFID=="," then iFID='unixdict.txt' /*Not specified? Then use the default*/
if L==''| L=="," then L=3 /* " " " " " " */
word.=0
have.=0
do r=0 while lines(iFID)>0 /*read all lines (words) in dictionary.*/
parse upper value linein(iFID) with w . /*obtain a word from the dictionary. */
if length(w)>=L &, /*length must be L or more */
datatype(w,'U') &, /*Word must be all letters */
have.w=0 then do /*No duplicates */
z=word.0+1 /* number of eligible words */
word.0=z
word.z=w /*Word number z */
have.w=1 /*is available */
end /*r*/ /*dictionary need not be sorted. */
End
cw=0 /*the number of circular words (so far)*/
say 'There''re ' r ' entries (of all types) in the dictionary' iFID
say 'There''re ' word.0 ' words in the dictionary of at least length 'L
say
do j=1 To word.0 /* loop through words */
if word.j>'' then do /* word is available */
w=word.j /* base for circulation */
lw=length(w) /* number of letters */
cl=w
ci=1
do k=1 for lw-1 /*'circulate' the letters in the word. */
w=substr(w,2)left(w,1) /*move the first letter to the end. */
If have.w Then Do /*available word */
cl=cl','w /*add it to list */
ci=ci+1 /*list items */
have.w=0 /*no longer available */
End
Else
Leave
end /*k*/
If ci=lw Then Do /*complete list */
say 'circular word: ' cl /*display a circular word and variants.*/
cw=cw + 1 /*bump counter of circular words found.*/
End
End
end /*j*/
say
say cw ' circular words were found.' /*stick a fork in it, we're all done. */
- output when using the default inputs:
There're  25104  entries (of all types) in the dictionary unixdict.txt
There're  24819  words in the dictionary of at least length 3

circular word:  APT,PTA,TAP
circular word:  ARC,RCA,CAR
circular word:  ATE,TEA,EAT

3  circular words were found.
Ruby
"woordenlijst.txt" is a Dutch wordlist. It has 413125 words > 2 chars and takes about two minutes.
lists = ["unixdict.txt", "wordlist.10000", "woordenlijst.txt"]
lists.each do |list|
words = open(list).readlines( chomp: true).reject{|w| w.size < 3 }
grouped_by_size = words.group_by(&:size)
tea_words = words.filter_map do |word|
chars = word.chars
next unless chars.none?{|c| c < chars.first }
next if chars.uniq.size == 1
rotations = word.size.times.map {|i| chars.rotate(i).join }
rotations if rotations.all?{|rot| grouped_by_size[rot.size].include? rot }
end
puts "", list + ":"
tea_words.uniq(&:to_set).each{|ar| puts ar.join(", ") }
end
- Output:
unixdict.txt:
apt, pta, tap
arc, rca, car
ate, tea, eat

wordlist.10000:
aim, ima, mai
arc, rca, car
asp, spa, pas
ate, tea, eat
ips, psi, sip

woordenlijst.txt:
ast, sta, tas
een, ene, nee
eer, ere, ree
Rust
use std::collections::BTreeSet;
use std::collections::HashSet;
use std::fs::File;
use std::io::{self, BufRead};
use std::iter::FromIterator;
fn load_dictionary(filename: &str) -> std::io::Result<BTreeSet<String>> {
let file = File::open(filename)?;
let mut dict = BTreeSet::new();
for line in io::BufReader::new(file).lines() {
let word = line?;
dict.insert(word);
}
Ok(dict)
}
fn find_teacup_words(dict: &BTreeSet<String>) {
let mut teacup_words: Vec<&String> = Vec::new();
let mut found: HashSet<&String> = HashSet::new();
for word in dict {
let len = word.len();
if len < 3 || found.contains(word) {
continue;
}
teacup_words.clear();
let mut is_teacup_word = true;
let mut chars: Vec<char> = word.chars().collect();
for _ in 1..len {
chars.rotate_left(1);
if let Some(w) = dict.get(&String::from_iter(&chars)) {
if !w.eq(word) && !teacup_words.contains(&w) {
teacup_words.push(w);
}
} else {
is_teacup_word = false;
break;
}
}
if !is_teacup_word || teacup_words.is_empty() {
continue;
}
print!("{}", word);
found.insert(word);
for w in &teacup_words {
found.insert(w);
print!(" {}", w);
}
println!();
}
}
fn main() {
let args: Vec<String> = std::env::args().collect();
if args.len() != 2 {
eprintln!("Usage: teacup dictionary");
std::process::exit(1);
}
let dict = load_dictionary(&args[1]);
match dict {
Ok(dict) => find_teacup_words(&dict),
Err(error) => eprintln!("Cannot open file {}: {}", &args[1], error),
}
}
- Output:
With unixdict.txt:
apt pta tap
arc rca car
ate tea eat
With wordlist.10000:
aim ima mai
arc rca car
asp spa pas
ate tea eat
ips psi sip
Swift
import Foundation
func loadDictionary(_ path: String) throws -> Set<String> {
let contents = try String(contentsOfFile: path, encoding: String.Encoding.ascii)
return Set<String>(contents.components(separatedBy: "\n").filter{!$0.isEmpty})
}
func rotate<T>(_ array: inout [T]) {
guard array.count > 1 else {
return
}
let first = array[0]
array.replaceSubrange(0..<array.count-1, with: array[1...])
array[array.count - 1] = first
}
func findTeacupWords(_ dictionary: Set<String>) {
var teacupWords: [String] = []
var found = Set<String>()
for word in dictionary {
if word.count < 3 || found.contains(word) {
continue
}
teacupWords.removeAll()
var isTeacupWord = true
var chars = Array(word)
for _ in 1..<word.count {
rotate(&chars)
let w = String(chars)
if (!dictionary.contains(w)) {
isTeacupWord = false
break
}
if w != word && !teacupWords.contains(w) {
teacupWords.append(w)
}
}
if !isTeacupWord || teacupWords.isEmpty {
continue
}
print(word, terminator: "")
found.insert(word)
for w in teacupWords {
found.insert(w)
print(" \(w)", terminator: "")
}
print()
}
}
do {
let dictionary = try loadDictionary("unixdict.txt")
findTeacupWords(dictionary)
} catch {
print(error)
}
- Output:
car arc rca
eat ate tea
pta tap apt
Wren
import "io" for File
import "./str" for Str
import "./sort" for Find
var readWords = Fn.new { |fileName|
var dict = File.read(fileName).split("\n")
return dict.where { |w| w.count >= 3 }.toList
}
var dicts = ["mit10000.txt", "unixdict.txt"]
for (dict in dicts) {
System.print("Using %(dict):\n")
var words = readWords.call(dict)
var n = words.count
var used = {}
for (word in words) {
var outer = false
var variants = [word]
var word2 = word
for (i in 0...word.count-1) {
word2 = Str.lshift(word2)
if (word == word2 || used[word2]) {
outer = true
break
}
var ix = Find.first(words, word2)
if (ix == n || words[ix] != word2) {
outer = true
break
}
variants.add(word2)
}
if (!outer) {
for (variant in variants) used[variant] = true
System.print(variants)
}
}
System.print()
}
- Output:
Using mit10000.txt:

[aim, ima, mai]
[arc, rca, car]
[asp, spa, pas]
[ate, tea, eat]
[ips, psi, sip]

Using unixdict.txt:

[apt, pta, tap]
[arc, rca, car]
[ate, tea, eat]
zkl
// Limited to ASCII
// This is limited to the max items a Dictionary can hold
fcn teacut(wordFile){
words:=File(wordFile).pump(Dictionary().add.fp1(True),"strip");
seen :=Dictionary();
foreach word in (words.keys){
rots,w,sz := List(), word, word.len();
if(sz>2 and word.unique().len()>2 and not seen.holds(word)){
do(sz-1){
w=String(w[-1],w[0,-1]); // rotate one character
if(not words.holds(w)) continue(2); // not a word, skip these
rots.append(w); // I'd like to see all the rotations
}
println(rots.append(word).sort().concat(" "));
rots.pump(seen.add.fp1(True)); // we've seen these rotations
}
}
}
println("\nunixdict:"); teacut("unixdict.txt");
println("\nmit_wordlist_10000:"); teacut("mit_wordlist_10000.txt");
- Output:
unixdict:
apt pta tap
ate eat tea
arc car rca

mit_wordlist_10000:
asp pas spa
ips psi sip
ate eat tea
aim ima mai
arc car rca