
Change e letters to i in words

Change e letters to i in words is a draft programming task. It is not yet considered ready to be promoted to a complete task, for reasons that should be found on its talk page.
Task

Use the dictionary   unixdict.txt

Change letters   e   to   i   in words.

If the changed word is in the dictionary,   show it here on this page.

Any word shown should have a length   >  5.



Ada

with Ada.Text_Io;
with Ada.Strings.Fixed;
with Ada.Strings.Maps;
with Ada.Containers.Indefinite_Ordered_Maps;
 
procedure Change_E_To_I is
   use Ada.Text_Io;
   use Ada.Strings;

   Filename : constant String := "unixdict.txt";
   Mapping  : constant Maps.Character_Mapping :=
     Maps.To_Mapping ("Ee", "Ii");

   package Dictionaries is
     new Ada.Containers.Indefinite_Ordered_Maps
       (Key_Type => String,
        Element_Type => String);

   Dict  : Dictionaries.Map;
   File  : File_Type;
begin
   Open (File, In_File, Filename);
   while not End_Of_File (File) loop
      declare
         Word : constant String := Get_Line (File);
      begin
         Dict.Insert (Word, Word);
      end;
   end loop;
   Close (File);

   for Word of Dict loop
      declare
         Trans : constant String := Fixed.Translate (Word, Mapping);
      begin
         if Word /= Trans and Dict.Contains (Trans) and Word'Length >= 6 then
            Put (Word); Put (" -> "); Put (Trans); New_Line;
         end if;
      end;
   end loop;
end Change_E_To_I;
Output:
analyses -> analysis
atlantes -> atlantis
bellow -> billow
breton -> briton
clench -> clinch
convect -> convict
crises -> crisis
diagnoses -> diagnosis
enfant -> infant
enquiry -> inquiry
frances -> francis
galatea -> galatia
harden -> hardin
heckman -> hickman
inequity -> iniquity
inflect -> inflict
jacobean -> jacobian
marten -> martin
module -> moduli
pegging -> pigging
psychoses -> psychosis
rabbet -> rabbit
sterling -> stirling
synopses -> synopsis
vector -> victor
welles -> willis

ALGOL 68

# find words where replacing "e" with "i" results in another word    #
# use the associative array in the Associate array/iteration task #
PR read "aArray.a68" PR
# read the list of words and store the words in an associative array #
IF FILE input file;
STRING file name = "unixdict.txt";
open( input file, file name, stand in channel ) /= 0
THEN
# failed to open the file #
print( ( "Unable to open """ + file name + """", newline ) )
ELSE
# file opened OK #
BOOL at eof := FALSE;
# set the EOF handler for the file #
on logical file end( input file, ( REF FILE f )BOOL:
BEGIN
# note that we reached EOF on the #
# latest read #
at eof := TRUE;
# return TRUE so processing can continue #
TRUE
END
);
# build an associative array of the words #
REF AARRAY words := INIT LOC AARRAY;
WHILE STRING word;
get( input file, ( word, newline ) );
NOT at eof
DO
words // word := word
OD;
close( input file );
# find the words where replacing "e" with "i" is still a word #
# the words must be at least 6 characters long #
REF AAELEMENT e := FIRST words;
WHILE e ISNT nil element DO
IF STRING word = key OF e;
INT w len = ( UPB word + 1 ) - LWB word;
w len >= 6
THEN
# the word is at least 6 characters long #
[ LWB word : UPB word ]CHAR i word := word[ @ LWB word ];
FOR w pos FROM LWB i word TO UPB i word DO
IF i word[ w pos ] = "e" THEN i word[ w pos ] := "i" FI
OD;
IF i word /= word THEN
# replacing "e" with "i" resulted in a new word #
IF words CONTAINSKEY i word THEN
# the new word is still a word #
print( ( word ) );
FROM w len + 1 TO 18 DO print( ( " " ) ) OD;
print( ( "-> ", i word, newline ) )
FI
FI
FI;
e := NEXT words
OD
FI
Output:

Note: the associative array is not traversed in lexicographic order; the output here has been sorted for ease of comparison with the other samples.

analyses          -> analysis
atlantes          -> atlantis
bellow            -> billow
breton            -> briton
clench            -> clinch
convect           -> convict
crises            -> crisis
diagnoses         -> diagnosis
enfant            -> infant
enquiry           -> inquiry
frances           -> francis
galatea           -> galatia
harden            -> hardin
heckman           -> hickman
inequity          -> iniquity
inflect           -> inflict
jacobean          -> jacobian
marten            -> martin
module            -> moduli
pegging           -> pigging
psychoses         -> psychosis
rabbet            -> rabbit
sterling          -> stirling
synopses          -> synopsis
vector            -> victor
welles            -> willis

AppleScript

Core language only

Because of the huge size of the original word list and the number of changed words to check, it's nearly 100 times as fast to ensure the list is sorted and to use a binary search handler as it is to use the language's built-in is in command! (1.17 seconds instead of 110 on my current machine.) A further, lesser but interesting optimisation is to work through the sorted list in reverse, storing possible "i" word candidates encountered before getting to any "e" words from which they can be derived. Changed "e" words then only need to be checked against this smaller collection.

use AppleScript version "2.3.1" -- Mac OS X 10.9 (Mavericks) or later.
use sorter : script "Custom Iterative Ternary Merge Sort" -- <https://macscripter.net/viewtopic.php?pid=194430#p194430>
use scripting additions
 
on binarySearch(v, theList, l, r)
script o
property lst : theList
end script
 
repeat until (l = r)
set m to (l + r) div 2
if (item m of o's lst < v) then
set l to m + 1
else
set r to m
end if
end repeat
 
if (item l of o's lst is v) then return l
return 0
end binarySearch
 
on task(minWordLength)
set dictPath to (path to desktop as text) & "www.rosettacode.org:unixdict.txt"
script o
property wordList : paragraphs of (read file dictPath as «class utf8»)
property iWords : {}
property output : {}
end script
 
set wordCount to (count o's wordList)
tell sorter to sort(o's wordList, 1, wordCount, {}) -- Not actually needed with unixdict.txt.
 
set iWordCount to 0
set astid to AppleScript's text item delimiters
repeat with i from wordCount to 1 by -1
set thisWord to item i of o's wordList
if ((count thisWord) < minWordLength) then
else if ((thisWord contains "e") and (iWordCount > 0)) then
set AppleScript's text item delimiters to "e"
set tis to thisWord's text items
set AppleScript's text item delimiters to "i"
set changedWord to tis as text
if (binarySearch(changedWord, o's iWords, 1, iWordCount) > 0) then
set beginning of o's output to {thisWord, changedWord}
end if
else if (thisWord contains "i") then
set beginning of o's iWords to thisWord
set iWordCount to iWordCount + 1
end if
end repeat
set AppleScript's text item delimiters to astid
 
return o's output
end task
 
task(6)
Output:
{{"analyses", "analysis"}, {"atlantes", "atlantis"}, {"bellow", "billow"}, {"breton", "briton"}, {"clench", "clinch"}, {"convect", "convict"}, {"crises", "crisis"}, {"diagnoses", "diagnosis"}, {"enfant", "infant"}, {"enquiry", "inquiry"}, {"frances", "francis"}, {"galatea", "galatia"}, {"harden", "hardin"}, {"heckman", "hickman"}, {"inequity", "iniquity"}, {"inflect", "inflict"}, {"jacobean", "jacobian"}, {"marten", "martin"}, {"module", "moduli"}, {"pegging", "pigging"}, {"psychoses", "psychosis"}, {"rabbet", "rabbit"}, {"sterling", "stirling"}, {"synopses", "synopsis"}, {"vector", "victor"}, {"welles", "willis"}}

AppleScriptObjC

The Foundation framework has very fast array filters, but reducing the checklist size and checking with the same binary search handler as above are also effective. This version takes about 0.37 seconds. As above, the case-sensitivity and sorting arrangements are superfluous with unixdict.txt, but are included for interest. Same result.

use AppleScript version "2.4" -- OS X 10.10 (Yosemite) or later
use framework "Foundation"
use scripting additions
 
on binarySearch(v, theList, l, r)
script o
property lst : theList
end script
 
repeat until (l = r)
set m to (l + r) div 2
if (item m of o's lst < v) then
set l to m + 1
else
set r to m
end if
end repeat
 
if (item l of o's lst is v) then return l
return 0
end binarySearch
 
on task(minWordLength)
set |⌘| to current application
-- Read the unixdict.txt file.
set dictPath to (POSIX path of (path to desktop)) & "www.rosettacode.org/unixdict.txt"
set dictText to |⌘|'s class "NSString"'s stringWithContentsOfFile:(dictPath) ¬
usedEncoding:(missing value) |error|:(missing value)
-- Extract its words, which are known to be one per line.
set newlineSet to |⌘|'s class "NSCharacterSet"'s newlineCharacterSet()
set wordArray to dictText's componentsSeparatedByCharactersInSet:(newlineSet)
-- Case-insensitively extract any words containing "e" whose length is at least minWordLength.
set filter to |⌘|'s class "NSPredicate"'s ¬
predicateWithFormat:("(self MATCHES '.{" & minWordLength & ",}+') && (self CONTAINS[c] 'e')")
set eWords to wordArray's filteredArrayUsingPredicate:(filter)
-- Case-insensitively extract and sort any words containing "i" but not "e" whose length is at least minWordLength.
set filter to |⌘|'s class "NSPredicate"'s ¬
predicateWithFormat:("(self MATCHES '.{" & minWordLength & ",}+') && (self CONTAINS[c] 'i') && !(self CONTAINS[c] 'e')")
set iWords to (wordArray's filteredArrayUsingPredicate:(filter))'s sortedArrayUsingSelector:("localizedStandardCompare:")
-- Replace the "e"s in the "e" words with (lower-case) "i"s.
set changedWords to ((eWords's componentsJoinedByString:(linefeed))'s ¬
lowercaseString()'s stringByReplacingOccurrencesOfString:("e") withString:("i"))'s ¬
componentsSeparatedByCharactersInSet:(newlineSet)
 
-- Switch to vanilla to check the changed words.
script o
property eWordList : eWords as list
property iWordList : iWords as list
property changedWordList : changedWords as list
property output : {}
end script
-- Case-insensitively (by default), search the "i" word list for each word in the changed word list.
-- Where found, use the original-case version from the "i" word list.
set iWordCount to (count o's iWordList)
repeat with i from 1 to (count o's changedWordList)
set matchIndex to binarySearch(item i of o's changedWordList, o's iWordList, 1, iWordCount)
if (matchIndex > 0) then set end of o's output to {item i of o's eWordList, item matchIndex of o's iWordList}
end repeat
 
return o's output
end task
 
task(6)

AWK

 
# syntax: GAWK -f CHANGE_E_LETTERS_TO_I_IN_WORDS.AWK unixdict.txt
#
# sorting:
# PROCINFO["sorted_in"] is used by GAWK
# SORTTYPE is used by Thompson Automation's TAWK
#
{ if (length($0) < 6) {
    next
  }
  arr1[$0] = ""
}
END {
  PROCINFO["sorted_in"] = "@ind_str_asc" ; SORTTYPE = 1
  for (i in arr1) {
    word = i
    if (gsub(/e/,"i",word) > 0) {
      if (word in arr1) {
        arr2[i] = word
      }
    }
  }
  for (i in arr2) {
    printf("%-9s %s\n",i,arr2[i])
  }
  exit(0)
}
 
Output:
analyses  analysis
atlantes  atlantis
bellow    billow
breton    briton
clench    clinch
convect   convict
crises    crisis
diagnoses diagnosis
enfant    infant
enquiry   inquiry
frances   francis
galatea   galatia
harden    hardin
heckman   hickman
inequity  iniquity
inflect   inflict
jacobean  jacobian
marten    martin
module    moduli
pegging   pigging
psychoses psychosis
rabbet    rabbit
sterling  stirling
synopses  synopsis
vector    victor
welles    willis

C

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
 
#define MAX_WORD_SIZE 128
#define MIN_WORD_LENGTH 6
 
void fatal(const char* message) {
    fprintf(stderr, "%s\n", message);
    exit(1);
}

void* xmalloc(size_t n) {
    void* ptr = malloc(n);
    if (ptr == NULL)
        fatal("Out of memory");
    return ptr;
}

void* xrealloc(void* p, size_t n) {
    void* ptr = realloc(p, n);
    if (ptr == NULL)
        fatal("Out of memory");
    return ptr;
}

int string_compare(const void* p1, const void* p2) {
    const char* const* s1 = p1;
    const char* const* s2 = p2;
    return strcmp(*s1, *s2);
}

char* string_copy(const char* str) {
    size_t len = strlen(str);
    char* str2 = xmalloc(len + 1);
    memcpy(str2, str, len + 1);
    return str2;
}

char** load_dictionary(const char* filename, size_t* psize) {
    FILE* in = fopen(filename, "r");
    if (!in) {
        perror(filename);
        return NULL;
    }
    size_t size = 0, capacity = 1024;
    char** dictionary = xmalloc(sizeof(char*) * capacity);
    char line[MAX_WORD_SIZE];
    while (fgets(line, sizeof(line), in)) {
        size_t len = strlen(line);
        if (len > MIN_WORD_LENGTH) {
            line[len - 1] = '\0'; // discard newline
            char* str = string_copy(line);
            if (size == capacity) {
                capacity <<= 1;
                dictionary = xrealloc(dictionary, sizeof(char*) * capacity);
            }
            dictionary[size++] = str;
        }
    }
    fclose(in);
    qsort(dictionary, size, sizeof(char*), string_compare);
    *psize = size;
    return dictionary;
}

void free_dictionary(char** dictionary, size_t size) {
    for (size_t i = 0; i < size; ++i)
        free(dictionary[i]);
    free(dictionary);
}

bool find_word(char** dictionary, size_t size, const char* word) {
    return bsearch(&word, dictionary, size, sizeof(char*), string_compare) != NULL;
}

int main(int argc, char** argv) {
    const char* filename = argc < 2 ? "unixdict.txt" : argv[1];
    size_t size = 0;
    char** dictionary = load_dictionary(filename, &size);
    if (dictionary == NULL)
        return EXIT_FAILURE;
    int count = 0;
    for (size_t i = 0; i < size; ++i) {
        const char* word1 = dictionary[i];
        if (strchr(word1, 'e') != NULL) {
            char* word2 = string_copy(word1);
            for (char* p = word2; *p; ++p) {
                if (*p == 'e')
                    *p = 'i';
            }
            if (find_word(dictionary, size, word2))
                printf("%2d. %-10s -> %s\n", ++count, word1, word2);
            free(word2);
        }
    }
    free_dictionary(dictionary, size);
    return EXIT_SUCCESS;
}
Output:
 1. analyses   -> analysis
 2. atlantes   -> atlantis
 3. bellow     -> billow
 4. breton     -> briton
 5. clench     -> clinch
 6. convect    -> convict
 7. crises     -> crisis
 8. diagnoses  -> diagnosis
 9. enfant     -> infant
10. enquiry    -> inquiry
11. frances    -> francis
12. galatea    -> galatia
13. harden     -> hardin
14. heckman    -> hickman
15. inequity   -> iniquity
16. inflect    -> inflict
17. jacobean   -> jacobian
18. marten     -> martin
19. module     -> moduli
20. pegging    -> pigging
21. psychoses  -> psychosis
22. rabbet     -> rabbit
23. sterling   -> stirling
24. synopses   -> synopsis
25. vector     -> victor
26. welles     -> willis

C++

#include <algorithm>
#include <cstdlib>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <set>
#include <string>
 
int main(int argc, char** argv) {
    const char* filename(argc < 2 ? "unixdict.txt" : argv[1]);
    std::ifstream in(filename);
    if (!in) {
        std::cerr << "Cannot open file '" << filename << "'.\n";
        return EXIT_FAILURE;
    }
    const int min_length = 6;
    std::string word;
    std::set<std::string> dictionary;
    while (getline(in, word)) {
        if (word.size() >= min_length)
            dictionary.insert(word);
    }
    int count = 0;
    for (const std::string& word1 : dictionary) {
        if (word1.find('e') == std::string::npos)
            continue;
        std::string word2(word1);
        std::replace(word2.begin(), word2.end(), 'e', 'i');
        if (dictionary.find(word2) != dictionary.end()) {
            std::cout << std::right << std::setw(2) << ++count
                      << ". " << std::left << std::setw(10) << word1
                      << " -> " << word2 << '\n';
        }
    }

    return EXIT_SUCCESS;
}
Output:
 1. analyses   -> analysis
 2. atlantes   -> atlantis
 3. bellow     -> billow
 4. breton     -> briton
 5. clench     -> clinch
 6. convect    -> convict
 7. crises     -> crisis
 8. diagnoses  -> diagnosis
 9. enfant     -> infant
10. enquiry    -> inquiry
11. frances    -> francis
12. galatea    -> galatia
13. harden     -> hardin
14. heckman    -> hickman
15. inequity   -> iniquity
16. inflect    -> inflict
17. jacobean   -> jacobian
18. marten     -> martin
19. module     -> moduli
20. pegging    -> pigging
21. psychoses  -> psychosis
22. rabbet     -> rabbit
23. sterling   -> stirling
24. synopses   -> synopsis
25. vector     -> victor
26. welles     -> willis

Delphi

 
program Change_e_letters_to_i_in_words;
 
{$APPTYPE CONSOLE}
 
uses
System.SysUtils,
System.Classes;
 
var
Result: TStringList;
 
begin
  with TStringList.Create do
  begin
    LoadFromFile('unixdict.txt');
    for var i := Count - 1 downto 0 do
      if (Strings[i].Length < 6) then
        Delete(i);

    Result := TStringList.Create;

    for var i := Count - 1 downto 0 do
    begin
      var w_e := Strings[i];

      if w_e.IndexOf('e') = -1 then
        continue;

      var w_i := w_e.Replace('e', 'i', [rfReplaceAll]);
      if IndexOf(w_i) > -1 then
        Result.Add(format('%s ──► %s', [w_e.PadRight(12), w_i]));
    end;

    Result.Sort;
    writeln(Result.Text);
    Free;
  end;

  readln;
end.
Output:
analyses     ──► analysis
atlantes     ──► atlantis
bellow       ──► billow
breton       ──► briton
clench       ──► clinch
convect      ──► convict
crises       ──► crisis
diagnoses    ──► diagnosis
enfant       ──► infant
enquiry      ──► inquiry
frances      ──► francis
galatea      ──► galatia
harden       ──► hardin
heckman      ──► hickman
inequity     ──► iniquity
inflect      ──► inflict
jacobean     ──► jacobian
marten       ──► martin
module       ──► moduli
pegging      ──► pigging
psychoses    ──► psychosis
rabbet       ──► rabbit
sterling     ──► stirling
synopses     ──► synopsis
vector       ──► victor
welles       ──► willis

F#

 
// Change 'e' to 'i' in words. Nigel Galloway: February 18th., 2021
let g=[|use n=System.IO.File.OpenText("unixdict.txt") in while not n.EndOfStream do yield n.ReadLine()|]|>Array.filter(fun n->n.Length>5)
let fN g=(g,(Seq.map(fun n->if n='e' then 'i' else n)>>Array.ofSeq>>System.String)g)
g|>Array.filter(Seq.contains 'e')|>Array.map fN|>Array.filter(fun(_,n)-> Array.contains n g)|>Array.iter(fun(n,g)->printfn "%s ->  %s" n g)
 
Output:
analyses ->  analysis
atlantes ->  atlantis
bellow ->  billow
breton ->  briton
clench ->  clinch
convect ->  convict
crises ->  crisis
diagnoses ->  diagnosis
enfant ->  infant
enquiry ->  inquiry
frances ->  francis
galatea ->  galatia
harden ->  hardin
heckman ->  hickman
inequity ->  iniquity
inflect ->  inflict
jacobean ->  jacobian
marten ->  martin
module ->  moduli
pegging ->  pigging
psychoses ->  psychosis
rabbet ->  rabbit
sterling ->  stirling
synopses ->  synopsis
vector ->  victor
welles ->  willis

Factor

USING: assocs binary-search formatting io.encodings.ascii
io.files kernel literals math sequences splitting ;
 
CONSTANT: words $[ "unixdict.txt" ascii file-lines ]
 
words
[ length 5 > ] filter
[ CHAR: e swap member? ] filter
[ dup "e" "i" replace ] map>alist
[ nip words sorted-member? ] assoc-filter  ! binary search
[ "%-9s -> %s\n" printf ] assoc-each
Output:
analyses  -> analysis
atlantes  -> atlantis
bellow    -> billow
breton    -> briton
clench    -> clinch
convect   -> convict
crises    -> crisis
diagnoses -> diagnosis
enfant    -> infant
enquiry   -> inquiry
frances   -> francis
galatea   -> galatia
harden    -> hardin
heckman   -> hickman
inequity  -> iniquity
inflect   -> inflict
jacobean  -> jacobian
marten    -> martin
module    -> moduli
pegging   -> pigging
psychoses -> psychosis
rabbet    -> rabbit
sterling  -> stirling
synopses  -> synopsis
vector    -> victor
welles    -> willis

FreeBASIC

#define NULL 0
 
type node
    word as string*32 'enough space to store any word in the dictionary
    nxt as node ptr
end type

function addword( tail as node ptr, word as string ) as node ptr
    'allocates memory for a new node, links the previous tail to it,
    'and returns the address of the new node
    dim as node ptr newnode = allocate(sizeof(node))
    tail->nxt = newnode
    newnode->nxt = NULL
    newnode->word = word
    return newnode
end function

function length( word as string ) as uinteger
    'necessary replacement for the built-in len function, which in this
    'case would always return 32
    for i as uinteger = 1 to 32
        if asc(mid(word,i,1)) = 0 then return i-1
    next i
    return 999
end function

dim as string word
dim as node ptr tail = allocate( sizeof(node) )
dim as node ptr head = tail, curr = head, currj
tail->nxt = NULL
tail->word = "XXXXHEADER"

open "unixdict.txt" for input as #1
while true
    line input #1, word
    if word = "" then exit while
    if length(word)>5 then tail = addword( tail, word )
wend
close #1

dim as string tempword
dim as boolean changed

while curr->nxt <> NULL
    changed = false
    tempword = curr->word
    for i as uinteger = 1 to length(tempword)
        if mid(tempword,i,1) = "e" then
            tempword = left(tempword,i-1) + "i" + mid(tempword, i+1, length(tempword)-i)
            changed = true
        end if
    next i
    if changed = true then
        currj = head
        while currj->nxt <> NULL
            if currj->word = tempword then print curr->word, tempword
            currj=currj->nxt
        wend
    end if
    curr = curr->nxt
wend
Output:

analyses      analysis
atlantes      atlantis
bellow        billow
breton        briton
clench        clinch
convect       convict
crises        crisis
diagnoses     diagnosis
enfant        infant
enquiry       inquiry
frances       francis
galatea       galatia
harden        hardin
heckman       hickman
inequity      iniquity
inflect       inflict
jacobean      jacobian
marten        martin
module        moduli
pegging       pigging
psychoses     psychosis
rabbet        rabbit
sterling      stirling
synopses      synopsis
vector        victor
welles        willis

Go

package main
 
import (
    "bytes"
    "fmt"
    "io/ioutil"
    "log"
    "sort"
    "strings"
    "unicode/utf8"
)

func main() {
    wordList := "unixdict.txt"
    b, err := ioutil.ReadFile(wordList)
    if err != nil {
        log.Fatal("Error reading file")
    }
    bwords := bytes.Fields(b)
    var words []string
    for _, bword := range bwords {
        s := string(bword)
        if utf8.RuneCountInString(s) > 5 {
            words = append(words, s)
        }
    }
    count := 0
    le := len(words)
    for _, word := range words {
        if strings.ContainsRune(word, 'e') {
            repl := strings.ReplaceAll(word, "e", "i")
            ix := sort.SearchStrings(words, repl) // binary search
            if ix < le && words[ix] == repl {
                count++
                fmt.Printf("%2d: %-9s -> %s\n", count, word, repl)
            }
        }
    }
}
Output:
 1: analyses  -> analysis
 2: atlantes  -> atlantis
 3: bellow    -> billow
 4: breton    -> briton
 5: clench    -> clinch
 6: convect   -> convict
 7: crises    -> crisis
 8: diagnoses -> diagnosis
 9: enfant    -> infant
10: enquiry   -> inquiry
11: frances   -> francis
12: galatea   -> galatia
13: harden    -> hardin
14: heckman   -> hickman
15: inequity  -> iniquity
16: inflect   -> inflict
17: jacobean  -> jacobian
18: marten    -> martin
19: module    -> moduli
20: pegging   -> pigging
21: psychoses -> psychosis
22: rabbet    -> rabbit
23: sterling  -> stirling
24: synopses  -> synopsis
25: vector    -> victor
26: welles    -> willis

Julia

See Alternade_words for the foreachword function.

e2i(w, d) = (if 'e' in w   s = replace(w, "e" => "i"); haskey(d, s) && return "$w => $s" end; "")
foreachword("unixdict.txt", e2i, minlen=6, colwidth=23, numcols=4)
 
Output:
Word source: unixdict.txt

analyses => analysis   atlantes => atlantis   bellow => billow       breton => briton
clench => clinch       convect => convict     crises => crisis       diagnoses => diagnosis
enfant => infant       enquiry => inquiry     frances => francis     galatea => galatia
harden => hardin       heckman => hickman     inequity => iniquity   inflect => inflict
jacobean => jacobian   marten => martin       module => moduli       pegging => pigging     
psychoses => psychosis rabbet => rabbit       sterling => stirling   synopses => synopsis
vector => victor       welles => willis

Nim

import sets, strutils, sugar
 
# Build a set of words to speed up membership check.
let wordSet = collect(initHashSet, for word in "unixdict.txt".lines: {word})
 
for word in "unixdict.txt".lines:
  let newWord = word.replace('e', 'i')
  if newWord.len > 5 and newWord != word and newWord in wordSet:
    echo word, " → ", newWord
Output:
analyses → analysis
atlantes → atlantis
bellow → billow
breton → briton
clench → clinch
convect → convict
crises → crisis
diagnoses → diagnosis
enfant → infant
enquiry → inquiry
frances → francis
galatea → galatia
harden → hardin
heckman → hickman
inequity → iniquity
inflect → inflict
jacobean → jacobian
marten → martin
module → moduli
pegging → pigging
psychoses → psychosis
rabbet → rabbit
sterling → stirling
synopses → synopsis
vector → victor
welles → willis

Perl

#!/usr/bin/perl
 
use strict; # https://rosettacode.org/wiki/Change_e_letters_to_i_in_words
use warnings;
no warnings 'uninitialized';
 
my $file = do { local (@ARGV, $/) = 'unixdict.txt'; <> };
my %i = map { tr/i/e/r => sprintf "%30s  %s\n", tr/i/e/r, $_ }
grep !/e/, grep 5 <= length, $file =~ /^.*i.*$/gm;
print @i{ split ' ', $file };
Output:
                      analyses  analysis
                      atlantes  atlantis
                         basel  basil
                        bellow  billow
                         belly  billy
                         berth  birth
                         blend  blind
                         bless  bliss
                        breton  briton
                         check  chick
                        clench  clinch
                       convect  convict
                         cress  criss
                        enfant  infant
                         faery  fairy
                         fetch  fitch
                         fleck  flick
                       frances  francis
                       galatea  galatia
                        harden  hardin
                       heckman  hickman
                      jacobean  jacobian
                        marten  martin
                         messy  missy
                        module  moduli
                         oases  oasis
                         peggy  piggy
                     psychoses  psychosis
                         quell  quill
                        rabbet  rabbit
                         ruben  rubin
                         share  shari
                         shell  shill
                         spell  spill
                         style  styli
                      synopses  synopsis
                         taper  tapir
                         tread  triad
                        vector  victor
                         vella  villa
                        welles  willis
                         wells  wills
                         wendy  windy
                         wrest  wrist

Phix

sequence words = get_text("demo/unixdict.txt",GT_LF_STRIPPED)
function chei(string word) return substitute(word,"e","i") end function
function cheti(string word) return length(word)>5 and find('e',word) and find(chei(word),words) end function
sequence chetie = filter(words,cheti), chetei = columnize({chetie,apply(chetie,chei)})
printf(1,"%d words: %v\n",{length(chetei),shorten(chetei,"",2)})
Output:
26 words: {{"analyses","analysis"},{"atlantes","atlantis"},"...",{"vector","victor"},{"welles","willis"}}

Prolog

Works with: SWI Prolog
:- dynamic dictionary_word/1.
 
main:-
    load_dictionary_from_file("unixdict.txt", 6),
    forall((dictionary_word(Word1),
            string_chars(Word1, Chars1),
            memberchk('e', Chars1),
            replace('e', 'i', Chars1, Chars2),
            string_chars(Word2, Chars2),
            dictionary_word(Word2)),
           writef('%10l -> %w\n', [Word1, Word2])).

load_dictionary_from_file(File, Min_length):-
    open(File, read, Stream),
    retractall(dictionary_word(_)),
    load_dictionary_from_stream(Stream, Min_length),
    close(Stream).

load_dictionary_from_stream(Stream, Min_length):-
    read_line_to_string(Stream, String),
    String \= end_of_file,
    !,
    string_length(String, Length),
    (Length >= Min_length -> assertz(dictionary_word(String)) ; true),
    load_dictionary_from_stream(Stream, Min_length).
load_dictionary_from_stream(_, _).

replace(_, _, [], []):-!.
replace(Ch1, Ch2, [Ch1|Chars1], [Ch2|Chars2]):-
    !,
    replace(Ch1, Ch2, Chars1, Chars2).
replace(Ch1, Ch2, [Ch|Chars1], [Ch|Chars2]):-
    replace(Ch1, Ch2, Chars1, Chars2).
Output:
analyses   -> analysis
atlantes   -> atlantis
bellow     -> billow
breton     -> briton
clench     -> clinch
convect    -> convict
crises     -> crisis
diagnoses  -> diagnosis
enfant     -> infant
enquiry    -> inquiry
frances    -> francis
galatea    -> galatia
harden     -> hardin
heckman    -> hickman
inequity   -> iniquity
inflect    -> inflict
jacobean   -> jacobian
marten     -> martin
module     -> moduli
pegging    -> pigging
psychoses  -> psychosis
rabbet     -> rabbit
sterling   -> stirling
synopses   -> synopsis
vector     -> victor
welles     -> willis


Python

unixdict = ['analyses','atlantes','bellow','breton','clench','convect','crises','diagnoses','enfant','enquiry','frances','galatea','harden','heckman','inequity','inflect','jacobean','marten','module','pegging','psychoses','rabbet','sterling','synopses','vector','welles']
 
strOfUD = str(unixdict)
replacestr = strOfUD.replace("e","i")
print(replacestr)
Output:
['analysis', 'atlantis', 'billow', 'briton', 'clinch', 'convict', 'crisis', 'diagnosis', 'infant', 'inquiry', 'francis', 'galatia', 'hardin', 'hickman', 'iniquity', 'inflict', 'jacobian', 'martin', 'moduli', 'pigging', 'psychosis', 'rabbit', 'stirling', 'synopsis', 'victor', 'willis']

Python Using Lists

unixdict = ['analyses','atlantes','bellow','breton','clench','convect','crises','diagnoses','enfant','enquiry','frances','galatea','harden','heckman','inequity','inflect','jacobean','marten','module','pegging','psychoses','rabbet','sterling','synopses','vector','welles']
 
newdict = []
 
for xx in range(0, len(unixdict)):
    strOfUD = unixdict[xx]
    replacestr = strOfUD.replace("e", "i")
    newdict.append(strOfUD + " ==> " + replacestr)
finaloutput = '\n'.join(newdict)
print(finaloutput)
Output:
analyses ==> analysis
atlantes ==> atlantis
bellow ==> billow
breton ==> briton
clench ==> clinch
convect ==> convict
crises ==> crisis
diagnoses ==> diagnosis
enfant ==> infant
enquiry ==> inquiry
frances ==> francis
galatea ==> galatia
harden ==> hardin
heckman ==> hickman
inequity ==> iniquity
inflect ==> inflict
jacobean ==> jacobian
marten ==> martin
module ==> moduli
pegging ==> pigging
psychoses ==> psychosis
rabbet ==> rabbit
sterling ==> stirling
synopses ==> synopsis
vector ==> victor
welles ==> willis
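
Both Python entries above operate on a hard-coded list of the 26 answer words rather than reading the dictionary. For comparison, a minimal sketch of a dictionary-driven version (assuming a local copy of unixdict.txt, as the other entries use) might look like this:

# Minimal sketch, assuming unixdict.txt is in the current directory.
with open("unixdict.txt") as f:
    words = [line.strip() for line in f]

dictionary = set(words)          # set gives fast membership checks
for word in words:
    if len(word) > 5 and "e" in word:
        changed = word.replace("e", "i")
        if changed in dictionary:
            print(word, "==>", changed)

Run against the actual dictionary, this should produce the same 26 pairs listed by the other entries.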

Quackery

  [ [] swap ]'[ swap
witheach [
dup nested
unrot over do
iff [ dip join ]
else nip
] drop ] is filter ( [ --> [ )
 
[ [] swap
witheach
[ [] swap
witheach
[ dup char e = if
[ drop char i ]
join ]
nested join ] ] is e->i ( [ --> [ )
 
$ "rosetta/unixdict.txt" sharefile drop nest$
filter [ size 5 > ]
dup
filter [ char e over find swap found ]
e->i
witheach
[ tuck over find
over found iff
[ swap echo$ cr ]
else nip ]
drop
Output:
analysis
atlantis
billow
briton
clinch
convict
crisis
diagnosis
infant
inquiry
francis
galatia
hardin
hickman
iniquity
inflict
jacobian
martin
moduli
pigging
psychosis
rabbit
stirling
synopsis
victor
willis

Raku

my %ei = 'unixdict.txt'.IO.words.grep({ .chars > 5 and /<[ie]>/ }).map: { $_ => .subst('e', 'i', :g) };
put %ei.grep( *.key.contains: 'e' ).grep({ %ei{.value}:exists }).sort.batch(4)».gist».fmt('%-22s').join: "\n";
Output:
analyses => analysis   atlantes => atlantis   bellow => billow       breton => briton      
clench => clinch       convect => convict     crises => crisis       diagnoses => diagnosis
enfant => infant       enquiry => inquiry     frances => francis     galatea => galatia    
harden => hardin       heckman => hickman     inequity => iniquity   inflect => inflict    
jacobean => jacobian   marten => martin       module => moduli       pegging => pigging    
psychoses => psychosis rabbet => rabbit       sterling => stirling   synopses => synopsis  
vector => victor       welles => willis

REXX

This REXX version doesn't care what order the words in the dictionary are in, nor what case (lower/upper/mixed) they are in; the search for words is caseless.

It also allows the minimum length, the old character (that is to be changed), the new character (that is to be changed into), and the dictionary file identifier to be specified on the command line (CL).

/*REXX pgm finds words with changed letter  E──►I  and is a word  (in a specified dict).*/
parse arg minL oldC newC iFID . /*obtain optional arguments from the CL*/
if minL=='' | minL=="," then minL= 6 /*Not specified? Then use the default.*/
if oldC=='' | oldC=="," then oldC= 'e' /* " " " " " " */
if newC=='' | newC=="," then newC= 'i' /* " " " " " " */
if iFID=='' | iFID=="," then iFID='unixdict.txt' /* " " " " " " */
upper oldC newC /*get uppercase versions of OLDC & NEWC*/
@.= /*default value of any dictionary word.*/
do #=1 while lines(iFID)\==0 /*read each word in the file (word=X).*/
x= strip( linein( iFID) ) /*pick off a word from the input line. */
$.#= x; upper x; @.x= $.# /*save: original case and the old word.*/
end /*#*/ /*Note: the old word case is left as─is*/
#= # - 1 /*adjust word count because of DO loop.*/
finds= 0 /*count of changed words found (so far)*/
say copies('─', 30) # "words in the dictionary file: " iFID
say
do j=1 for #; L= length($.j) /*process all the words that were found*/
if L<minL then iterate /*Is word too short? Then ignore it. */
y = $.j; upper y /*uppercase the dictionary word. */
if pos(oldC, y)==0 then iterate /*Have the required character? No, skip*/
new= translate(y, newC, oldC) /*obtain a changed (translated) word. */
if @.new=='' then iterate /*New word in the dict.? No, skip it.*/
finds= finds + 1 /*bump the count of found changed words*/
say right(left($.j, 20), 40) '──►' @.new /*indent a bit, display the old & new. */
end /*j*/
say /*stick a fork in it, we're all done. */
say copies('─',30) finds " words found that were changed with " oldC '──►' ,
newC", and with a minimum length of " minL
Output when using the default inputs:
────────────────────────────── 25104 words in the dictionary file:  unixdict.txt

                    analyses             ──► analysis
                    atlantes             ──► atlantis
                    bellow               ──► billow
                    breton               ──► briton
                    clench               ──► clinch
                    convect              ──► convict
                    crises               ──► crisis
                    diagnoses            ──► diagnosis
                    enfant               ──► infant
                    enquiry              ──► inquiry
                    frances              ──► francis
                    galatea              ──► galatia
                    harden               ──► hardin
                    heckman              ──► hickman
                    inequity             ──► iniquity
                    inflect              ──► inflict
                    jacobean             ──► jacobian
                    marten               ──► martin
                    module               ──► moduli
                    pegging              ──► pigging
                    psychoses            ──► psychosis
                    rabbet               ──► rabbit
                    sterling             ──► stirling
                    synopses             ──► synopsis
                    vector               ──► victor
                    welles               ──► willis

────────────────────────────── 26  words found that were changed with  E ──► I,  and with a minimum length of  6

Ring

 
load "stdlib.ring"
 
cStr = read("unixdict.txt")
wordList = str2list(cStr)
num = 0
 
see "working..." + nl
see "Words are:" + nl
 
ln = len(wordList)
for n = ln to 1 step -1
    if len(wordList[n]) < 6
       del(wordList,n)
    ok
next

for n = 1 to len(wordList)
    ind = substr(wordList[n],"e")
    if ind > 0
       str = substr(wordList[n],"e","i")
       indstr = find(wordList,str)
       if indstr > 0
          num = num + 1
          see "" + num + ". " + wordList[n] + " => " + str + nl
       ok
    ok
next
 
see "done..." + nl
 
Output:
working...
Words are:
1. analyses => analysis
2. atlantes => atlantis
3. bellow => billow
4. breton => briton
5. clench => clinch
6. convect => convict
7. crises => crisis
8. diagnoses => diagnosis
9. enfant => infant
10. enquiry => inquiry
11. frances => francis
12. galatea => galatia
13. harden => hardin
14. heckman => hickman
15. inequity => iniquity
16. inflect => inflict
17. jacobean => jacobian
18. marten => martin
19. module => moduli
20. pegging => pigging
21. psychoses => psychosis
22. rabbet => rabbit
23. sterling => stirling
24. synopses => synopsis
25. vector => victor
26. welles => willis
done...

Rust

use std::collections::BTreeSet;
use std::fs::File;
use std::io::{self, BufRead};
 
fn load_dictionary(filename: &str, min_length: usize) -> std::io::Result<BTreeSet<String>> {
    let file = File::open(filename)?;
    let mut dict = BTreeSet::new();
    for line in io::BufReader::new(file).lines() {
        let word = line?;
        if word.len() >= min_length {
            dict.insert(word);
        }
    }
    Ok(dict)
}

fn main() {
    match load_dictionary("unixdict.txt", 6) {
        Ok(dictionary) => {
            let mut count = 0;
            for word in dictionary.iter().filter(|x| x.contains("e")) {
                let word2 = word.replace("e", "i");
                if dictionary.contains(&word2) {
                    count += 1;
                    println!("{:2}. {:<9} -> {}", count, word, word2);
                }
            }
        }
        Err(error) => eprintln!("{}", error),
    }
}
Output:
 1. analyses  -> analysis
 2. atlantes  -> atlantis
 3. bellow    -> billow
 4. breton    -> briton
 5. clench    -> clinch
 6. convect   -> convict
 7. crises    -> crisis
 8. diagnoses -> diagnosis
 9. enfant    -> infant
10. enquiry   -> inquiry
11. frances   -> francis
12. galatea   -> galatia
13. harden    -> hardin
14. heckman   -> hickman
15. inequity  -> iniquity
16. inflect   -> inflict
17. jacobean  -> jacobian
18. marten    -> martin
19. module    -> moduli
20. pegging   -> pigging
21. psychoses -> psychosis
22. rabbet    -> rabbit
23. sterling  -> stirling
24. synopses  -> synopsis
25. vector    -> victor
26. welles    -> willis

Sidef

var file = File("unixdict.txt")
 
if (!file.exists) {
    require('LWP::Simple')
    say ":: Retrieving #{file} from internet..."
    %S<LWP::Simple>.mirror(
        'https://web.archive.org/web/20180611003215if_/' +
        'http://www.puzzlers.org:80/pub/wordlists/unixdict.txt',
        'unixdict.txt')
}

var words = file.read.words
var dict = Hash().set_keys(words...)
var count = 0

words.each {|word|

    word.len > 5 || next
    word.contains('e') || next

    var changed = word.gsub('e', 'i')

    if (dict.contains(changed)) {
        printf("%2d: %20s <-> %s\n", ++count, word, changed)
    }
}
Output:
 1:             analyses <-> analysis
 2:             atlantes <-> atlantis
 3:               bellow <-> billow
 4:               breton <-> briton
 5:               clench <-> clinch
 6:              convect <-> convict
 7:               crises <-> crisis
 8:            diagnoses <-> diagnosis
 9:               enfant <-> infant
10:              enquiry <-> inquiry
11:              frances <-> francis
12:              galatea <-> galatia
13:               harden <-> hardin
14:              heckman <-> hickman
15:             inequity <-> iniquity
16:              inflect <-> inflict
17:             jacobean <-> jacobian
18:               marten <-> martin
19:               module <-> moduli
20:              pegging <-> pigging
21:            psychoses <-> psychosis
22:               rabbet <-> rabbit
23:             sterling <-> stirling
24:             synopses <-> synopsis
25:               vector <-> victor
26:               welles <-> willis

Swift

import Foundation
 
func loadDictionary(path: String, minLength: Int) throws -> Set<String> {
    let contents = try String(contentsOfFile: path, encoding: String.Encoding.ascii)
    return Set<String>(contents.components(separatedBy: "\n").filter{$0.count >= minLength})
}

func pad(string: String, width: Int) -> String {
    return string.count >= width ? string
        : string + String(repeating: " ", count: width - string.count)
}

do {
    let dictionary = try loadDictionary(path: "unixdict.txt", minLength: 6)
    var words: [(String,String)] = []
    for word1 in dictionary {
        let word2 = word1.replacingOccurrences(of: "e", with: "i")
        if word1 != word2 && dictionary.contains(word2) {
            words.append((word1, word2))
        }
    }
    words.sort(by: {$0 < $1})
    for (n, (word1, word2)) in words.enumerated() {
        print(String(format: "%2d. %@ -> %@", n + 1, pad(string: word1, width: 10), word2))
    }
} catch {
    print(error.localizedDescription)
}
Output:
 1. analyses   -> analysis
 2. atlantes   -> atlantis
 3. bellow     -> billow
 4. breton     -> briton
 5. clench     -> clinch
 6. convect    -> convict
 7. crises     -> crisis
 8. diagnoses  -> diagnosis
 9. enfant     -> infant
10. enquiry    -> inquiry
11. frances    -> francis
12. galatea    -> galatia
13. harden     -> hardin
14. heckman    -> hickman
15. inequity   -> iniquity
16. inflect    -> inflict
17. jacobean   -> jacobian
18. marten     -> martin
19. module     -> moduli
20. pegging    -> pigging
21. psychoses  -> psychosis
22. rabbet     -> rabbit
23. sterling   -> stirling
24. synopses   -> synopsis
25. vector     -> victor
26. welles     -> willis

Wren

Library: Wren-sort
Library: Wren-fmt
import "io" for File
import "/sort" for Find
import "/fmt" for Fmt
 
var wordList = "unixdict.txt" // local copy
var count = 0
var words = File.read(wordList).trimEnd().split("\n").
    where { |w| w.count > 5 }.toList
for (word in words) {
    if (word.contains("e")) {
        var repl = word.replace("e", "i")
        if (Find.first(words, repl) >= 0) { // binary search
            count = count + 1
            Fmt.print("$2d: $-9s -> $s", count, word, repl)
        }
    }
}
Output:
 1: analyses  -> analysis
 2: atlantes  -> atlantis
 3: bellow    -> billow
 4: breton    -> briton
 5: clench    -> clinch
 6: convect   -> convict
 7: crises    -> crisis
 8: diagnoses -> diagnosis
 9: enfant    -> infant
10: enquiry   -> inquiry
11: frances   -> francis
12: galatea   -> galatia
13: harden    -> hardin
14: heckman   -> hickman
15: inequity  -> iniquity
16: inflect   -> inflict
17: jacobean  -> jacobian
18: marten    -> martin
19: module    -> moduli
20: pegging   -> pigging
21: psychoses -> psychosis
22: rabbet    -> rabbit
23: sterling  -> stirling
24: synopses  -> synopsis
25: vector    -> victor
26: welles    -> willis