Remove duplicate elements
Revision as of 18:43, 6 October 2009
You are encouraged to solve this task according to the task description, using any language you may know.
Given an Array, derive a sequence of elements in which all duplicates are removed.
There are basically three approaches seen here:
- Put the elements into a hash table which does not allow duplicates. The complexity is O(n) on average, and O(n^2) worst case. This approach requires a hash function for your type (which is compatible with equality), either built-in to your language, or provided by the user.
- Sort the elements and remove consecutive duplicate elements. The complexity of the best sorting algorithms is O(n log n). This approach requires that your type be "comparable", i.e. have an ordering. Putting the elements into a self-balancing binary search tree is a special case of sorting.
- Go through the list, and for each element, check the rest of the list to see if it appears again, and discard it if it does. The complexity is O(n^2). The upshot is that this always works on any type (provided that you can test for equality).
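The three approaches can be sketched side by side; the Python below is purely illustrative (the data values are arbitrary), and each language entry that follows realizes one of these strategies:

```python
def dedup_hash(seq):
    # Approach 1: hash table -- O(n) on average; needs hashable elements.
    seen = set()
    result = []
    for x in seq:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result

def dedup_sort(seq):
    # Approach 2: sort, then drop consecutive duplicates -- O(n log n);
    # needs an ordering, and the result comes back sorted.
    s = sorted(seq)
    return [x for i, x in enumerate(s) if i == 0 or s[i - 1] != x]

def dedup_scan(seq):
    # Approach 3: keep each element only if it has not been kept already --
    # O(n^2); needs nothing but an equality test.
    result = []
    for x in seq:
        if x not in result:
            result.append(x)
    return result
```

Note that only the first and third sketches preserve the original order of first occurrence; the second returns the unique elements sorted.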
Ada
<lang ada>with Ada.Containers.Ordered_Sets;
with Ada.Text_IO; use Ada.Text_IO;

procedure Unique_Set is
   package Int_Sets is new Ada.Containers.Ordered_Sets(Integer);
   use Int_Sets;
   Nums : array (Natural range <>) of Integer := (1,2,3,4,5,5,6,7,1);
   Unique : Set;
   Set_Cur : Cursor;
   Success : Boolean;
begin
   for I in Nums'range loop
      Unique.Insert(Nums(I), Set_Cur, Success);
   end loop;
   Set_Cur := Unique.First;
   loop
      Put_Line(Item => Integer'Image(Element(Set_Cur)));
      exit when Set_Cur = Unique.Last;
      Set_Cur := Next(Set_Cur);
   end loop;
end Unique_Set;</lang>
APL
The primitive monad ∪ means "unique", so:
      ∪ 1 2 3 1 2 3 4 1
1 2 3 4
      w←1 2 3 1 2 3 4 1
      ((⍳⍨w)=⍳⍴w)/w
1 2 3 4
AppleScript
<lang applescript>set array to {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"}
set unique to {}
repeat with i in array
    -- very important -- list index starts at 1 not 0
    if (i is not in unique) then
        set unique to unique & i
    end if
end repeat</lang>
AutoHotkey
Built-in Sort has an option to remove duplicates.
<lang AutoHotkey>a = 1,2,1,4,5,2,15,1,3,4
Sort, a, NUD`,
MsgBox % a  ; 1,2,3,4,5,15</lang>
AWK
We produce an array a with duplicates from a string; then index a second array b with the contents of a, so that duplicates make only one entry; then produce a string with the keys of b, which is finally output.
$ awk 'BEGIN{split("a b c d c b a",a);for(i in a)b[a[i]]=1;r="";for(i in b)r=r" "i;print r}'
 a b c d
C
Since there's no way to know ahead of time how large the new data structure will need to be, we'll return a linked list instead of an array.
<lang c>#include <stdio.h>
#include <stdlib.h>

struct list_node {int x; struct list_node *next;};
typedef struct list_node node;

node * uniq(int *a, unsigned alen)
{if (alen == 0) return NULL;
 node *start = malloc(sizeof(node));
 if (start == NULL) exit(EXIT_FAILURE);
 start->x = a[0];
 start->next = NULL;

 for (int i = 1 ; i < alen ; ++i)
    {node *n = start;
     for (;; n = n->next)
        {if (a[i] == n->x) break;
         if (n->next == NULL)
            {n->next = malloc(sizeof(node));
             n = n->next;
             if (n == NULL) exit(EXIT_FAILURE);
             n->x = a[i];
             n->next = NULL;
             break;}}}

 return start;}

int main(void)
{int a[] = {1, 2, 1, 4, 5, 2, 15, 1, 3, 4};
 for (node *n = uniq(a, 10) ; n != NULL ; n = n->next)
    printf("%d ", n->x);
 puts("");
 return 0;}</lang>
C++
This version uses std::set, which requires its element type be comparable using the < operator.
<lang cpp>#include <set>
#include <iostream>
using namespace std;

int main() {
    typedef set<int> TySet;
    int data[] = {1, 2, 3, 2, 3, 4};

    TySet unique_set(data, data + 6);

    cout << "Set items:" << endl;
    for (TySet::iterator iter = unique_set.begin(); iter != unique_set.end(); iter++)
        cout << *iter << " ";
    cout << endl;
}</lang>
This version uses hash_set, which is part of the SGI extension to the Standard Template Library. It is not part of the C++ standard library. It requires that its element type have a hash function.
<lang cpp>#include <ext/hash_set>
#include <iostream>
using namespace std;

int main() {
    typedef __gnu_cxx::hash_set<int> TyHash;
    int data[] = {1, 2, 3, 2, 3, 4};

    TyHash unique_set(data, data + 6);

    cout << "Set items:" << endl;
    for (TyHash::iterator iter = unique_set.begin(); iter != unique_set.end(); iter++)
        cout << *iter << " ";
    cout << endl;
}</lang>
This version uses unordered_set, which is part of the TR1, which is likely to be included in the next version of C++. It is not part of the C++ standard library. It requires that its element type have a hash function.
<lang cpp>#include <tr1/unordered_set>
#include <iostream>
using namespace std;

int main() {
    typedef tr1::unordered_set<int> TyHash;
    int data[] = {1, 2, 3, 2, 3, 4};

    TyHash unique_set(data, data + 6);

    cout << "Set items:" << endl;
    for (TyHash::iterator iter = unique_set.begin(); iter != unique_set.end(); iter++)
        cout << *iter << " ";
    cout << endl;
}</lang>
Alternative method working directly on the array:
<lang cpp>#include <iostream>
#include <iterator>
#include <algorithm>
#include <cstddef>

// helper template giving a pointer one past the end of a built-in array
template<typename T, std::size_t size> T* end(T (&array)[size]) { return array+size; }

int main() {
    int data[] = { 1, 2, 3, 2, 3, 4 };
    std::sort(data, end(data));
    int* new_end = std::unique(data, end(data));
    std::copy(data, new_end, std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;
}</lang>
C#
C# 2.0
<lang csharp>int[] nums = { 1, 1, 2, 3, 4, 4 };
List<int> unique = new List<int>();
for (int i = 0; i < nums.Length; i++)
    if (!unique.Contains(nums[i]))
        unique.Add(nums[i]);</lang>
C# 3.0
<lang csharp>int[] nums = {1, 1, 2, 3, 4, 4};
int[] unique = nums.Distinct().ToArray();</lang>
Common Lisp
To remove duplicates non-destructively:
<lang lisp>(remove-duplicates '(1 3 2 9 1 2 3 8 8 1 0 2))
> (9 3 8 1 0 2)</lang>
Or, to remove duplicates in-place:
<lang lisp>(delete-duplicates '(1 3 2 9 1 2 3 8 8 1 0 2))
> (9 3 8 1 0 2)</lang>
D
<lang d>void main() {
    int[] data = [1, 2, 3, 2, 3, 4];
    int[int] hash;
    foreach (el; data)
        hash[el] = 0;
    int[] unique = hash.keys;  // the de-duplicated elements
}</lang>
E
[1,2,3,2,3,4].asSet().getElements()
Erlang
List = [1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5].
Set = sets:from_list(List).
Factor
USING: sets ; V{ 1 2 1 3 2 4 5 } prune .
V{ 1 2 3 4 5 }
Forth
Forth has no built-in hashtable facility, so the easiest way to achieve this goal is to take the "uniq" program as an example.
The word uniq, if given a sorted array of cells, will remove the duplicate entries and return the new length of the array. For simplicity, uniq has been written to process cells (which are to Forth what "int" is to C), but could easily be modified to handle a variety of data types through deferred procedures, etc.
The input data is assumed to be sorted.
\ Increments a2 until it no longer points to the same value as a1
\ a3 is the address beyond the data a2 is traversing.
: skip-dups ( a1 a2 a3 -- a1 a2+n )
    dup rot ?do
        over @ i @ <> if drop i leave then
    cell +loop ;

\ Compress an array of cells by removing adjacent duplicates
\ Returns the new count
: uniq ( a n -- n2 )
    over >r               \ Original addr to return stack
    cells over + >r       \ "to" addr now on return stack, available as r@
    dup
    begin ( write read ) dup r@ < while
        2dup @ swap !     \ copy one cell
        cell+ r@ skip-dups
        cell 0 d+         \ increment write ptr only
    repeat
    r> 2drop r> - cell / ;
Here is another implementation of "uniq" that uses a popular parameters and local variables extension words. It is structurally the same as the above implementation, but uses less overt stack manipulation.
: uniqv { a n \ r e -- n }
    a n cells+ to e
    a dup to r            \ the write address lives on the stack
    begin r e < while
        r @ over !
        r cell+ e skip-dups to r
        cell+
    repeat
    a - cell / ;
To test this code, you can execute:
create test 1 , 2 , 3 , 2 , 6 , 4 , 5 , 3 , 6 ,
here test - cell / constant ntest
: .test ( n -- ) 0 ?do test i cells + ? loop ;
test ntest 2dup cell-sort uniq .test
output
1 2 3 4 5 6 ok
Groovy
<lang groovy>def list = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
assert list.size() == 12
println "   Original List: ${list}"

// Filtering the List
list.unique()
assert list.size() == 8
println "   Filtered List: ${list}"

list = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
assert list.size() == 12

// Converting to Set
def set = new HashSet(list)
assert set.size() == 8
println "             Set: ${set}"

// Converting to Order-preserving Set
set = new LinkedHashSet(list)
assert set.size() == 8
println "List-ordered Set: ${set}"</lang>
Output:
   Original List: [1, 2, 3, a, b, c, 2, 3, 4, b, c, d]
   Filtered List: [1, 2, 3, a, b, c, 4, d]
             Set: [1, d, 2, 3, 4, b, c, a]
List-ordered Set: [1, 2, 3, a, b, c, 4, d]
Haskell
<lang haskell>import qualified Data.List as List

values = [1,2,3,2,3,4]
unique = List.nub values</lang>
IDL
result = uniq( array[sort( array )] )
J
   ] a=: 4 5 ?@$ 13    NB. 4 by 5 matrix of numbers chosen from 0 to 12
4 3 2 8 0
1 9 5 1 7
6 3 9 9 4
2 1 5 3 2
   , a                 NB. sequence of the same elements
4 3 2 8 0 1 9 5 1 7 6 3 9 9 4 2 1 5 3 2
   ~. , a              NB. unique elements
4 3 2 8 0 1 9 5 7 6
The verb ~. removes duplicate items from any array (numeric, character, or other; vector, matrix, rank-n array). For example:
   ~. 'chthonic eleemosynary paronomasiac'
chtoni elmsyarp
Or
   0 1 1 2 0 */0 1 2
0 0 0
0 1 2
0 1 2
0 2 4
0 0 0
   ~. 0 1 1 2 0 */0 1 2
0 0 0
0 1 2
0 2 4
Java
<lang java5>import java.util.Set;
import java.util.HashSet;
import java.util.Arrays;

Object[] data = {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"};
Set<Object> uniqueSet = new HashSet<Object>(Arrays.asList(data));
Object[] unique = uniqueSet.toArray();</lang>
JavaScript
<lang javascript>function unique(ary) {
    // concat() with no args is a way to clone an array
    var u = ary.concat().sort();
    for (var i = 1; i < u.length; ) {
        if (u[i-1] == u[i])
            u.splice(i,1);
        else
            i++;
    }
    return u;
}

function output(str) {
    try {
        WScript.Echo(str);  // WSH
    } catch(err) {
        print(str);         // Rhino
    }
}

var ary = [1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"];
output(unique(ary));</lang>
output:
1,2,3,4,a,b,c,d
Logo
show remdup [1 2 3 a b c 2 3 4 b c d] ; [1 a 2 3 4 b c d]
MAXScript
uniques = #(1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d")
for i in uniques.count to 1 by -1 do
(
    id = findItem uniques uniques[i]
    if (id != i) do deleteItem uniques i
)
Mathematica
Built-in function: <lang Mathematica>
DeleteDuplicates[{0, 2, 1, 4, 2, 0, 3, 1, 1, 1, 0, 3}]
</lang> gives back: <lang Mathematica>
{0, 2, 1, 4, 3}
</lang> Custom function (reordering of elements is possible): <lang Mathematica>
NoDupes[input_List] := Split[Sort[input]][[All, 1]]
NoDupes[{0, 2, 1, 4, 2, 0, 3, 1, 1, 1, 0, 3}]
</lang> gives back: <lang Mathematica> {0, 1, 2, 3, 4} </lang>
Nial
uniques := [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
cull uniques
+-+-+-+-+-+-+-+-+
|1|2|3|a|b|c|4|d|
+-+-+-+-+-+-+-+-+
Using strand form
cull 1 1 2 2 3 3
1 2 3
Objective-C
<lang objc>NSArray *items = [NSArray arrayWithObjects:@"A", @"B", @"C", @"B", @"A", nil];
NSSet *unique = [NSSet setWithArray:items];</lang>
OCaml
<lang ocaml>let uniq lst =
  let unique_set = Hashtbl.create (List.length lst) in
  List.iter (fun x -> Hashtbl.replace unique_set x ()) lst;
  Hashtbl.fold (fun x () xs -> x :: xs) unique_set []
let _ =
uniq [1;2;3;2;3;4]</lang>
Perl
<lang perl>use List::MoreUtils qw(uniq);
my @uniq = uniq qw(1 2 3 a b c 2 3 4 b c d);</lang>
Without modules:
<lang perl>my %seen;
my @uniq = grep {!$seen{$_}++} qw(1 2 3 a b c 2 3 4 b c d);</lang>
Note: the following two solutions convert elements to strings in the result, so it will cause problems with references.
Alternately:
<lang perl>my %hash = map { $_ => 1 } qw(1 2 3 a b c 2 3 4 b c d);
my @uniq = keys %hash;</lang>
Alternately:
<lang perl>my %seen;
@seen{qw(1 2 3 a b c 2 3 4 b c d)} = ();
my @uniq = keys %seen;</lang>
PHP
<lang php>$list = array(1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd');
$unique_list = array_unique($list);</lang>
Pop11
;;; Initial array
lvars ar = {1 2 3 2 3 4};
;;; Create a hash table
lvars ht = newmapping([], 50, 0, true);
;;; Put all array as keys into the hash table
lvars i;
for i from 1 to length(ar) do 1 -> ht(ar(i)) endfor;

;;; Collect keys into a list
lvars ls = [];
appdata(ht, procedure(x); cons(front(x), ls) -> ls; endprocedure);
PowerShell
The common array for both approaches:
<lang powershell>$data = 1,2,3,1,2,3,4,1</lang>
Using a hash table to remove duplicates:
<lang powershell>$h = @{}
foreach ($x in $data) {
$h[$x] = 1
}
$h.Keys</lang>
Sorting and removing duplicates along the way can be done with the Sort-Object cmdlet.
<lang powershell>$data | Sort-Object -Unique</lang>
Prolog
<lang prolog>uniq(Data,Uniques) :- sort(Data,Uniques).</lang>
Example usage:
<lang prolog>?- uniq([1, 2, 3, 2, 3, 4],Xs).
Xs = [1, 2, 3, 4]</lang>
Python
<lang python>items = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
unique = list(set(items))</lang>
Note: the above will not work if any elements are not hashable (e.g. if they are list, dict, set, or many other mutable types).
See also http://www.peterbe.com/plog/uniqifiers-benchmark and http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
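As a sketch of that caveat: when elements are unhashable (the nested lists below are only an illustration), the equality-only O(n^2) approach from the introduction still applies:

```python
items = [1, 2, [5, 6], 'a', [5, 6], 1]  # the nested lists are not hashable

unique = []
for x in items:
    if x not in unique:  # 'in' compares with ==, so no hashing is required
        unique.append(x)

print(unique)  # [1, 2, [5, 6], 'a']
```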
R
items <- c(1,2,3,2,4,3,2)
unique(items)
Raven
[ 1 2 3 'a' 'b' 'c' 2 3 4 'b' 'c' 'd' ] as items
items copy unique print
list (8 items)
 0 => 1
 1 => 2
 2 => 3
 3 => "a"
 4 => "b"
 5 => "c"
 6 => 4
 7 => "d"
Ruby
<lang ruby>ary = [1,1,2,1,'redundant',[1,2,3],[1,2,3],'redundant']
uniq_ary = ary.uniq
# => [1, 2, "redundant", [1, 2, 3]]</lang>
Scala
<lang scala>val list = List(1,2,3,4,2,3,4,99)
val l2 = list.removeDuplicates
// l2: scala.List[scala.Int] = List(1,2,3,4,99)</lang>
Scheme
<lang scheme>(define (remove-duplicates l)
  (do ((a '() (if (member (car l) a) a (cons (car l) a)))
       (l l (cdr l)))
      ((null? l) (reverse a))))
(remove-duplicates (list 1 2 1 3 2 4 5))</lang>
<lang scheme>(1 2 3 4 5)</lang>
Some implementations provide remove-duplicates in their standard library.
SETL
<lang SETL>items := [0,7,6,6,4,9,7,1,2,3,2];
print(unique(items));</lang>
Output in arbitrary order (convert tuple->set then set->tuple):
<lang SETL>proc unique(items);
return [item: item in {item: item in items}];
end proc;</lang>
Preserving source order:
<lang SETL>proc unique(items);
  seen := {};
  return [item: item in items, nps in {#seen} | #(seen with:= item) > nps];
end proc;</lang>
Smalltalk
<lang smalltalk>|a|
"Example of creating a collection"
a := #( 1 1 2 'hello' 'world' #symbol #another 2 'hello' #symbol ).
a asSet printNl.</lang>
Output:
Set (1 2 #symbol 'world' #another 'hello' )
Tcl
The concept of an "array" in TCL is strictly associative - and since there cannot be duplicate keys, there cannot be a redundant element in an array. What is called "array" in many other languages is probably better represented by the "list" in TCL (as in LISP). <lang tcl>set result [lsort -unique $listname]</lang>
UnixPipes
Assuming a sequence is represented by lines in a file.
<lang bash>bash$ # original list
bash$ printf '6\n2\n3\n6\n4\n2\n'
6
2
3
6
4
2
bash$ # made uniq
bash$ printf '6\n2\n3\n6\n4\n2\n'|sort -n|uniq
2
3
4
6
bash$</lang>
or
<lang bash>bash$ # original list
bash$ printf '6\n2\n3\n6\n4\n2\n'
6
2
3
6
4
2
bash$ # made uniq
bash$ printf '6\n2\n3\n6\n4\n2\n'|sort -u
2
3
4
6
bash$</lang>
Ursala
The algorithm is to partition the list by equality and take one representative from each class, which can be done by letting the built-in partition operator, |=, use its default comparison relation. This works on lists of any type including character strings, but the comparison is based only on structural equivalence. It's up to the programmer to decide whether that's a relevant criterion for equivalence or else specify a better one.
<lang Ursala>#cast %s

example = |=hS& 'mississippi'</lang>
output:
'mspi'
Vedit macro language
The input "array" is an edit buffer where each line is one element. <lang vedit>Sort(0, File_Size) // sort the data While(Replace("^(.*)\N\1$", "\1", REGEXP+BEGIN+NOERR)){} // remove duplicates</lang>