Remove duplicate elements: Difference between revisions
Revision as of 12:44, 31 January 2009
You are encouraged to solve this task according to the task description, using any language you may know.
Given an array or other data structure containing a sequence of elements, derive a sequence of elements in which all duplicates are removed.
There are basically three approaches seen here:
- Put the elements into a hash table which does not allow duplicates. The complexity is O(n) on average, and O(n^2) worst case. This approach requires a hash function for your type (compatible with equality), either built into your language or provided by the user.
- Sort the elements and remove consecutive duplicate elements. The complexity of the best sorting algorithms is O(n log n). This approach requires that your type be "comparable", i.e. have an ordering. Putting the elements into a self-balancing binary search tree is a special case of sorting.
- Go through the list, and for each element, check the rest of the list to see if it appears again, and discard it if it does. The complexity is O(n^2). The upside is that this always works on any type, provided only that you can test for equality.
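The three approaches above can be sketched briefly (a minimal Python illustration, not tied to any particular entry below; each variant preserves the order of first occurrence):

```python
def uniq_hash(seq):
    # O(n) average: remember what we've seen in a set (hash table)
    seen = set()
    result = []
    for x in seq:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result

def uniq_sort(seq):
    # O(n log n): sort, then drop consecutive duplicates
    # (note: sorting loses the original element order)
    s = sorted(seq)
    result = s[:1]
    for x in s[1:]:
        if x != result[-1]:
            result.append(x)
    return result

def uniq_quadratic(seq):
    # O(n^2): for each element, scan the elements already kept
    result = []
    for x in seq:
        if x not in result:  # linear scan of the output so far
            result.append(x)
    return result

print(uniq_hash([1, 2, 3, 2, 3, 4]))       # [1, 2, 3, 4]
print(uniq_sort([1, 2, 3, 2, 3, 4]))       # [1, 2, 3, 4]
print(uniq_quadratic([1, 2, 3, 2, 3, 4]))  # [1, 2, 3, 4]
```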
Ada
<lang ada>
with Ada.Containers.Ordered_Sets;
with Ada.Text_IO; use Ada.Text_IO;

procedure Unique_Set is
   package Int_Sets is new Ada.Containers.Ordered_Sets(Integer);
   use Int_Sets;
   Nums : array (Natural range <>) of Integer := (1,2,3,4,5,5,6,7,1);
   Unique : Set;
   Set_Cur : Cursor;
   Success : Boolean;
begin
   for I in Nums'Range loop
      Unique.Insert(Nums(I), Set_Cur, Success);
   end loop;
   Set_Cur := Unique.First;
   loop
      Put_Line(Item => Integer'Image(Element(Set_Cur)));
      exit when Set_Cur = Unique.Last;
      Set_Cur := Next(Set_Cur);
   end loop;
end Unique_Set;
</lang>
APL
Works with: Dyalog APL

The primitive monad ∪ means "unique", so:
<lang apl>
      ∪ 1 2 3 1 2 3 4 1
1 2 3 4
</lang>
Works with: APL2
<lang apl>
      w←1 2 3 1 2 3 4 1
      ((⍳⍨w)=⍳⍴w)/w
1 2 3 4
</lang>
AppleScript
<lang applescript>
set array to {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"}
set unique to {}
repeat with i in array
    -- very important -- list index starts at 1 not 0
    if (i is not in unique) then
        set unique to unique & i
    end if
end repeat
</lang>
C
Since there's no way to know ahead of time how large the new data structure will need to be, we'll return a linked list instead of an array.
<lang c>
#include <stdio.h>
#include <stdlib.h>

struct list_node {int x; struct list_node *next;};
typedef struct list_node node;

node *uniq(int *a, unsigned alen)
{
    if (alen == 0) return NULL;
    node *start = malloc(sizeof(node));
    if (start == NULL) exit(EXIT_FAILURE);
    start->x = a[0];
    start->next = NULL;

    for (unsigned i = 1; i < alen; ++i) {
        node *n = start;
        for (;; n = n->next) {
            if (a[i] == n->x) break;
            if (n->next == NULL) {
                n->next = malloc(sizeof(node));
                n = n->next;
                if (n == NULL) exit(EXIT_FAILURE);
                n->x = a[i];
                n->next = NULL;
                break;
            }
        }
    }
    return start;
}

int main(void)
{
    int a[] = {1, 2, 1, 4, 5, 2, 15, 1, 3, 4};
    for (node *n = uniq(a, 10); n != NULL; n = n->next)
        printf("%d ", n->x);
    puts("");
    return 0;
}
</lang>
C++
This version uses std::set, which requires its element type be comparable using the < operator.
<lang cpp>
#include <set>
#include <iostream>

using namespace std;

int main() {
    typedef set<int> TySet;
    int data[] = {1, 2, 3, 2, 3, 4};
    TySet unique_set(data, data + 6);

    cout << "Set items:" << endl;
    for (TySet::iterator iter = unique_set.begin(); iter != unique_set.end(); ++iter)
        cout << *iter << " ";
    cout << endl;
}
</lang>
This version uses hash_set, which is part of the SGI extension to the Standard Template Library. It is not part of the C++ standard library. It requires that its element type have a hash function.
<lang cpp>
#include <ext/hash_set>
#include <iostream>

using namespace std;

int main() {
    typedef __gnu_cxx::hash_set<int> TyHash;
    int data[] = {1, 2, 3, 2, 3, 4};
    TyHash unique_set(data, data + 6);

    cout << "Set items:" << endl;
    for (TyHash::iterator iter = unique_set.begin(); iter != unique_set.end(); ++iter)
        cout << *iter << " ";
    cout << endl;
}
</lang>
This version uses unordered_set, which is part of the TR1, which is likely to be included in the next version of C++. It is not part of the C++ standard library. It requires that its element type have a hash function.
<lang cpp>
#include <tr1/unordered_set>
#include <iostream>

using namespace std;

int main() {
    typedef tr1::unordered_set<int> TyHash;
    int data[] = {1, 2, 3, 2, 3, 4};
    TyHash unique_set(data, data + 6);

    cout << "Set items:" << endl;
    for (TyHash::iterator iter = unique_set.begin(); iter != unique_set.end(); ++iter)
        cout << *iter << " ";
    cout << endl;
}
</lang>
Alternative method working directly on the array:
<lang cpp>
#include <iostream>
#include <iterator>
#include <algorithm>
#include <cstddef>

// helper template: yields a pointer one past the end of a built-in array
template<typename T, std::size_t size> T* end(T (&array)[size]) { return array + size; }

int main() {
    int data[] = { 1, 2, 3, 2, 3, 4 };
    std::sort(data, end(data));
    int* new_end = std::unique(data, end(data));
    std::copy(data, new_end, std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;
}
</lang>
C#
C# 2.0
<lang csharp>
List<int> nums = new List<int>( new int[] { 1, 1, 2, 3, 4, 4 } );
List<int> unique = new List<int>();
foreach( int i in nums )
    if( !unique.Contains( i ) )
        unique.Add( i );
</lang>
</lang>
C# 3.0
<lang csharp>
var unique = (new int[] { 1, 1, 2, 3, 4, 4 }).Distinct();
</lang>
Common Lisp
To remove duplicates non-destructively:
<lang lisp>
(remove-duplicates '(1 3 2 9 1 2 3 8 8 1 0 2))
> (9 3 8 1 0 2)
</lang>
Or, to remove duplicates in-place:
<lang lisp>
(delete-duplicates '(1 3 2 9 1 2 3 8 8 1 0 2))
> (9 3 8 1 0 2)
</lang>
D
<lang d>
void main() {
    int[] data = [1, 2, 3, 2, 3, 4];
    int[int] hash;
    foreach (el; data)
        hash[el] = 0;
    // hash.keys now holds the unique elements of data
}
</lang>
E
[1,2,3,2,3,4].asSet().getElements()
Erlang
List = [1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5].
Set = sets:from_list(List).
Factor
USING: sets ;
V{ 1 2 1 3 2 4 5 } prune .
V{ 1 2 3 4 5 }
Forth
Forth has no built-in hashtable facility, so the easiest way to achieve this goal is to follow the example of the Unix "uniq" program.
The word uniq, if given a sorted array of cells, will remove the duplicate entries and return the new length of the array. For simplicity, uniq has been written to process cells (which are to Forth what "int" is to C), but could easily be modified to handle a variety of data types through deferred procedures, etc.
The input data is assumed to be sorted.
\ Increments a2 until it no longer points to the same value as a1
\ a3 is the address beyond the data a2 is traversing.
: skip-dups ( a1 a2 a3 -- a1 a2+n )
  dup rot ?do
    over @ i @ <> if drop i leave then
  cell +loop ;

\ Compress an array of cells by removing adjacent duplicates
\ Returns the new count
: uniq ( a n -- n2 )
  over >r          \ Original addr to return stack
  cells over + >r  \ "to" addr now on return stack, available as r@
  dup
  begin ( write read ) dup r@ < while
    2dup @ swap !  \ copy one cell
    cell+ r@ skip-dups
    cell 0 d+      \ increment write ptr only
  repeat
  r> 2drop r> - cell / ;
Here is another implementation of "uniq" that uses popular parameter and local-variable extension words. It is structurally the same as the implementation above, but uses less overt stack manipulation.
: uniqv { a n \ r e -- n }
  a n cells+ to e
  a dup to r  \ the write address lives on the stack
  begin r e < while
    r @ over !
    r cell+ e skip-dups to r
    cell+
  repeat
  a - cell / ;
To test this code, you can execute:
create test 1 , 2 , 3 , 2 , 6 , 4 , 5 , 3 , 6 ,
here test - cell / constant ntest
: .test ( n -- ) 0 ?do test i cells + ? loop ;
test ntest 2dup cell-sort uniq .test
output
1 2 3 4 5 6 ok
Haskell
<lang haskell>
import qualified Data.List as List

values = [1,2,3,2,3,4]
unique = List.nub values
</lang>
IDL
result = uniq( array[sort( array )] )
J
   ] a=: 4 5 ?@$ 13   NB. 4 by 5 matrix of numbers chosen from 0 to 12
4 3 2 8 0
1 9 5 1 7
6 3 9 9 4
2 1 5 3 2
   , a                NB. sequence of the same elements
4 3 2 8 0 1 9 5 1 7 6 3 9 9 4 2 1 5 3 2
   ~. , a             NB. unique elements
4 3 2 8 0 1 9 5 7 6
The verb ~. removes duplicate items from any array (numeric, character, or other; vector, matrix, rank-n array). For example:
   ~. 'chthonic eleemosynary paronomasiac'
chtoni elmsyarp
Java
<lang java5>
import java.util.Set;
import java.util.HashSet;
import java.util.Arrays;

Object[] data = {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"};
Set<Object> uniqueSet = new HashSet<Object>(Arrays.asList(data));
Object[] unique = uniqueSet.toArray();
</lang>
Logo
show remdup [1 2 3 a b c 2 3 4 b c d] ; [1 a 2 3 4 b c d]
MAXScript
uniques = #(1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d")
for i in uniques.count to 1 by -1 do
(
    id = findItem uniques uniques[i]
    if (id != i) do deleteItem uniques i
)
Nial
uniques := [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
cull uniques
+-+-+-+-+-+-+-+-+
|1|2|3|a|b|c|4|d|
+-+-+-+-+-+-+-+-+
Using strand form
cull 1 1 2 2 3 3
1 2 3
Objective-C
<lang objc>
NSArray *items = [NSArray arrayWithObjects:@"A", @"B", @"C", @"B", @"A", nil];
NSSet *unique = [NSSet setWithArray:items];
</lang>
OCaml
<lang ocaml>
let uniq lst =
  let unique_set = Hashtbl.create (List.length lst) in
  List.iter (fun x -> Hashtbl.replace unique_set x ()) lst;
  Hashtbl.fold (fun x () xs -> x :: xs) unique_set []

let _ =
  uniq [1;2;3;2;3;4]
</lang>
Perl
<lang perl>
use List::MoreUtils qw(uniq);
my @uniq = uniq qw(1 2 3 a b c 2 3 4 b c d);
</lang>
Without modules:
<lang perl>
my %seen;
my @uniq = grep {!$seen{$_}++} qw(1 2 3 a b c 2 3 4 b c d);
</lang>
PHP
<lang php>
$list = array(1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd');
$unique_list = array_unique($list);
</lang>
Pop11
;;; Initial array
lvars ar = {1 2 3 2 3 4};
;;; Create a hash table
lvars ht = newmapping([], 50, 0, true);
;;; Put all array as keys into the hash table
lvars i;
for i from 1 to length(ar) do 1 -> ht(ar(i)) endfor;
;;; Collect keys into a list
lvars ls = [];
appdata(ht, procedure(x); cons(front(x), ls) -> ls; endprocedure);
Prolog
<lang prolog>
uniq(Data,Uniques) :- sort(Data,Uniques).
</lang>
Example usage:
<lang prolog>
?- uniq([1, 2, 3, 2, 3, 4],Xs).
Xs = [1, 2, 3, 4]
</lang>
Python
<lang python>
items = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
unique = list(set(items))
</lang>
See also http://www.peterbe.com/plog/uniqifiers-benchmark and http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
Raven
[ 1 2 3 'a' 'b' 'c' 2 3 4 'b' 'c' 'd' ] as items
items copy unique print

list (8 items)
 0 => 1
 1 => 2
 2 => 3
 3 => "a"
 4 => "b"
 5 => "c"
 6 => 4
 7 => "d"
Ruby
<lang ruby>
ary = [1,1,2,1,'redundant',[1,2,3],[1,2,3],'redundant']
uniq_ary = ary.uniq
# => [1, 2, "redundant", [1, 2, 3]]
</lang>
Scala
<lang scala>
val list = List(1,2,3,4,2,3,4,99)
val l2 = list.removeDuplicates
// l2: scala.List[scala.Int] = List(1,2,3,4,99)
</lang>
Scheme
<lang scheme>
(define (remove-duplicates l)
  (do ((a '() (if (member (car l) a) a (cons (car l) a)))
       (l l (cdr l)))
      ((null? l) (reverse a))))

(remove-duplicates (list 1 2 1 3 2 4 5))
</lang>
<lang scheme>
(1 2 3 4 5)
</lang>
Some implementations provide remove-duplicates in their standard library.
Tcl
The concept of an "array" in TCL is strictly associative - and since there cannot be duplicate keys, there cannot be a redundant element in an array. What is called "array" in many other languages is probably better represented by the "list" in TCL (as in LISP).
<lang tcl>
set result [lsort -unique $listname]
</lang>
UnixPipes
Assuming a sequence is represented by lines in a file.
<lang bash>
bash$ # original list
bash$ printf '6\n2\n3\n6\n4\n2\n'
6
2
3
6
4
2
bash$ # made uniq
bash$ printf '6\n2\n3\n6\n4\n2\n' | sort -n | uniq
2
3
4
6
bash$
</lang>
or
<lang bash>
bash$ # original list
bash$ printf '6\n2\n3\n6\n4\n2\n'
6
2
3
6
4
2
bash$ # made uniq
bash$ printf '6\n2\n3\n6\n4\n2\n' | sort -u
2
3
4
6
bash$
</lang>