Remove duplicate elements
You are encouraged to solve this task according to the task description, using any language you may know.
Given an array (a data structure containing a sequence of elements), derive a sequence of elements in which all duplicates are removed.
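For example (a minimal sketch in Python, shown here only to illustrate the task; language-specific solutions follow below), building a hash-based set discards duplicates but may reorder elements, so the result is sorted for a deterministic display:

```python
# Remove duplicates by building a set, which discards repeated
# elements; a set does not preserve the original order, so the
# result is sorted here for a deterministic output.
items = [1, 2, 3, 2, 3, 4]
unique = sorted(set(items))
print(unique)  # [1, 2, 3, 4]
```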
Ada
with Ada.Containers.Ordered_Sets;
with Ada.Text_IO; use Ada.Text_IO;

procedure Unique_Set is
   package Int_Sets is new Ada.Containers.Ordered_Sets (Integer);
   use Int_Sets;
   Nums    : array (Natural range <>) of Integer := (1, 2, 3, 4, 5, 5, 6, 7, 1);
   Unique  : Set;
   Set_Cur : Cursor;
   Success : Boolean;
begin
   for I in Nums'Range loop
      Unique.Insert (Nums (I), Set_Cur, Success);
   end loop;
   Set_Cur := Unique.First;
   loop
      Put_Line (Item => Integer'Image (Element (Set_Cur)));
      exit when Set_Cur = Unique.Last;
      Set_Cur := Next (Set_Cur);
   end loop;
end Unique_Set;
AppleScript
set array to {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"}
set unique to {}
repeat with i in array -- note: list indices start at 1, not 0
	-- i is a reference to the item, so compare its contents
	if (contents of i) is not in unique then
		set end of unique to contents of i
	end if
end repeat
C++
This version uses std::set, which requires its element type be comparable using the < operator.
#include <set>
#include <iostream>

using namespace std;

int main()
{
    typedef set<int> TySet;
    int data[] = {1, 2, 3, 2, 3, 4};
    TySet unique_set(data, data + 6);

    cout << "Set items:" << endl;
    for (TySet::iterator iter = unique_set.begin(); iter != unique_set.end(); ++iter)
        cout << *iter << " ";
    cout << endl;
}
This version uses hash_set, which is part of the SGI extension to the Standard Template Library. It is not part of the C++ standard library. It requires that its element type have a hash function.
#include <ext/hash_set>
#include <iostream>

using namespace std;

int main()
{
    typedef __gnu_cxx::hash_set<int> TyHash;
    int data[] = {1, 2, 3, 2, 3, 4};
    TyHash unique_set(data, data + 6);

    cout << "Set items:" << endl;
    for (TyHash::iterator iter = unique_set.begin(); iter != unique_set.end(); ++iter)
        cout << *iter << " ";
    cout << endl;
}
This version uses unordered_set, which is part of TR1 and is expected to be included in the next version of the C++ standard. It is not yet part of the C++ standard library. It requires that its element type have a hash function.
#include <tr1/unordered_set>
#include <iostream>

using namespace std;

int main()
{
    typedef tr1::unordered_set<int> TyHash;
    int data[] = {1, 2, 3, 2, 3, 4};
    TyHash unique_set(data, data + 6);

    cout << "Set items:" << endl;
    for (TyHash::iterator iter = unique_set.begin(); iter != unique_set.end(); ++iter)
        cout << *iter << " ";
    cout << endl;
}
Alternative method working directly on the array:
#include <iostream>
#include <iterator>
#include <algorithm>
#include <cstddef>

// helper template: returns a pointer one past the last element
// of a statically sized array
template <typename T, std::size_t size>
T* end(T (&array)[size])
{
    return array + size;
}

int main()
{
    int data[] = {1, 2, 3, 2, 3, 4};
    std::sort(data, end(data));                  // unique requires sorted input
    int* new_end = std::unique(data, end(data)); // move duplicates to the end
    std::copy(data, new_end, std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;
}
C#
List<int> nums = new List<int>(new int[] { 1, 1, 2, 3, 4, 4 });
List<int> unique = new List<int>();
foreach (int i in nums)
    if (!unique.Contains(i))
        unique.Add(i);
Common Lisp
To remove duplicates non-destructively:
(remove-duplicates '(1 3 2 9 1 2 3 8 8 1 0 2))
> (9 3 8 1 0 2)
Or, to remove duplicates in-place:
(delete-duplicates '(1 3 2 9 1 2 3 8 8 1 0 2))
> (9 3 8 1 0 2)
D
import std.stdio;

void main()
{
    int[] data = [1, 2, 3, 2, 3, 4];
    int[int] hash;          // associative array used as a set
    foreach (el; data)
        hash[el] = 0;
    writeln(hash.keys);     // the unique elements
}
E
[1,2,3,2,3,4].asSet().getElements()
Erlang
List = [1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5].
Set = sets:from_list(List).
Forth
Forth has no built-in hash table facility, so the easiest way to achieve this goal is to take the classic "uniq" program as an example.
The word uniq, if given a sorted array of cells, will remove the duplicate entries and return the new length of the array. For simplicity, uniq has been written to process cells (which are to Forth what "int" is to C), but could easily be modified to handle a variety of data types through deferred procedures, etc.
The input data is assumed to be sorted.
\ Increments a2 until it no longer points to the same value as a1.
\ a3 is the address beyond the data a2 is traversing.
: skip-dups ( a1 a2 a3 -- a1 a2+n )
  dup rot ?do
    over @ i @ <> if drop i leave then
  cell +loop ;

\ Compress an array of cells by removing adjacent duplicates.
\ Returns the new count.
: uniq ( a n -- n2 )
  over >r            \ original addr to return stack
  cells over + >r    \ "to" addr now on return stack, available as r@
  dup
  begin ( write read )
    dup r@ <
  while
    2dup @ swap !    \ copy one cell
    cell+ r@ skip-dups
    cell 0 d+        \ increment write ptr only
  repeat
  r> 2drop r> - cell / ;
Here is another implementation of "uniq" that uses a popular extension wordset for named parameters and local variables. It is structurally the same as the above implementation, but uses less overt stack manipulation.
: uniqv { a n \ r e -- n }
  a n cells + to e
  a dup to r         \ the write address lives on the stack
  begin
    r e <
  while
    r @ over !
    r cell+ e skip-dups to r
    cell+
  repeat
  a - cell / ;
To test this code, you can execute:
create test 1 , 2 , 3 , 2 , 6 , 4 , 5 , 3 , 6 ,
here test - cell / constant ntest

: .test ( n -- )
  0 ?do test i cells + ? loop ;

test ntest 2dup cell-sort uniq .test
output
1 2 3 4 5 6 ok
Haskell
import Data.List (nub)

values = [1, 2, 3, 2, 3, 4]
unique = nub values
IDL
result = uniq( array[sort( array )] )
J
   ] a=: 4 5 ?@$ 13    NB. 4 by 5 matrix of numbers chosen from 0 to 12
4 3 2 8 0
1 9 5 1 7
6 3 9 9 4
2 1 5 3 2
   , a                 NB. sequence of the same elements
4 3 2 8 0 1 9 5 1 7 6 3 9 9 4 2 1 5 3 2
   ~. , a              NB. unique elements
4 3 2 8 0 1 9 5 7 6
The verb ~. removes duplicate items from any array (numeric, character, or other; vector, matrix, rank-n array). For example:
   ~. 'chthonic eleemosynary paronomasiac'
chtoni elmsyarp
Java
import java.util.Set;
import java.util.HashSet;
import java.util.Arrays;
Object[] data = {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"};
Set<Object> uniqueSet = new HashSet<Object>(Arrays.asList(data));
Object[] unique = uniqueSet.toArray();
MAXScript
uniques = #(1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d")
for i in uniques.count to 1 by -1 do
(
    id = findItem uniques uniques[i]
    if (id != i) do deleteItem uniques i
)
Objective-C
NSArray *items = [NSArray arrayWithObjects:@"A", @"B", @"C", @"B", @"A", nil];
NSSet *unique = [NSSet setWithArray:items];
OCaml
let uniq lst =
  let unique_set = Hashtbl.create (List.length lst) in
  List.iter (fun x -> Hashtbl.replace unique_set x ()) lst;
  Hashtbl.fold (fun x () xs -> x :: xs) unique_set []

let _ = uniq [1; 2; 3; 2; 3; 4]
Perl
use List::MoreUtils qw(uniq);

my @uniq = uniq qw(1 2 3 a b c 2 3 4 b c d);
Without modules:
my %seen;
my @uniq = grep { !$seen{$_}++ } qw(1 2 3 a b c 2 3 4 b c d);
PHP
$list = array(1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd');
$unique_list = array_unique($list);
Pop11
;;; Initial array
lvars ar = {1 2 3 2 3 4};
;;; Create a hash table
lvars ht = newmapping([], 50, 0, true);
;;; Put all array elements as keys into the hash table
lvars i;
for i from 1 to length(ar) do 1 -> ht(ar(i)) endfor;
;;; Collect keys into a list
lvars ls = [];
appdata(ht, procedure(x); cons(front(x), ls) -> ls; endprocedure);
Prolog
uniq(Data,Uniques) :- sort(Data,Uniques).
Example usage:
?- uniq([1, 2, 3, 2, 3, 4], Xs).
Xs = [1, 2, 3, 4]
Python
items = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
unique = list(set(items))
See also http://www.peterbe.com/plog/uniqifiers-benchmark and http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
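The pages linked above compare, among others, order-preserving uniquifiers. Unlike list(set(items)), which may reorder elements, a common order-preserving sketch keeps a set of elements already seen:

```python
# Order-preserving duplicate removal: remember the elements already
# seen and emit each element only on its first occurrence.
def uniq(seq):
    seen = set()
    result = []
    for x in seq:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result

print(uniq([1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']))
# [1, 2, 3, 'a', 'b', 'c', 4, 'd']
```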
Raven
[ 1 2 3 'a' 'b' 'c' 2 3 4 'b' 'c' 'd' ] as items
items copy unique print
list (8 items)
 0 => 1
 1 => 2
 2 => 3
 3 => "a"
 4 => "b"
 5 => "c"
 6 => 4
 7 => "d"
Ruby
ary = [1, 1, 2, 1, 'redundant', [1, 2, 3], [1, 2, 3], 'redundant']
uniq_ary = ary.uniq
# => [1, 2, "redundant", [1, 2, 3]]
Scala
val list = List(1, 2, 3, 4, 2, 3, 4, 99)
val l2 = list.removeDuplicates
// l2: List[Int] = List(1, 2, 3, 4, 99)
Tcl
The concept of an "array" in Tcl is strictly associative: since there cannot be duplicate keys, there cannot be a duplicate element in an array. What is called an "array" in many other languages is better represented in Tcl by the "list" (as in Lisp). Removing duplicates from a list:
set result [lsort -unique $listname]