Remove duplicate elements

From Rosetta Code
Task
Remove duplicate elements
You are encouraged to solve this task according to the task description, using any language you may know.

Given an array (or any other data structure containing a sequence of elements), derive a sequence in which all duplicate elements have been removed.

There are basically three approaches seen here:

  • Put the elements into a hash table which does not allow duplicates. The complexity is O(n) on average, and O(n^2) worst case. This approach requires a hash function for your type (compatible with equality), either built into your language or provided by the user.
  • Sort the elements and remove consecutive duplicate elements. The complexity of the best sorting algorithms is O(n log n). This approach requires that your type be "comparable", i.e. have an ordering. Putting the elements into a self-balancing binary search tree is a special case of sorting.
  • Go through the list, and for each element, check the rest of the list to see if it appears again, and discard it if it does. The complexity is O(n^2). The advantage is that this works for any type, provided only that you can test for equality. (All three approaches are sketched in the example after this list.)
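
As a quick illustration of the three approaches, here is a minimal sketch in Python (the function names are illustrative only and do not come from any entry below):

<lang python>def unique_hash(seq):
    # hash-based: O(n) on average; elements must be hashable
    return list(set(seq))

def unique_sort(seq):
    # sort-based: O(n log n); elements must be orderable; result comes out sorted
    out = []
    for x in sorted(seq):
        if not out or x != out[-1]:
            out.append(x)
    return out

def unique_scan(seq):
    # quadratic scan: O(n^2); only needs equality; keeps first-seen order
    out = []
    for x in seq:
        if x not in out:
            out.append(x)
    return out

print(unique_scan([1, 2, 1, 4, 5, 2, 15, 1, 3, 4]))  # [1, 2, 4, 5, 15, 3]</lang>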

Ada

Works with: GNAT version GPL 2007

<lang ada>

with Ada.Containers.Ordered_Sets;
with Ada.Text_IO; use Ada.Text_IO;

procedure Unique_Set is
   package Int_Sets is new Ada.Containers.Ordered_Sets(Integer);
   use Int_Sets;
   Nums : array (Natural range <>) of Integer := (1,2,3,4,5,5,6,7,1);
   Unique : Set;
   Set_Cur : Cursor;
   Success : Boolean;
begin
   for I in Nums'range loop
      Unique.Insert(Nums(I), Set_Cur, Success);
   end loop;
   Set_Cur := Unique.First;
   loop
      Put_Line(Item => Integer'Image(Element(Set_Cur)));
      exit when Set_Cur = Unique.Last;
      Set_Cur := Next(Set_Cur);
   end loop;
end Unique_Set;

</lang>

APL

Works with: Dyalog APL

The primitive monad ∪ means "unique", so:

     ∪ 1 2 3 1 2 3 4 1
1 2 3 4
Works with: APL2
     w←1 2 3 1 2 3 4 1
     ((⍳⍨w)=⍳⍴w)/w
1 2 3 4

AppleScript

<lang applescript>set array to {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"}
set unique to {}
repeat with i in array
    -- very important -- list index starts at 1 not 0
    if (i is not in unique) then
        set unique to unique & i
    end if
end repeat</lang>

AutoHotkey

The built-in Sort command has an option to remove duplicates.

<lang AutoHotkey>a = 1,2,1,4,5,2,15,1,3,4
Sort, a, N U D`,
MsgBox % a  ; 1,2,3,4,5,15</lang>

AWK

We produce an array a with duplicates from a string; then index a second array b with the contents of a, so that duplicates make only one entry; then produce a string with the keys of b, which is finally output.

$ awk 'BEGIN{split("a b c d c b a",a);for(i in a)b[a[i]]=1;r="";for(i in b)r=r" "i;print r}'
a b c d

C

Since there's no way to know ahead of time how large the new data structure will need to be, we'll return a linked list instead of an array.

<lang c>

#include <stdio.h>
#include <stdlib.h>

struct list_node {int x; struct list_node *next;};
typedef struct list_node node;

node * uniq(int *a, unsigned alen)

{if (alen == 0) return NULL;
 node *start = malloc(sizeof(node));
 if (start == NULL) exit(EXIT_FAILURE);
 start->x = a[0];
 start->next = NULL;
 for (int i = 1 ; i < alen ; ++i)
    {node *n = start;
     for (;; n = n->next)
        {if (a[i] == n->x) break;
         if (n->next == NULL)
            {n->next = malloc(sizeof(node));
             n = n->next;
             if (n == NULL) exit(EXIT_FAILURE);
             n->x = a[i];
             n->next = NULL;
             break;}}}
 return start;}

int main(void)

  {int a[] = {1, 2, 1, 4, 5, 2, 15, 1, 3, 4};
   for (node *n = uniq(a, 10) ; n != NULL ; n = n->next)
       printf("%d ", n->x);
   puts("");
   return 0;}

</lang>

C++

This version uses std::set, which requires its element type be comparable using the < operator. <lang cpp>

#include <set>
#include <iostream>

using namespace std;

int main() {

   typedef set<int> TySet;
   int data[] = {1, 2, 3, 2, 3, 4};
   TySet unique_set(data, data + 6);
   cout << "Set items:" << endl;
   for (TySet::iterator iter = unique_set.begin(); iter != unique_set.end(); iter++)
         cout << *iter << " ";
   cout << endl;

} </lang>

This version uses hash_set, which is part of the SGI extension to the Standard Template Library. It is not part of the C++ standard library. It requires that its element type have a hash function.

Works with: GCC

<lang cpp>

#include <ext/hash_set>
#include <iostream>

using namespace std;

int main() {

   typedef __gnu_cxx::hash_set<int> TyHash;
   int data[] = {1, 2, 3, 2, 3, 4};
   TyHash unique_set(data, data + 6);
   cout << "Set items:" << endl;
   for (TyHash::iterator iter = unique_set.begin(); iter != unique_set.end(); iter++)
         cout << *iter << " ";
   cout << endl;

} </lang>

This version uses unordered_set, which is part of TR1 and is expected to be included in the next version of the C++ standard. It is not yet part of the C++ standard library. It requires that its element type have a hash function.

Works with: GCC

<lang cpp>

#include <tr1/unordered_set>
#include <iostream>

using namespace std;

int main() {

   typedef tr1::unordered_set<int> TyHash;
   int data[] = {1, 2, 3, 2, 3, 4};
   TyHash unique_set(data, data + 6);
   cout << "Set items:" << endl;
   for (TyHash::iterator iter = unique_set.begin(); iter != unique_set.end(); iter++)
         cout << *iter << " ";
   cout << endl;

} </lang>

Alternative method working directly on the array:

<lang cpp>

#include <iostream>
#include <iterator>
#include <algorithm>
#include <cstddef>

// helper template
template<typename T, std::size_t size> T* end(T (&array)[size]) { return array+size; }

int main() {

 int data[] = { 1, 2, 3, 2, 3, 4 };
 std::sort(data, end(data));
 int* new_end = std::unique(data, end(data));
 std::copy(data, new_end, std::ostream_iterator<int>(std::cout, " "));
 std::cout << std::endl;

} </lang>

C#

C# 2.0

Works with: MSVS version 2005 and .Net Framework 2.0

<lang csharp>int[] nums = { 1, 1, 2, 3, 4, 4 };
List<int> unique = new List<int>();
for (int i = 0; i < nums.Length; i++)
    if (!unique.Contains(nums[i]))
        unique.Add(nums[i]);</lang>

C# 3.0

Works with: MSVS version 2008 and .Net Framework 3.5

<lang csharp>int[] nums = {1, 1, 2, 3, 4, 4};
int[] unique = nums.Distinct().ToArray();</lang>

Common Lisp

To remove duplicates non-destructively:

<lang lisp>(remove-duplicates '(1 3 2 9 1 2 3 8 8 1 0 2))
> (9 3 8 1 0 2)</lang>

Or, to remove duplicates in-place:

<lang lisp>(delete-duplicates '(1 3 2 9 1 2 3 8 8 1 0 2))
> (9 3 8 1 0 2)</lang>


D

<lang d>void main() {
    int[] data = [1, 2, 3, 2, 3, 4];
    int[int] hash;
    foreach (el; data)
        hash[el] = 0;
    int[] unique = hash.keys;  // the unique elements, in no particular order
}</lang>

E

[1,2,3,2,3,4].asSet().getElements()

Erlang

List = [1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5].
Set = sets:from_list(List).

Factor

 USING: sets ;
 V{ 1 2 1 3 2 4 5 } prune .
 V{ 1 2 3 4 5 }

Forth

Forth has no built-in hash table facility, so the easiest way to achieve this goal is to follow the example of the Unix "uniq" program: sort the data first, then remove adjacent duplicates.

The word uniq, if given a sorted array of cells, will remove the duplicate entries and return the new length of the array. For simplicity, uniq has been written to process cells (which are to Forth what "int" is to C), but could easily be modified to handle a variety of data types through deferred procedures, etc.

The input data is assumed to be sorted.

\ Increments a2 until it no longer points to the same value as a1
\ a3 is the address beyond the data a2 is traversing.
: skip-dups ( a1 a2 a3 -- a1 a2+n )
    dup rot ?do
      over @ i @ <> if drop i leave then
    cell +loop ;

\ Compress an array of cells by removing adjacent duplicates
\ Returns the new count
: uniq ( a n -- n2 )
   over >r             \ Original addr to return stack
   cells over + >r     \ "to" addr now on return stack, available as r@
   dup begin           ( write read )
      dup r@ <
   while
      2dup @ swap !    \ copy one cell
      cell+ r@ skip-dups
      cell 0 d+        \ increment write ptr only
   repeat  r> 2drop  r> - cell / ;

Here is another implementation of "uniq" that uses a popular extension wordset for named parameters and local variables. It is structurally the same as the above implementation, but uses less overt stack manipulation.

: uniqv { a n \ r e -- n }
    a n cells+ to e
    a dup to r
    \ the write address lives on the stack
    begin
      r e <
    while
      r @ over !
      r cell+ e skip-dups to r
      cell+
    repeat
    a - cell / ;

To test this code, you can execute:

create test 1 , 2 , 3 , 2 , 6 , 4 , 5 , 3 , 6 ,
here test - cell / constant ntest
: .test ( n -- ) 0 ?do test i cells + ? loop ; 

test ntest 2dup cell-sort uniq .test

output

1 2 3 4 5 6 ok

Groovy

<lang groovy>def list = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
assert list.size() == 12
println "   Original List: ${list}"

// Filtering the List
list.unique()
assert list.size() == 8
println "   Filtered List: ${list}"

list = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
assert list.size() == 12

// Converting to Set
def set = new HashSet(list)
assert set.size() == 8
println "             Set: ${set}"

// Converting to Order-preserving Set
set = new LinkedHashSet(list)
assert set.size() == 8
println "List-ordered Set: ${set}"</lang>

Output:

   Original List: [1, 2, 3, a, b, c, 2, 3, 4, b, c, d]
   Filtered List: [1, 2, 3, a, b, c, 4, d]
             Set: [1, d, 2, 3, 4, b, c, a]
List-ordered Set: [1, 2, 3, a, b, c, 4, d]

Haskell

<lang haskell>import qualified Data.List as List

values = [1,2,3,2,3,4]
unique = List.nub values</lang>

IDL

 result = uniq( array[sort( array )] )

J

   ] a=: 4 5 ?@$ 13  NB. 4 by 5 matrix of numbers chosen from 0 to 12
4 3 2 8 0
1 9 5 1 7
6 3 9 9 4
2 1 5 3 2

   , a     NB. sequence of the same elements
4 3 2 8 0 1 9 5 1 7 6 3 9 9 4 2 1 5 3 2
   ~. , a  NB. unique elements
4 3 2 8 0 1 9 5 7 6

The verb ~. removes duplicate items from any array (numeric, character, or other; vector, matrix, rank-n array). For example:

   ~. 'chthonic eleemosynary paronomasiac'
chtoni elmsyarp

Or

   0 1 1 2 0 +/0 1 2
0 1 2
1 2 3
1 2 3
2 3 4
0 1 2
   ~. 0 1 1 2 0 +/0 1 2
0 1 2
1 2 3
2 3 4

Java

Works with: Java version 1.5

<lang java5>import java.util.Set;
import java.util.HashSet;
import java.util.Arrays;

Object[] data = {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"};
Set<Object> uniqueSet = new HashSet<Object>(Arrays.asList(data));
Object[] unique = uniqueSet.toArray();</lang>

Logo

Works with: UCB Logo

show remdup [1 2 3 a b c 2 3 4 b c d]   ; [1 a 2 3 4 b c d]

MAXScript

uniques = #(1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d")
for i in uniques.count to 1 by -1 do
(
    id = findItem uniques uniques[i]
    if (id != i) do deleteItem uniques i
)

Mathematica

Built-in function: <lang Mathematica>

DeleteDuplicates[{0, 2, 1, 4, 2, 0, 3, 1, 1, 1, 0, 3}]

</lang> gives back: <lang Mathematica>

{0, 2, 1, 4, 3}

</lang> Custom function (note that it reorders the elements): <lang Mathematica>

NoDupes[input_List] := Split[Sort[input]][[All, 1]]
NoDupes[{0, 2, 1, 4, 2, 0, 3, 1, 1, 1, 0, 3}]

</lang> gives back: <lang Mathematica> {0, 1, 2, 3, 4} </lang>

Nial

uniques := [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
cull uniques
+-+-+-+-+-+-+-+-+
|1|2|3|a|b|c|4|d|
+-+-+-+-+-+-+-+-+

Using strand form

cull 1 1 2 2 3 3
1 2 3

Objective-C

<lang objc> NSArray *items = [NSArray arrayWithObjects:@"A", @"B", @"C", @"B", @"A", nil];

NSSet *unique = [NSSet setWithArray:items]; </lang>

OCaml

<lang ocaml> let uniq lst =

 let unique_set = Hashtbl.create (List.length lst) in
   List.iter (fun x -> Hashtbl.replace unique_set x ()) lst;
   Hashtbl.fold (fun x () xs -> x :: xs) unique_set []

let _ =

 uniq [1;2;3;2;3;4]

</lang>

Perl

<lang perl> use List::MoreUtils qw(uniq);

my @uniq = uniq qw(1 2 3 a b c 2 3 4 b c d); </lang>

Without modules:

<lang perl>my %seen;
my @uniq = grep {!$seen{$_}++} qw(1 2 3 a b c 2 3 4 b c d);</lang>

Alternately:

<lang perl>my %hash = map { $_ => 1 } qw(1 2 3 a b c 2 3 4 b c d);
my @uniq = keys %hash;</lang>

PHP

<lang php>$list = array(1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd');
$unique_list = array_unique($list);</lang>

Pop11

;;; Initial array
lvars ar = {1 2 3 2 3 4};
;;; Create a hash table
lvars ht= newmapping([], 50, 0, true);
;;; Put all array as keys into the hash table
lvars i;
for i from 1 to length(ar) do
   1 -> ht(ar(i))
endfor;
;;; Collect keys into a list
lvars ls = [];
appdata(ht, procedure(x); cons(front(x), ls) -> ls; endprocedure);

PowerShell

The common array for both approaches:

<lang powershell>$data = 1,2,3,1,2,3,4,1</lang>

Using a hash table to remove duplicates:

<lang powershell>$h = @{}
foreach ($x in $data) {
    $h[$x] = 1
}
$h.Keys</lang>

Sorting and removing duplicates along the way can be done with the Sort-Object cmdlet:

<lang powershell>$data | Sort-Object -Unique</lang>

Prolog

<lang prolog> uniq(Data,Uniques) :- sort(Data,Uniques). </lang>

Example usage:

<lang prolog>?- uniq([1, 2, 3, 2, 3, 4], Xs).
Xs = [1, 2, 3, 4]</lang>

Python

<lang python>items = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
unique = list(set(items))</lang>

See also http://www.peterbe.com/plog/uniqifiers-benchmark and http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
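
Note that list(set(items)) does not preserve the original order. If order matters, a common order-preserving variant (a sketch, not part of the original entry) keeps a set of the elements already seen:

<lang python>items = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
seen = set()
# set.add returns None, so "not seen.add(x)" is always True and just records x
unique = [x for x in items if x not in seen and not seen.add(x)]
# unique == [1, 2, 3, 'a', 'b', 'c', 4, 'd']</lang>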

R

items <- c(1,2,3,2,4,3,2)

unique (items)

Raven

[ 1 2 3 'a' 'b' 'c' 2 3 4 'b' 'c' 'd' ] as items
items copy unique print
list (8 items)
 0 => 1
 1 => 2
 2 => 3
 3 => "a"
 4 => "b"
 5 => "c"
 6 => 4
 7 => "d"

Ruby

<lang ruby>ary = [1,1,2,1,'redundant',[1,2,3],[1,2,3],'redundant']
uniq_ary = ary.uniq
# => [1, 2, "redundant", [1, 2, 3]]</lang>

Scala

<lang scala>val list = List(1,2,3,4,2,3,4,99)
val l2 = list.removeDuplicates
// l2: scala.List[scala.Int] = List(1,2,3,4,99)</lang>

Scheme

<lang scheme> (define (remove-duplicates l)

 (do ((a '() (if (member (car l) a) a (cons (car l) a)))
      (l l (cdr l)))
   ((null? l) (reverse a))))

(remove-duplicates (list 1 2 1 3 2 4 5)) </lang>

<lang scheme> (1 2 3 4 5) </lang>

Some implementations provide remove-duplicates in their standard library.

SETL

<lang SETL>items := [0,7,6,6,4,9,7,1,2,3,2];
print(unique(items));</lang>

Output in arbitrary order (convert tuple->set then set->tuple):

<lang SETL>proc unique(items);
  return [item: item in {item: item in items}];
end proc;</lang>

Preserving source order:

<lang SETL>proc unique(items);
  seen := {};
  return [item: item in items, nps in {#seen} | #(seen with:= item) > nps];
end proc;</lang>

Smalltalk

<lang smalltalk>|a|
"Example of creating a collection"
a := #( 1 1 2 'hello' 'world' #symbol #another 2 'hello' #symbol ).
a asSet printNl.</lang>

Output:

Set (1 2 #symbol 'world' #another 'hello' )

Tcl

The concept of an "array" in TCL is strictly associative - and since there cannot be duplicate keys, there cannot be a redundant element in an array. What is called "array" in many other languages is probably better represented by the "list" in TCL (as in LISP). <lang tcl>set result [lsort -unique $listname]</lang>

UnixPipes

Assuming a sequence is represented by lines in a file.

<lang bash>bash$ # original list
bash$ printf '6\n2\n3\n6\n4\n2\n'
6
2
3
6
4
2
bash$ # made uniq
bash$ printf '6\n2\n3\n6\n4\n2\n' | sort -n | uniq
2
3
4
6</lang>

or

<lang bash>bash$ # original list
bash$ printf '6\n2\n3\n6\n4\n2\n'
6
2
3
6
4
2
bash$ # made uniq
bash$ printf '6\n2\n3\n6\n4\n2\n' | sort -u
2
3
4
6</lang>

Ursala

The algorithm is to partition the list by equality and take one representative from each class, which can be done by letting the built-in partition operator, |=, use its default comparison relation. This works on lists of any type, including character strings, but the comparison is based only on structural equivalence. It's up to the programmer to decide whether that's a relevant criterion for equivalence or else to specify a better one.

<lang Ursala>#cast %s

example = |=hS& 'mississippi'</lang>

output:

'mspi'


Vedit macro language

The input "array" is an edit buffer where each line is one element. <lang vedit> Sort(0, File_Size) // sort the data While(Replace("^(.*)\N\1$", "\1", REGEXP+BEGIN+NOERR)){} // remove duplicates </lang>