Remove duplicate elements: Difference between revisions

From Rosetta Code
Revision as of 16:02, 28 January 2009

Task
Remove duplicate elements
You are encouraged to solve this task according to the task description, using any language you may know.

Given an Array, derive a sequence of elements in which all duplicates are removed.

There are basically three approaches seen here:

  • Put the elements into a hash table which does not allow duplicates. The complexity is O(n) on average, and O(n^2) worst case. This approach requires a hash function for your type (which is compatible with equality), either built-in to your language, or provided by the user.
  • Sort the elements and remove consecutive duplicate elements. The complexity of the best sorting algorithms is O(n log n). This approach requires that your type be "comparable", i.e. have an ordering. Putting the elements into a self-balancing binary search tree is a special case of sorting.
  • Go through the list, and for each element, check the rest of the list to see if it appears again, and discard it if it does. The complexity is O(n^2). The upshot is that this always works on any type (provided that you can test for equality).
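For concreteness, the three approaches can be sketched in Python (one of the languages below); the variable names are illustrative and not taken from any of the entries:

```python
data = [1, 2, 3, 2, 3, 4]

# 1. Hash table: O(n) average; needs hashable elements. Order is not preserved.
by_hash = set(data)

# 2. Sort, then drop consecutive duplicates: O(n log n); needs orderable elements.
srt = sorted(data)
by_sort = [x for i, x in enumerate(srt) if i == 0 or x != srt[i - 1]]

# 3. Scan, checking each element against those already kept: O(n^2);
#    needs only an equality test, and keeps the first occurrence of each element.
by_scan = []
for x in data:
    if x not in by_scan:
        by_scan.append(x)
```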

Ada

Works with: GNAT version GPL 2007

with Ada.Containers.Ordered_Sets;
with Ada.Text_IO; use Ada.Text_IO;

procedure Unique_Set is
   package Int_Sets is new Ada.Containers.Ordered_Sets(Integer);
   use Int_Sets;
   Nums : array (Natural range <>) of Integer := (1,2,3,4,5,5,6,7,1);
   Unique : Set;
   Set_Cur : Cursor;
   Success : Boolean;
begin
   for I in Nums'range loop
      Unique.Insert(Nums(I), Set_Cur, Success);
   end loop;
   Set_Cur := Unique.First;
   loop
      Put_Line(Item => Integer'Image(Element(Set_Cur)));
      exit when Set_Cur = Unique.Last;
      Set_Cur := Next(Set_Cur);
   end loop;
end Unique_Set;

APL

Works with: Dyalog APL

The primitive monad ∪ means "unique", so:

     ∪ 1 2 3 1 2 3 4 1
1 2 3 4
Works with: APL2
     w←1 2 3 1 2 3 4 1
     ((⍳⍨w)=⍳⍴w)/w
1 2 3 4

AppleScript

set array to {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"}
set unique to {}
repeat with i in array
    -- very important -- list index starts at 1 not 0
    if (i is not in unique) then
        set unique to unique & i
    end if
end repeat

C

Since there's no way to know ahead of time how large the new data structure will need to be, we'll return a linked list instead of an array.

#include <stdio.h>
#include <stdlib.h>

struct list_node {int x; struct list_node *next;};
typedef struct list_node node;

node * uniq(int *a, unsigned alen)
{if (alen == 0) return NULL;
 node *start = malloc(sizeof(node));
 if (start == NULL) exit(EXIT_FAILURE);
 start->x = a[0];
 start->next = NULL;
 for (int i = 1 ; i < alen ; ++i)
    {node *n = start;
     for (;; n = n->next)
        {if (a[i] == n->x) break;
         if (n->next == NULL)
            {n->next = malloc(sizeof(node));
             n = n->next;
             if (n == NULL) exit(EXIT_FAILURE);
             n->x = a[i];
             n->next = NULL;
             break;}}}
 return start;}

int main(void)
  {int a[] = {1, 2, 1, 4, 5, 2, 15, 1, 3, 4};
   for (node *n = uniq(a, 10) ; n != NULL ; n = n->next)
       printf("%d ", n->x);
   puts("");
   return 0;}

C++

This version uses std::set, which requires its element type be comparable using the < operator.

#include <set>
#include <iostream>

using namespace std;

int main() {
   typedef set<int> TySet;
   int data[] = {1, 2, 3, 2, 3, 4};
   TySet unique_set(data, data + 6);
   cout << "Set items:" << endl;
   for (TySet::iterator iter = unique_set.begin(); iter != unique_set.end(); iter++)
         cout << *iter << " ";
   cout << endl;
}

This version uses hash_set, which is part of the SGI extension to the Standard Template Library. It is not part of the C++ standard library. It requires that its element type have a hash function.

Works with: GCC

#include <ext/hash_set>
#include <iostream>

using namespace std;

int main() {
   typedef __gnu_cxx::hash_set<int> TyHash;
   int data[] = {1, 2, 3, 2, 3, 4};
   TyHash unique_set(data, data + 6);
   cout << "Set items:" << endl;
   for (TyHash::iterator iter = unique_set.begin(); iter != unique_set.end(); iter++)
         cout << *iter << " ";
   cout << endl;
}

This version uses unordered_set, which is part of the TR1, which is likely to be included in the next version of C++. It is not part of the C++ standard library. It requires that its element type have a hash function.

Works with: GCC

#include <tr1/unordered_set>
#include <iostream>

using namespace std;

int main() {
   typedef tr1::unordered_set<int> TyHash;
   int data[] = {1, 2, 3, 2, 3, 4};
   TyHash unique_set(data, data + 6);
   cout << "Set items:" << endl;
   for (TyHash::iterator iter = unique_set.begin(); iter != unique_set.end(); iter++)
         cout << *iter << " ";
   cout << endl;
}

Alternative method working directly on the array:

#include <cstddef>
#include <iostream>
#include <iterator>
#include <algorithm>

// helper template
template<typename T, std::size_t size> T* end(T (&array)[size]) { return array+size; }

int main() {
 int data[] = { 1, 2, 3, 2, 3, 4 };
 std::sort(data, end(data));
 int* new_end = std::unique(data, end(data));
 std::copy(data, new_end, std::ostream_iterator<int>(std::cout, " "));
 std::cout << std::endl;
}

C#

C# 2.0

Works with: MSVS version 2005 and .Net Framework 2.0

List<int> nums = new List<int>( new int[] { 1, 1, 2, 3, 4, 4 } );
List<int> unique = new List<int>();
foreach( int i in nums )
 if( !unique.Contains( i ) )
   unique.Add( i );

C# 3.0

Works with: MSVS version 2008 and .Net Framework 3.5

var unique = (new int[] { 1, 1, 2, 3, 4, 4 }).Distinct();

Common Lisp

To remove duplicates non-destructively:

(remove-duplicates '(1 3 2 9 1 2 3 8 8 1 0 2))
> (9 3 8 1 0 2)

Or, to remove duplicates in-place:

(delete-duplicates '(1 3 2 9 1 2 3 8 8 1 0 2))
> (9 3 8 1 0 2)


D

void main() {
   int[] data = [1, 2, 3, 2, 3, 4];
   int[int] hash;
   foreach(el; data)
       hash[el] = 0;
   int[] unique = hash.keys; // collect the unique elements (order not preserved)
}

E

[1,2,3,2,3,4].asSet().getElements()

Erlang

List = [1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5].
Set = sets:from_list(List).

Factor

 USING: sets ;
 V{ 1 2 1 3 2 4 5 } prune .
 V{ 1 2 3 4 5 }

Forth

Forth has no built-in hashtable facility, so the easiest way to achieve this goal is to take the "uniq" program as an example.

The word uniq, if given a sorted array of cells, will remove the duplicate entries and return the new length of the array. For simplicity, uniq has been written to process cells (which are to Forth what "int" is to C), but could easily be modified to handle a variety of data types through deferred procedures, etc.

The input data is assumed to be sorted.

\ Increments a2 until it no longer points to the same value as a1
\ a3 is the address beyond the data a2 is traversing.
: skip-dups ( a1 a2 a3 -- a1 a2+n )
    dup rot ?do
      over @ i @ <> if drop i leave then
    cell +loop ;

\ Compress an array of cells by removing adjacent duplicates
\ Returns the new count
: uniq ( a n -- n2 )
   over >r             \ Original addr to return stack
   cells over + >r     \ "to" addr now on return stack, available as r@
   dup begin           ( write read )
      dup r@ <
   while
      2dup @ swap !    \ copy one cell
      cell+ r@ skip-dups
      cell 0 d+        \ increment write ptr only
   repeat  r> 2drop  r> - cell / ;

Here is another implementation of "uniq" that uses a popular parameters and local variables extension words. It is structurally the same as the above implementation, but uses less overt stack manipulation.

: uniqv { a n \ r e -- n }
    a n cells+ to e
    a dup to r
    \ the write address lives on the stack
    begin
      r e <
    while
      r @ over !
      r cell+ e skip-dups to r
      cell+
    repeat
    a - cell / ;

To test this code, you can execute:

create test 1 , 2 , 3 , 2 , 6 , 4 , 5 , 3 , 6 ,
here test - cell / constant ntest
: .test ( n -- ) 0 ?do test i cells + ? loop ; 

test ntest 2dup cell-sort uniq .test

output

1 2 3 4 5 6 ok

Haskell

import qualified Data.List as List

values = [1,2,3,2,3,4]
unique = List.nub values

IDL

 result = uniq( array[sort( array )] )

J

   ] a=: 4 5 ?@$ 13  NB. 4 by 5 matrix of numbers chosen from 0 to 12
4 3 2 8 0
1 9 5 1 7
6 3 9 9 4
2 1 5 3 2

   , a     NB. sequence of the same elements
4 3 2 8 0 1 9 5 1 7 6 3 9 9 4 2 1 5 3 2
   ~. , a  NB. unique elements
4 3 2 8 0 1 9 5 7 6

The verb ~. removes duplicate items from any array (numeric, character, or other; vector, matrix, rank-n array). For example:

   ~. 'chthonic eleemosynary paronomasiac'
chtoni elmsyarp

Java

Works with: Java version 1.5

import java.util.Set;
import java.util.HashSet;
import java.util.Arrays;

Object[] data = {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"};
Set<Object> uniqueSet = new HashSet<Object>(Arrays.asList(data));
Object[] unique = uniqueSet.toArray();

Logo

Works with: UCB Logo
show remdup [1 2 3 a b c 2 3 4 b c d]   ; [1 a 2 3 4 b c d]

MAXScript

uniques = #(1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d")
for i in uniques.count to 1 by -1 do
(
    id = findItem uniques uniques[i]
    if (id != i) do deleteItem uniques i
)

Nial

uniques := [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
cull uniques
=+-+-+-+-+-+-+-+-+
=|1|2|3|a|b|c|4|d|
=+-+-+-+-+-+-+-+-+

Using strand form

cull 1 1 2 2 3 3
=1 2 3

Objective-C

NSArray *items = [NSArray arrayWithObjects:@"A", @"B", @"C", @"B", @"A", nil];

NSSet *unique = [NSSet setWithArray:items];

OCaml

let uniq lst =
  let unique_set = Hashtbl.create (List.length lst) in
  List.iter (fun x -> Hashtbl.replace unique_set x ()) lst;
  Hashtbl.fold (fun x () xs -> x :: xs) unique_set []

let _ =
  uniq [1;2;3;2;3;4]

Perl

use List::MoreUtils qw(uniq);

my @uniq = uniq qw(1 2 3 a b c 2 3 4 b c d);

Without modules:

my %seen;
my @uniq = grep {!$seen{$_}++} qw(1 2 3 a b c 2 3 4 b c d);

PHP

$list = array(1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd');
$unique_list = array_unique($list);

Pop11

;;; Initial array
lvars ar = {1 2 3 2 3 4};
;;; Create a hash table
lvars ht= newmapping([], 50, 0, true);
;;; Put all array as keys into the hash table
lvars i;
for i from 1 to length(ar) do
   1 -> ht(ar(i))
endfor;
;;; Collect keys into a list
lvars ls = [];
appdata(ht, procedure(x); cons(front(x), ls) -> ls; endprocedure);

Prolog

uniq(Data,Uniques) :- sort(Data,Uniques).

Example usage:

?- uniq([1, 2, 3, 2, 3, 4],Xs).
Xs = [1, 2, 3, 4]

Python

items = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
unique = list(set(items))

See also http://www.peterbe.com/plog/uniqifiers-benchmark and http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
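Note that a set does not preserve the original order of the elements; the pages linked above benchmark order-preserving uniquifiers. A minimal order-preserving sketch (not part of the original entry):

```python
items = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']

seen = set()
unique_in_order = []
for x in items:
    if x not in seen:   # set membership is O(1) on average
        seen.add(x)
        unique_in_order.append(x)
```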

Raven

[ 1 2 3 'a' 'b' 'c' 2 3 4 'b' 'c' 'd' ] as items
items copy unique print
list (8 items)
 0 => 1
 1 => 2
 2 => 3
 3 => "a"
 4 => "b"
 5 => "c"
 6 => 4
 7 => "d"

Ruby

ary = [1,1,2,1,'redundant',[1,2,3],[1,2,3],'redundant']
uniq_ary = ary.uniq
# => [1, 2, "redundant", [1, 2, 3]]

Scala

val list = List(1,2,3,4,2,3,4,99)
val l2 = list.removeDuplicates
// l2: scala.List[scala.Int] = List(1,2,3,4,99)

Scheme

(define (remove-duplicates l)
  (do ((a '() (if (member (car l) a) a (cons (car l) a)))
       (l l (cdr l)))
    ((null? l) (reverse a))))

(remove-duplicates (list 1 2 1 3 2 4 5))

(1 2 3 4 5)

Some implementations provide remove-duplicates in their standard library.

Tcl

The concept of an "array" in TCL is strictly associative - and since there cannot be duplicate keys, there cannot be a redundant element in an array. What is called "array" in many other languages is probably better represented by the "list" in TCL (as in LISP).

set result [lsort -unique $listname]

UnixPipes

Assuming a sequence is represented by lines in a file.

bash$ # original list
bash$ printf '6\n2\n3\n6\n4\n2\n'
6
2
3
6
4
2
bash$ # made uniq
bash$ printf '6\n2\n3\n6\n4\n2\n'|sort -n|uniq
2
3
4
6
bash$

or

bash$ # original list
bash$ printf '6\n2\n3\n6\n4\n2\n'
6
2
3
6
4
2
bash$ # made uniq
bash$ printf '6\n2\n3\n6\n4\n2\n'|sort -u
2
3
4
6
bash$