Remove duplicate elements: Difference between revisions
Revision as of 15:33, 4 April 2007
You are encouraged to solve this task according to the task description, using any language you may know.
Given an Array, derive a sequence of elements in which all duplicates are removed.
AppleScript
set array to {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"}
set unique to {}
repeat with i in array
	-- very important -- list index starts at 1 not 0
	if (i is not in unique) then
		set unique to unique & i
	end if
end repeat
C++
#include <set>
#include <iostream>
using namespace std;

int main( int argc, char* argv[] )
{
    typedef set<int> TyHash;
    int data[] = {1, 2, 3, 2, 3, 4};
    TyHash hash(data, data + 6);

    cout << "Set items:" << endl;
    for (TyHash::iterator iter = hash.begin(); iter != hash.end(); iter++)
        cout << *iter << " ";
    cout << endl;
}
C#
Compiler: MSVS 2005
List<int> nums = new List<int>( new int[] { 1, 1, 2, 3, 4, 4 } );
List<int> unique = new List<int>();
foreach( int i in nums )
    if( !unique.Contains( i ) )
        unique.Add( i );
D
void main() {
    int[] data = [1, 2, 3, 2, 3, 4];
    int[int] hash;
    foreach(el; data)
        hash[el] = 0;
    // hash.keys now holds the unique elements
}
E
[1,2,3,2,3,4].asSet().getElements()
Forth
Forth has no built-in hash-table facility, so the easiest way to achieve this goal is to model the solution on the Unix "uniq" program.
The word uniq, given a sorted array of cells, removes the duplicate entries and returns the new length of the array. For simplicity, uniq has been written to process cells (which are to Forth what "int" is to C), but it could easily be modified to handle a variety of data types through deferred procedures, etc.
The input data is assumed to be sorted.
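The adjacent-duplicate compaction that uniq performs on sorted data can be sketched in Python (an illustrative translation, not part of the Forth entry; the function name is ours):

```python
def uniq_sorted(cells):
    """Remove adjacent duplicates from a sorted list, keeping one copy of each run.

    Mirrors the Forth word uniq: walk the input once and emit an element
    only when it differs from the last element written.
    """
    out = []
    for x in cells:
        if not out or out[-1] != x:
            out.append(x)
    return out

print(uniq_sorted([1, 2, 2, 3, 3, 4]))
```

As in the Forth version, correctness depends on the input being sorted: equal elements must be adjacent for a single pass to catch them.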
\ Increments a2 until it no longer points to the same value as a1
\ a3 is the address beyond the data a2 is traversing.
: skip-dups ( a1 a2 a3 -- a1 a2+n )
  dup rot ?do
    over @ i @ <> if drop i leave then
  cell +loop ;

\ Compress an array of cells by removing adjacent duplicates
\ Returns the new count
: uniq ( a n -- n2 )
  over >r          \ Original addr to return stack
  cells over + >r  \ "to" addr now on return stack, available as r@
  dup
  begin ( write read )
    dup r@ <
  while
    2dup @ swap !  \ copy one cell
    cell+ r@ skip-dups
    cell 0 d+      \ increment write ptr only
  repeat
  r> 2drop  r> - cell / ;
Here is another implementation of "uniq" that uses a popular parameters-and-local-variables extension. It is structurally the same as the implementation above, but uses less overt stack manipulation.
: uniqv { a n \ r e -- n }
  a n cells+ to e
  a dup to r       \ the write address lives on the stack
  begin r e < while
    r @ over !
    r cell+ e skip-dups to r
    cell+
  repeat
  a - cell / ;
To test this code, you can execute:
create test 1 , 2 , 3 , 2 , 6 , 4 , 5 , 3 , 6 ,
here test - cell / constant ntest
: .test ( n -- ) 0 ?do test i cells + ? loop ;
test ntest 2dup cell-sort uniq .test
output
1 2 3 4 5 6 ok
Haskell
import List

values = [1,2,3,2,3,4]
unique = nub values
IDL
result = uniq( array[sort( array )] )
Java
//Using Java 1.5/5.0
Object[] data = new Object[] {1, 2, 3, "a", "b", "c", 2, 3, 4, "b", "c", "d"};
Set uniqueSet = new HashSet(Arrays.asList(data));
Object[] unique = uniqueSet.toArray();
Perl
Interpreter: Perl
my %hash;
my @list = (1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd');
@hash{@list} = ();
# the keys of %hash now contain the unique list
my @unique_list = sort keys(%hash);
PHP
$list = array(1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd');
$unique_list = array_unique($list);
Prolog
Python
data = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']
Using sets
unique = list(set(data))
See also http://www.peterbe.com/plog/uniqifiers-benchmark and http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
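Note that the set approach does not preserve the original element order. A common order-preserving variant (of the kind benchmarked at the links above) tracks seen elements in a set while filtering the list:

```python
data = [1, 2, 3, 'a', 'b', 'c', 2, 3, 4, 'b', 'c', 'd']

# Keep the first occurrence of each element, in original order.
# set.add returns None (falsy), so the comprehension both tests
# membership and records the element in one expression.
seen = set()
unique = [x for x in data if not (x in seen or seen.add(x))]
```

This keeps the first occurrence of each element while still giving O(1) membership tests.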
Ruby
ary = [1,1,2,1,'redundant',[1,2,3],[1,2,3],'redundant']
uniq_ary = ary.uniq
# => [1, 2, "redundant", [1, 2, 3]]
Scala
val list = List(1,2,3,4,2,3,4,99)
val l2 = list.removeDuplicates
// l2: scala.List[scala.Int] = List(1,2,3,4,99)
Tcl
The concept of an "array" in Tcl is strictly associative, and since there cannot be duplicate keys, there cannot be a duplicate element in an array. What is called an "array" in many other languages is better represented in Tcl by the "list" (as in Lisp).
set result [lsort -unique $listname]