Perceptron

=={{header|11l}}==
{{trans|Python}}
 
<langsyntaxhighlight lang="11l">V TRAINING_LENGTH = 2000
 
T Perceptron
// ...
print(‘Trained’)
L(row) result
print(row.join(‘’))</syntaxhighlight>
 
{{out}}
=={{header|ARM Assembly}}==
{{works with|as|Raspberry Pi, or 32-bit Android with the Termux application}}
<syntaxhighlight lang="arm assembly">
/* ARM assembly, Raspberry Pi or Android with Termux */
/* program perceptron3.s */
/* ... */
/***************************************************/
.include "../affichage.inc"
</syntaxhighlight>
=={{header|Delphi}}==
{{libheader| System.SysUtils}}
{{libheader| System.UITypes}}
{{Trans|Java}}
<syntaxhighlight lang="delphi">
unit main;
 
// ...
end;
 
end.</syntaxhighlight>
Form settings (main.dfm)
<syntaxhighlight lang="delphi">
object Form1: TForm1
ClientHeight = 360
...
end
end
</syntaxhighlight>
{{out}}
[https://ibb.co/pX7QHLS Screenshot]

=={{header|Forth}}==
{{works with|GNU Forth}}
Where the code shows <code>[email protected]</code>, read <code>f@</code>: the wiki's e-mail obfuscation mangles the Forth word <code>f@</code>, which fetches a floating-point value from an address.
<syntaxhighlight lang="forth">require random.fs
here seed !
 
\ ...
500 timesTrain evaluate ;
 
go bye</syntaxhighlight>
Example output:
<pre>After 0 trainings: 10.16 % accurate
...
</pre>

=={{header|FreeBASIC}}==
I just transcribe it.
<langsyntaxhighlight lang="freebasic">
Function rnd2 As Single
Return Rnd()-Rnd()
' ...
Sleep 100
Wend
</syntaxhighlight>
 
=={{header|Go}}==
This is based on the Java entry but just outputs the final image (as a .png file) rather than displaying its gradual build-up. It also uses a different color scheme: blue and red circles with a black dividing line.
<langsyntaxhighlight lang="go">package main
 
import (
// ...
perc.draw(dc, 2000)
dc.SavePNG("perceptron.png")
}</syntaxhighlight>
 
=={{header|Java}}==
{{works with|Java|8}}
<langsyntaxhighlight lang="java">import java.awt.*;
import java.awt.event.ActionEvent;
import java.util.*;
// ...
});
}
}</syntaxhighlight>
 
=={{header|JavaScript}}==
Uses the P5.js library.
<langsyntaxhighlight lang="javascript">
const EPOCH = 1500, TRAINING = 1, TRANSITION = 2, SHOW = 3;
 
// ...
}
}
</syntaxhighlight>
[[File:perceptronJS.png]]
 
 
=={{header|Julia}}==
<langsyntaxhighlight lang="julia"># file module.jl
 
module SimplePerceptrons
# ...
 
end # module SimplePerceptrons
</syntaxhighlight>
 
<langsyntaxhighlight lang="julia"># file _.jl
 
const SP = include("module.jl")
# ...
ahat, bhat = p.weights[1] / p.weights[2], -p.weights[3] / p.weights[2]
Plots.abline!(bhat, ahat, label = "predicted line")
</syntaxhighlight>
 
=={{header|Kotlin}}==
{{trans|Java}}
<langsyntaxhighlight lang="scala">// version 1.1.4-3
 
import java.awt.*
// ...
}
}
}</syntaxhighlight>
 
=={{header|Lua}}==
A simple implementation that allows any number of inputs (here, just one) and supports both training and testing of the perceptron.
<langsyntaxhighlight lang="lua">local Perceptron = {}
Perceptron.__index = Perceptron
 
-- ...
print(i..":", node:test({i}))
end
</syntaxhighlight>
{{out}}
<pre>Untrained results:
...
</pre>
=={{header|Nim}}==
{{trans|Pascal}}
<syntaxhighlight lang="nim">import random
 
type
# ...
train(weights, 4)
echo "Output from perceptron after 5 training runs:"
showOutput(weights)</syntaxhighlight>
 
{{out}}
=={{header|Pascal}}==
This is a text-based implementation, using a 20x20 grid (just like the original Mark 1 Perceptron had). The rate of improvement drops quite markedly as you increase the number of training runs.
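For reference, each training run repeats the classic perceptron weight update over the grid. As a sketch (assuming the textbook rule, with learning constant <math>\eta</math>, target <math>t</math>, actual output <math>o</math> and inputs <math>x_i</math>):

<math>w_i \leftarrow w_i + \eta \, (t - o) \, x_i</math>

Since the weights only move when a point is misclassified (<math>t \neq o</math>), later runs find fewer errors to correct, which is consistent with the diminishing improvement noted above.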
<langsyntaxhighlight lang="pascal">program Perceptron;
 
(*
(* ... *)
writeln( 'Output from perceptron after 5 training runs:' );
showOutput( weights )
end.</syntaxhighlight>
{{out}}
<pre>Target output for the function f(x) = 2x + 1:
...
</pre>

=={{header|Phix}}==
Allows adjusting the learning rate and max iterations. Plots accuracy vs. iterations and displays the training data in blue/black = above/incorrect and green/red = below/incorrect [all blue/green = 100% accurate].
<syntaxhighlight lang="phix">-- demo\rosetta\Perceptron.exw
--
-- The learning curve turned out more haphazard than I imagined, and adding a
-- ...
IupClose()
end procedure
main()</syntaxhighlight>
 
=={{header|Python}}==
{{works with|Python|3}}
<langsyntaxhighlight lang="python">import random
 
TRAINING_LENGTH = 2000
# ...
print('Trained')
for row in result:
print(''.join(v for v in row))</syntaxhighlight>
{{out}}
<pre>
...
</pre>
=={{header|Racket}}==
{{trans|Java}}
<langsyntaxhighlight lang="racket">#lang racket
(require 2htdp/universe
2htdp/image)
; ...
(big-bang the-demo (to-draw draw-demo) (on-tick tick-handler)))
(module+ main (demo))</syntaxhighlight>
 
Run it and see the image for yourself; I can't get it onto RC!
=={{header|Raku}}==
{{trans|Go}}
<syntaxhighlight lang="raku" perl6line># 20201116 Raku programming solution
 
use MagickWand;
# ...
$o.create( $w, $h, "white" );
$perc.draw($o);
$o.write('./perceptron.png') or die</syntaxhighlight>
 
=={{header|REXX}}==
{{trans|Java}}
<langsyntaxhighlight lang="rexx">/* REXX */
Call init
Call time 'R'
/* ... */
y.i=nextDouble()*height
End
Return</syntaxhighlight>
{{out}}
<pre>Point x f(x) r y ff ok zz
...
</pre>
=={{header|Scala}}==
===Java Swing Interoperability===
<syntaxhighlight lang="scala">import java.awt._
import java.awt.event.ActionEvent
 
// ...
})
 
}</syntaxhighlight>
 
=={{header|Scheme}}==
<langsyntaxhighlight lang="scheme">(import (scheme base)
(scheme case-lambda)
(scheme write)
; ...
", percent correct is "
(number->string (perceptron 'test test-set))
"\n"))))</langsyntaxhighlight>
{{out}}
<pre>#(-0.5914540100624854 1.073343782042039 -0.29780862758499393)
...
</pre>
=={{header|Smalltalk}}==
{{works with|GNU Smalltalk}}
<syntaxhighlight lang="smalltalk">Number extend [
 
activate
" ... "
]
 
Perceptron test.</syntaxhighlight>
Example output:
<pre>After 0 trainings: 14.158 % accuracy
...
</pre>
=={{header|Wren}}==
{{trans|Pascal}}
<langsyntaxhighlight lang="ecmascript">import "random" for Random
 
var rand = Random.new()
// ...
train.call(weights, 4)
System.print("Output from perceptron after 5 training runs:")
showOutput.call(weights)</syntaxhighlight>
 
{{out}}
=={{header|XLISP}}==
Like the Pascal example, this is a text-based program using a 20x20 grid. It is slightly more general, however, because it allows the function that is to be learnt and the perceptron's bias and learning constant to be passed as arguments to the <tt>trainer</tt> and <tt>perceptron</tt> objects.
<langsyntaxhighlight lang="scheme">(define-class perceptron
(instance-variables weights bias learning-constant) )
(define-method (perceptron 'initialize b lc)
; ...
(newline)
(ptron 'learn training 4)
(ptron 'print-grid)</syntaxhighlight>
{{out}}
<pre>Target output for y = 2x + 1:
...
</pre>

=={{header|zkl}}==
{{trans|Java}}
Uses the PPM class from http://rosettacode.org/wiki/Bitmap/Bresenham%27s_line_algorithm#zkl
<langsyntaxhighlight lang="zkl">class Perceptron{
const c=0.00001;
var [const] W=640, H=350;
// ...
foreach i in (weights.len()){ weights[i]+=c*error*xy1a[i] }  // nudge each weight in proportion to the error
}
}</syntaxhighlight>
<langsyntaxhighlight lang="zkl">p:=Perceptron(3);
p.training.apply2(p.train);
 
// ...
pixmap.circle(x,y,8,color);
}
pixmap.writeJPGFile("perceptron.zkl.jpg");</syntaxhighlight>
{{out}}
[[File:Perceptron.zkl.jpg]]