Perceptron
{{trans|Python}}
<syntaxhighlight lang="11l">
T Perceptron
print(‘Trained’)
L(row) result
   print(row.join(‘’))</syntaxhighlight>
{{out}}
=={{header|ARM Assembly}}==
{{works with|as|Raspberry Pi <br> or Android 32-bit with the Termux application}}
<syntaxhighlight lang="arm assembly">
/* ARM assembly, Raspberry Pi or Android with Termux */
/* program perceptron3.s */
/***************************************************/
.include "../affichage.inc"
</syntaxhighlight>
=={{header|Delphi}}==
{{libheader| System.SysUtils}}
{{libheader| System.UITypes}}
{{Trans|Java}}
<syntaxhighlight lang="delphi">
unit main;
end;
end.</syntaxhighlight>
Form settings (main.dfm)
<syntaxhighlight lang="delphi">
object Form1: TForm1
ClientHeight = 360
end
end
</syntaxhighlight>
{{out}}
[https://ibb.co/pX7QHLS]
=={{header|Forth}}==
{{works with|GNU Forth}}
Where it says <code>[email protected]</code> it should say <code>f@</code>.
<syntaxhighlight lang="forth">
here seed !
500 timesTrain evaluate ;
go bye</syntaxhighlight>
Example output:
<pre>After 0 trainings: 10.16 % accurate
=={{header|FreeBASIC}}==
I just transcribed it.
<syntaxhighlight lang="freebasic">
Function rnd2 As Single
Return Rnd()-Rnd()
Sleep 100
Wend
</syntaxhighlight>
=={{header|Go}}==
<br>
This is based on the Java entry but outputs only the final image (as a .png file) rather than displaying its gradual build-up. It also uses a different color scheme: blue and red circles with a black dividing line.
<syntaxhighlight lang="go">
import (
perc.draw(dc, 2000)
dc.SavePNG("perceptron.png")
}</syntaxhighlight>
=={{header|Java}}==
{{works with|Java|8}}
<syntaxhighlight lang="java">
import java.awt.event.ActionEvent;
import java.util.*;
});
}
}</syntaxhighlight>
=={{header|JavaScript}}==
Uses the P5 library.
<syntaxhighlight lang="javascript">
const EPOCH = 1500, TRAINING = 1, TRANSITION = 2, SHOW = 3;
}
}
</syntaxhighlight>
[[File:perceptronJS.png]]
=={{header|Julia}}==
<syntaxhighlight lang="julia">
module SimplePerceptrons
end # module SimplePerceptrons
</syntaxhighlight>
<syntaxhighlight lang="julia">
const SP = include("module.jl")
ahat, bhat = p.weights[1] / p.weights[2], -p.weights[3] / p.weights[2]
Plots.abline!(bhat, ahat, label = "predicted line")
</syntaxhighlight>
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang="kotlin">
import java.awt.*
}
}
}</syntaxhighlight>
=={{header|Lua}}==
Simple implementation allowing for any number of inputs (in this case, just 1), testing of the Perceptron, and training.
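The scheme just described — weighted inputs plus a bias, a step activation, and error-correction training — is the core of every entry on this page. As a hedged, language-neutral illustration, here is a minimal standalone Python sketch; the class and method names are mine, not taken from the Lua code below:

```python
import random

class Perceptron:
    """Minimal perceptron: n inputs plus a bias weight, step activation."""
    def __init__(self, n_inputs, rate=0.1):
        self.weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.bias = random.uniform(-0.5, 0.5)
        self.rate = rate

    def test(self, inputs):
        # Step activation on the weighted sum plus bias.
        total = self.bias + sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total >= 0 else 0

    def train(self, inputs, target):
        # Error-correction rule: nudge each weight by rate * error * input.
        error = target - self.test(inputs)
        for i, x in enumerate(inputs):
            self.weights[i] += self.rate * error * x
        self.bias += self.rate * error

random.seed(1)
p = Perceptron(2)
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
for _ in range(100):   # OR is linearly separable, so the rule converges
    for inputs, target in samples:
        p.train(inputs, target)
print([p.test(i) for i, _ in samples])
```

Once every sample is classified correctly the error is zero and training becomes a no-op, so extra epochs are harmless.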
<syntaxhighlight lang="lua">
Perceptron.__index = Perceptron
print(i..":", node:test({i}))
end
</syntaxhighlight>
{{out}}
<pre>Untrained results:
=={{header|Nim}}==
{{trans|Pascal}}
<syntaxhighlight lang="nim">
type
train(weights, 4)
echo "Output from perceptron after 5 training runs:"
showOutput(weights)</syntaxhighlight>
{{out}}
=={{header|Pascal}}==
This is a text-based implementation, using a 20x20 grid (just like the original Mark 1 Perceptron had). The rate of improvement drops quite markedly as you increase the number of training runs.
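The diminishing returns mentioned above are easy to observe directly. This is a rough Python sketch of the same idea — a 20x20 integer grid classified against f(x) = 2x + 1 — not a translation of the Pascal entry; the learning rate and seed are arbitrary choices:

```python
import random

def target(x, y):
    # 1 if the point lies above the line y = 2x + 1, else 0.
    return 1 if y > 2 * x + 1 else 0

def output(weights, x, y):
    return 1 if weights[0] * x + weights[1] * y + weights[2] >= 0 else 0

def accuracy(weights, points):
    return sum(output(weights, x, y) == target(x, y)
               for x, y in points) / len(points)

random.seed(42)
weights = [random.uniform(-1, 1) for _ in range(3)]
grid = [(x, y) for x in range(-10, 10) for y in range(-10, 10)]  # 20x20 grid
for run in range(1, 6):
    for x, y in grid:
        error = target(x, y) - output(weights, x, y)
        weights[0] += 0.01 * error * x
        weights[1] += 0.01 * error * y
        weights[2] += 0.01 * error      # bias input is fixed at 1
    print(f"after training run {run}: {accuracy(weights, grid):.1%} accurate")
```

Because many grid points sit close to the dividing line, the margin is small and accuracy climbs quickly at first, then plateaus — the same behaviour the Pascal entry reports.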
<syntaxhighlight lang="pascal">
(*
writeln( 'Output from perceptron after 5 training runs:' );
showOutput( weights )
end.</syntaxhighlight>
{{out}}
<pre>Target output for the function f(x) = 2x + 1:
learning rate, and max iterations. Plots accuracy vs. iterations and displays the training data
in blue/black=above/incorrect and green/red=below/incorrect [all blue/green = 100% accurate].
<syntaxhighlight lang="phix">
--
-- The learning curve turned out more haphazard than I imagined, and adding a
IupClose()
end procedure
main()</syntaxhighlight>
=={{header|Python}}==
{{works with|Python|3}}
<syntaxhighlight lang="python">
TRAINING_LENGTH = 2000
print('Trained')
for row in result:
print(''.join(v for v in row))</syntaxhighlight>
{{out}}
<pre>
=={{header|Racket}}==
{{trans|Java}}
<syntaxhighlight lang="racket">
(require 2htdp/universe
2htdp/image)
(big-bang the-demo (to-draw draw-demo) (on-tick tick-handler)))
(module+ main (demo))</syntaxhighlight>
Run it and see the image for yourself; I can't get it onto RC!
=={{header|Raku}}==
{{trans|Go}}
<syntaxhighlight lang="raku">
use MagickWand;
$o.create( $w, $h, "white" );
$perc.draw($o);
$o.write('./perceptron.png') or die</syntaxhighlight>
=={{header|REXX}}==
{{trans|Java}}
<syntaxhighlight lang="rexx">
Call init
Call time 'R'
y.i=nextDouble()*height
End
Return</syntaxhighlight>
{{out}}
<pre>Point x f(x) r y ff ok zz
=={{header|Scala}}==
===Java Swing Interoperability===
<syntaxhighlight lang="scala">
import java.awt.event.ActionEvent
})
}</syntaxhighlight>
=={{header|Scheme}}==
<syntaxhighlight lang="scheme">
(scheme case-lambda)
(scheme write)
", percent correct is "
(number->string (perceptron 'test test-set))
"\n"))))</syntaxhighlight>
{{out}}
<pre>#(-0.5914540100624854 1.073343782042039 -0.29780862758499393)
=={{header|Smalltalk}}==
{{works with|GNU Smalltalk}}
<syntaxhighlight lang="smalltalk">
activate
]
Perceptron test.</syntaxhighlight>
Example output:
<pre>After 0 trainings: 14.158 % accuracy
=={{header|Wren}}==
{{trans|Pascal}}
<syntaxhighlight lang="wren">
var rand = Random.new()
train.call(weights, 4)
System.print("Output from perceptron after 5 training runs:")
showOutput.call(weights)</syntaxhighlight>
{{out}}
=={{header|XLISP}}==
Like the Pascal example, this is a text-based program using a 20x20 grid. It is slightly more general, however, because it allows the function that is to be learnt and the perceptron's bias and learning constant to be passed as arguments to the <tt>trainer</tt> and <tt>perceptron</tt> objects.
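The same parameterisation — target function, bias, and learning constant chosen by the caller rather than hard-coded — can be sketched in Python. The names below are illustrative, not taken from the XLISP code, and here the bias stays fixed while only the weights are trained:

```python
import random

def make_perceptron(bias, learning_constant, n_inputs=2):
    """Return (predict, learn) closures over freshly initialised weights."""
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]

    def predict(inputs):
        total = bias + sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= 0 else -1

    def learn(samples, runs, target_fn):
        # target_fn plays the role of the function passed to the trainer.
        for _ in range(runs):
            for inputs in samples:
                error = target_fn(inputs) - predict(inputs)
                for i, x in enumerate(inputs):
                    weights[i] += learning_constant * error * x

    return predict, learn

random.seed(7)
# bias=-1 makes the concept y > 2x + 1 representable with fixed bias.
predict, learn = make_perceptron(bias=-1, learning_constant=0.005)
above = lambda p: 1 if p[1] > 2 * p[0] + 1 else -1   # learn y = 2x + 1
samples = [(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(200)]
learn(samples, 20, above)
```

Swapping in a different `target_fn`, bias, or learning constant needs no change to the perceptron itself, which is the point the XLISP entry makes with its constructor arguments.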
<syntaxhighlight lang="xlisp">
(instance-variables weights bias learning-constant) )
(define-method (perceptron 'initialize b lc)
(newline)
(ptron 'learn training 4)
(ptron 'print-grid)</syntaxhighlight>
{{out}}
<pre>Target output for y = 2x + 1:
=={{header|zkl}}==
{{trans|Java}}
Uses the PPM class from http://rosettacode.org/wiki/Bitmap/Bresenham%27s_line_algorithm#zkl
<syntaxhighlight lang="zkl">
const c=0.00001;
var [const] W=640, H=350;
foreach i in (weights.len()){ weights[i]+=c*error*xy1a[i] }
}
}</syntaxhighlight>
<syntaxhighlight lang="zkl">
p.training.apply2(p.train);
pixmap.circle(x,y,8,color);
}
pixmap.writeJPGFile("perceptron.zkl.jpg");</syntaxhighlight>
{{out}}
[[File:Perceptron.zkl.jpg]]