Talk:Entropy

== What about a less dull example? ==
::Better yet, a bonus task of computing the entropy of each solution on the page. :) --[[User:TimToady|TimToady]] 19:23, 25 February 2013 (UTC)
::: I like computing the entropy of “<tt>Rosetta Code</tt>” (it's about 3.08496, assuming my code is right); a more self-referential one is fine too, except it involves features that might block some languages from participating. (The draft [[Entropy/Narcissist|child task]] is a better place for that.) –[[User:Dkf|Donal Fellows]] 09:31, 26 February 2013 (UTC)
 
== Alternate form ==
 
Not sure this is very useful, but I was wondering whether one could find a more concise way of writing this.
 
If we call <math>N</math> the length of the string and <math>n_c</math> the number of occurrences of the character <math>c</math> (so that <math>\sum_c n_c = N</math>), we have:
 
<math>H = \sum_c -p_c \ln p_c = \sum_c -\frac{n_c}{N} \ln \frac{n_c}{N} = -\frac{1}{N}\sum_c n_c\left(\ln n_c - \ln N\right) = \ln N - \frac{1}{N}\sum_c n_c\ln n_c </math>
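
A quick numerical check of the identity (just a sketch; the test string is arbitrary), showing both forms agree:

<lang Perl 6>my @a = "Rosetta Code".comb;
my $N = +@a;
say [+] @a.bag.values.map: -> \n { -(n/$N) * log(n/$N) };        # direct definition: ≈ 2.13833
say log($N) - ([+] @a.bag.values.map: -> \n { n * log n }) / $N; # rearranged form:   ≈ 2.13833</lang>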
 
In Perl 6, this allows a slightly simpler formula, i.e. one that avoids hyperoperators:
 
<lang Perl 6>sub entropy(@a) {
    # log(@a) is ln N (an array numifies to its length);
    # R/ reverses the division, so the [+] sum is divided by N
    log(@a) - @a R/ [+] map -> \n { n * log n }, @a.bag.values
}</lang>
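
For instance, since the formula uses the natural logarithm, dividing by <math>\ln 2</math> recovers the value in bits mentioned above:

<lang Perl 6>say entropy("Rosetta Code".comb) / log(2);  # 3.084962500721156</lang>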
 
For what it's worth.--[[User:Grondilu|Grondilu]] 18:14, 4 March 2013 (UTC)