Talk:Entropy

== Alternate form ==

Not sure this is very useful, but I was wondering whether one could find a more concise way of writing this.

If we call <math>N</math> the length of the string and <math>n_c</math> the number of occurrences of the character <math>c</math>, we have:

<math>H = \sum_c -p_c \ln p_c = \sum_c -\frac{n_c}{N} \ln \frac{n_c}{N} = \ln N - \frac{1}{N}\sum_c n_c\ln n_c</math>

In Perl 6, this allows a slightly simpler formula, i.e. one not using hyperoperators:

<lang Perl 6>sub entropy(@a) {
    log(@a) - @a R/ [+] map -> \n { n * log n }, @a.bag.values
}</lang>

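For comparison, here is a rough Python translation of the same rearranged formula (my own sketch, not part of the original code), checking that <math>\ln N - \frac{1}{N}\sum_c n_c \ln n_c</math> agrees with the direct definition <math>-\sum_c p_c \ln p_c</math> (both in nats, i.e. natural log):

```python
# Sketch: the concise rearrangement vs. the direct entropy definition.
# Both use the natural log, so the result is in nats, not bits.
from collections import Counter
from math import log

def entropy_concise(s):
    # ln N - (1/N) * sum over characters of n_c * ln n_c
    N = len(s)
    return log(N) - sum(n * log(n) for n in Counter(s).values()) / N

def entropy_direct(s):
    # -sum over characters of p_c * ln p_c, with p_c = n_c / N
    N = len(s)
    return -sum(n / N * log(n / N) for n in Counter(s).values())

print(abs(entropy_concise("1223334444") - entropy_direct("1223334444")) < 1e-12)  # True
```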
For what it's worth.--[[User:Grondilu|Grondilu]] 18:14, 4 March 2013 (UTC) |
Revision as of 18:14, 4 March 2013
== What about a less dull example? ==

What about using "Rosetta code" as an example instead of the dull "1223334444"? --[[User:Grondilu|Grondilu]] 17:53, 22 February 2013 (UTC)
:Or even funnier: write a program that computes its own entropy. :) --[[User:Grondilu|Grondilu]] 12:40, 25 February 2013 (UTC)
::Better yet, a bonus task of computing the entropy of each solution on the page. :) --[[User:TimToady|TimToady]] 19:23, 25 February 2013 (UTC)
:::I like computing the entropy of “<tt>Rosetta Code</tt>” (it's about 3.08496, assuming my code is right); a more self-referential one is fine too, except it involves features that might block some languages from participating. (The draft [[Entropy/Narcissist|child task]] is a better place for that.) –[[User:Dkf|Donal Fellows]] 09:31, 26 February 2013 (UTC)
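The 3.08496 figure quoted above corresponds to entropy in bits, i.e. log base 2 rather than the natural log. A quick Python check (a sketch added here for illustration, not part of the original thread):

```python
# Sketch: Shannon entropy in bits of "Rosetta Code".
from collections import Counter
from math import log2

def entropy_bits(s):
    # -sum over characters of p_c * log2(p_c), with p_c = n_c / N
    N = len(s)
    return -sum(n / N * log2(n / N) for n in Counter(s).values())

print(round(entropy_bits("Rosetta Code"), 5))  # 3.08496
```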