Talk:Monte Carlo methods

From Rosetta Code

Python shell sessions as examples

I noted that someone had changed another Python shell session used as an example into the 'normal' definition of a function, followed by a shell session used only to show the answer when the function is called.
I don't think this should be done here, as I am attempting to show how the shell might be used for such a task. It is still Python. The repetition of the input expression is because in IDLE, the built-in graphical IDE for Python, you would hit return on a previous expression to re-enter it; in the non-graphical shell, you can scroll through previous input to re-enter lines. It gives the immediate feedback and 'spirit of exploration' you get when working with a calculator. --Paddy3118 05:22, 2 October 2008 (UTC)

Error formula in C implementation

What formula is being used for the error calculation in the C implementation?

At first I thought it was the formula for standard deviation, but the code is:

error = val * sqrt(val * (1 - val) / sampled) * 4;

The factor 4 is explained because we are not interested in the ratio <math>\pi/4</math>, but in <math>\pi</math>, so both the value and the error must be multiplied by 4. The rest of the code translates to (with <math>p</math> = val and <math>N</math> = sampled):

<math>\text{error} = 4\, p \sqrt{\frac{p(1-p)}{N}}</math>

But according to Wikipedia the formula is this:

<math>\sigma = \sqrt{p(1-p)}</math>

Can somebody explain this more clearly? I'm not yet convinced this is correct.

Randomly throw a point into a square: it has chance <math>p</math> of being in the circle and chance <math>(1-p)</math> of being outside it. If you throw <math>N</math> points and count the number of times <math>n</math> that they landed in the circle, <math>n</math> follows the wp:binomial distribution (look up the variance formula there). Here we take <math>p = n/N</math> as the ratio between the areas of the circle and the square, but <math>n</math> is subject to statistical fluctuation. Assuming we had the sense to throw a large enough <math>N</math> that <math>n/N</math> isn't a completely bogus estimate of <math>p</math>, we'd still like to know how far off it could be from <math>p</math>'s true value. This is where the variance comes in: it tells you, given <math>N</math> and a rough knowledge of <math>p</math>, how much uncertainty in <math>n</math> (and <math>p</math>) one should expect.
If you want to use the stddev formula, then each <math>x_i</math> takes the value of either 1 (landing in circle) or 0 (not). The average is <math>\mu = p</math> as mentioned above; now <math>\sigma^2 = {1\over N} \sum (x_i - p)^2</math>. Note that there are going to be about <math>Np</math> of those <math>x_i</math>s with value <math>1</math>, and <math>N(1-p)</math> with value <math>0</math>, so <math>\sum(x_i - p)^2 \approx Np(1-p)^2 + N(1-p)(0-p)^2 = Np(1-p)</math>. See how it comes back to the same formula? --Ledrug (talk) 06:17, 5 May 2014 (UTC)
Thank you very much, that explains the origin of the formula. Though, following your reasoning, the formula should be:
<math>\sum(x_i - p)^2 \approx Np(1-p)</math>,
but because there is a factor of <math>{1\over N}</math> we must take into account, the resulting variance should be:
<math>\sigma^2 = p(1-p)</math>,
meaning that the formula implemented has an extra factor of <math>{1\over N}</math> inside the square root and a factor of <math>p</math> outside of the square root. That is:
error = val * sqrt(val * (1 - val) / sampled) * 4, when it should be:
error = sqrt(val * (1 - val)) * 4;
Am I missing something else here? Sorry for the intrigue; I'm no expert in probability, but I'm curious about the implementation. -Chibby0ne (talk) 17:18, 17 May 2014 (UTC)