# 100 prisoners

100 prisoners is a draft programming task. It is not yet considered ready to be promoted as a complete task, for reasons that should be found in its talk page.
 This page uses content from Wikipedia. The original article was at 100 prisoners problem. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)

## The Problem
• 100 prisoners are individually numbered 1 to 100
• A room holds a cupboard of 100 opaque drawers numbered 1 to 100; their contents cannot be seen from outside.
• At the start, cards numbered 1 to 100 are placed randomly, one per drawer, and all the drawers are closed.
• Prisoners start outside the room.
• They may agree on a strategy before any of them enters the room.
• Prisoners enter the room one at a time; a prisoner may open a drawer, inspect the card number inside, then close the drawer again.
• A prisoner may open no more than 50 drawers.
• Each prisoner tries to find his own number.
• A prisoner who finds his own number is then held apart from the others.
• If all 100 prisoners find their own numbers, they will all be pardoned.
1. Simulate several thousand instances of the game where the prisoners open drawers at random.
2. Simulate several thousand instances of the game where the prisoners use the optimal strategy mentioned in the Wikipedia article:
• Each prisoner first opens the drawer whose outside number matches his prisoner number.
• If the card inside bears his number, he succeeds; otherwise he next opens the drawer whose number matches the card just revealed, repeating until he succeeds or reaches his 50-drawer limit.

Show and compare the computed probabilities of success for the two strategies, here, on this page.
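As a sanity check on the simulations below, the success probability of the cycle-following strategy has a known closed form: the prisoners all succeed exactly when the random arrangement of cards contains no cycle longer than 50, which happens with probability 1 − (1/51 + 1/52 + … + 1/100) ≈ 31.18%. A minimal Python sketch of that calculation (the function name `exact_success_probability` is ours, not part of the task):

```python
def exact_success_probability(prisoners=100, tries=50):
    """Probability that a random permutation of `prisoners` cards has no
    cycle longer than `tries`: 1 - sum(1/k for k in tries+1..prisoners)."""
    return 1.0 - sum(1.0 / k for k in range(tries + 1, prisoners + 1))

print(f"{exact_success_probability() * 100:.2f}%")  # roughly 31.18%
```

Both simulated results below should land within sampling noise of this value.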

## Reference
1. The unbelievable solution to the 100 prisoner puzzle, standupmaths (video).

## Perl 6

Works with: Rakudo version 2019.07.1
Translation of: Python

<lang perl6>my @prisoners = ^100;
my $half = floor +@prisoners / 2;

sub random ($n) {
    ^$n .race.map( {
        my @drawers = @prisoners.pick: *;
        @prisoners.map( -> $prisoner {
            my $found = 0;
            for @drawers.pick($half) -> $card {
                $found = 1 and last if $card == $prisoner
            }
            last unless $found;
            $found
        } ).sum == @prisoners
    } ).grep( *.so ).elems / $n * 100
}

sub optimal ($n) {
    ^$n .race.map( {
        my @drawers = @prisoners.pick: *;
        @prisoners.map( -> $prisoner {
            my $found = 0;
            my $card = @drawers[$prisoner];
            if $card == $prisoner {
                $found = 1
            } else {
                for ^($half - 1) {
                    $card = @drawers[$card];
                    $found = 1 and last if $card == $prisoner
                }
            }
            last unless $found;
            $found
        } ).sum == @prisoners
    } ).grep( *.so ).elems / $n * 100
}

my $n = 10_000;
say " Simulation count: $n\n" ~
    sprintf(" Random play wins: %4.1f%% of simulations\n", random $n) ~
    sprintf("Optimal play wins: %4.1f%% of simulations\n", optimal $n);</lang>

Output:
``` Simulation count: 10000
Random play wins:  0.0% of simulations
Optimal play wins: 31.9% of simulations
```
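The 0.0% for random play is not a simulation artifact. Each prisoner independently finds his card with probability 1/2, so all 100 succeed with probability (1/2)^100, far too small for 10,000 trials to ever observe. A one-line check:

```python
# Joint success probability when every prisoner opens 50 drawers at random.
print(f"{0.5 ** 100:.3e}")  # about 7.889e-31
```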

## Python

<lang python>import random

def play_random(n):
    # using 0-99 instead of ranges 1-100
    pardoned = 0
    in_drawer = list(range(100))
    sampler = list(range(100))
    for _round in range(n):
        random.shuffle(in_drawer)
        found = False
        for prisoner in range(100):
            found = False
            to_sample = random.sample(sampler, 50)
            for go in range(50):
                card = in_drawer[to_sample[go]]
                if card == prisoner:
                    found = True
                    break
            if not found:
                break
        if found:
            pardoned += 1
    return pardoned / n * 100   # %

def play_optimal(n):
    # using 0-99 instead of ranges 1-100
    pardoned = 0
    in_drawer = list(range(100))
    for _round in range(n):
        random.shuffle(in_drawer)
        for prisoner in range(100):
            reveal = prisoner
            found = False
            for go in range(50):
                card = in_drawer[reveal]
                if card == prisoner:
                    found = True
                    break
                reveal = card
            if not found:
                break
        if found:
            pardoned += 1
    return pardoned / n * 100   # %

if __name__ == '__main__':
    n = 10_000
    print(" Simulation count:", n)
    print(f" Random play wins: {play_random(n):4.1f}% of simulations")
    print(f"Optimal play wins: {play_optimal(n):4.1f}% of simulations")</lang>
Output:
``` Simulation count: 10000
Random play wins:  0.0% of simulations
Optimal play wins: 30.9% of simulations
```
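An aside on why the optimal strategy works: a prisoner who keeps opening the drawer named by the last card revealed is tracing the permutation cycle that contains his own number, so all 100 prisoners succeed exactly when the longest cycle in the shuffled drawers has length at most 50. A short sketch verifying that equivalence against the chain-following search (the helper names `longest_cycle` and `all_find` are ours):

```python
import random

def longest_cycle(perm):
    """Length of the longest cycle in the permutation `perm` (a list)."""
    seen, longest = [False] * len(perm), 0
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, j = 0, start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        longest = max(longest, length)
    return longest

def all_find(perm, tries=50):
    """True if every prisoner finds his own card by following the chain."""
    for prisoner in range(len(perm)):
        reveal = prisoner
        for _ in range(tries):
            if perm[reveal] == prisoner:
                break
            reveal = perm[reveal]
        else:
            return False
    return True

# The two success criteria agree on every random shuffle.
random.seed(42)
for _ in range(1000):
    perm = random.sample(range(100), 100)
    assert all_find(perm) == (longest_cycle(perm) <= 50)
```

This is also why the win rate is as high as ~31%: failure requires a single cycle longer than 50, rather than 100 independent failures.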