Can You Beat My Single Layer Neural Network?

Thanks to PyScript for their ingenious solution for building simple interactive webpages.

Future Improvements

To enhance the abilities of my game network, I would like to start by creating an output layer of 3 nodes feeding a softmax activation function: each node would represent the probability that the player's next move is Rock, Paper, or Scissors, with the three probabilities summing to 100%. We could expect some marginal improvement from this, because my current system outputs on a tanh function, which ranges from -1 to 1, and so implies that -1 (Rock) is infinitely far away from 1 (Paper). That encoding gives the network the assumption that if the player chooses Rock, there was a 0% chance they could have chosen Paper instead, which is not always the case. An example output involving horses and cats is given here.
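As a minimal sketch of what that output layer would look like, here is a softmax over three hypothetical raw scores (the logits are made-up values, not outputs from my actual network):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical raw scores from a 3-node output layer,
# one node per move: [Rock, Paper, Scissors]
logits = np.array([2.0, 1.0, 0.1])

probs = softmax(logits)
print(probs)        # three probabilities, largest for Rock
print(probs.sum())  # always sums to 1.0
```

Unlike a single tanh output, every move gets a nonzero probability here, so "the player will probably pick Rock but might pick Paper" is directly representable.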

I'd also like to include a sort of "dice roll" function that randomly selects a move according to those probabilities, ensuring the computer remains somewhat random and significantly more difficult to beat. As it stands, a player could in theory consistently win against the computer by identifying their own patterns and subverting them, which is how professional RPS players operate (yes, they do exist - watch the Las Vegas tournament here). By adding an element of randomness that still favors the player's most probable moves, the computer could more effectively beat RPS players who think too much about their moves.
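A sketch of that dice roll, assuming the softmax probabilities from the previous idea are available (the `dice_roll` name and the example probabilities are my own illustration, not code from the game):

```python
import numpy as np

rng = np.random.default_rng()

def dice_roll(probs, moves=("Rock", "Paper", "Scissors")):
    """Sample a predicted player move in proportion to its probability,
    rather than always taking the single most likely move."""
    return rng.choice(moves, p=probs)

# Hypothetical softmax output: the network thinks Rock is most likely
probs = [0.6, 0.3, 0.1]
predicted = dice_roll(probs)

# The computer then plays the counter to the sampled prediction
counter = {"Rock": "Paper", "Paper": "Scissors", "Scissors": "Rock"}
computer_move = counter[predicted]
```

Because the prediction is sampled rather than fixed, a player who notices "the computer always counters my last pattern" can no longer exploit it deterministically, yet the computer still leans toward countering the most probable move.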

Outside of choosing among probable moves and introducing effective randomness, I would like to collect training data from many rounds of play so the network starts already initialized when gameplay begins - eliminating the need for a practice round to get the computer up to speed. Finally, I would like to feed in more player moves and adjust the win-loss function that adds 0.3 to the player move, to check whether it can be further optimized.

Images provided by Sewade Ogun and Wikimedia Commons; the last chart is my own.


"I never lose. I either win, or I update my parameters."

- Neuralnetson Mandela

Galen Holland 2022