
BBR Battle Calculator

Dates
September 2023 - Present
Role
Builder / Founder
Link
@bbrbattle.com
NumPy
Optuna
Redux
Material UI
NextJS
React
tailwindcss
Flask
MongoDB
Figma
Overview

I built this website as a tool for the BloodBath Redemption (BBR) community of Axis & Allies (A&A). BBR is a variant of the classic board game A&A 1940 edition. The game is played over 8 rounds, with 9 nations taking sequential actions in each round. Combat is based on a salvo combat model, a mathematical representation of combat occurring in waves between forces of varying relative strengths and sizes. In the game, combat follows a well-defined sequence between attacking and defending forces, and battle outcomes are determined by a stochastic process: rolling dice.
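The dice mechanic above can be sketched in a few lines. This is a minimal illustration, not the calculator's actual code: each unit rolls a d6 and scores a hit when the die shows its combat value or less (the unit values here are hypothetical).

```python
import random

def roll_hits(unit_values, rng=None):
    """Roll one d6 per unit; a hit occurs when the die shows
    the unit's combat value or less (values on a 1-6 scale)."""
    rng = rng or random.Random()
    return sum(1 for v in unit_values if rng.randint(1, 6) <= v)

# Hypothetical attacking force: two value-1 units and one value-3 unit.
hits = roll_hits([1, 1, 3])
```

Because each roll is independent, repeating this process many times yields the hit distribution for a force rather than a single outcome.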

Solution

BBR Battle Calculator is a Monte Carlo simulator that follows the rules of combat to determine battle outcomes and present a statistical analysis to the user. The site has 500+ users, a large share of our limited community of 800 (and growing). While other A&A battle calculators exist, this site is specific to BBR, which is why it is so popular in our community. The predominant differences include the application of "technology", which changes the stats of affected units; different "combined arms" rules; the addition of Anti-Aircraft defense to select ships; and "target strikes".
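The Monte Carlo approach can be sketched as follows. This is a simplified stand-in, assuming simultaneous fire and casualties taken in list order; the real calculator implements the full BBR combat sequence and smarter casualty selection.

```python
import random

def simulate_battle(attack_values, defend_values, rng):
    """Fight salvo rounds until one side is eliminated; return True
    if the attacker wins. Casualties are removed in list order here."""
    att, dfd = list(attack_values), list(defend_values)
    while att and dfd:
        att_hits = sum(1 for v in att if rng.randint(1, 6) <= v)
        dfd_hits = sum(1 for v in dfd if rng.randint(1, 6) <= v)
        # Simultaneous fire: both sides take casualties at once.
        dfd = dfd[:max(0, len(dfd) - att_hits)]
        att = att[:max(0, len(att) - dfd_hits)]
    return bool(att)

def win_probability(attack_values, defend_values, trials=10_000, seed=0):
    """Estimate the attacker's win probability by repeated simulation."""
    rng = random.Random(seed)
    wins = sum(simulate_battle(attack_values, defend_values, rng)
               for _ in range(trials))
    return wins / trials
```

Running thousands of trials like this turns a single dice-driven battle into an estimated distribution of outcomes, which is what the statistical analysis is built from.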

Innovation

One of the problems I wrestled with while initially building this site was how each side (the attacker and defender) could intelligently take casualties. While a static ordering is easy to implement, it is suboptimal in most situations. Moreover, in many cases an "optimal" casualty-selection policy is unknown and mired in player bias. To solve this problem, I turned to reinforcement learning. I converted the initial (statically ordered) calculator into a simulation environment and used it to train a Deep Q-Learning algorithm on the state-value function of different unit configurations. Under the learned policy, the attacker selects casualties that maximize its expected outcome (the value of units remaining after combat), and the defender selects casualties that minimize it.
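The selection step can be illustrated in miniature. The deployed model is a trained deep Q-network; as an illustrative stand-in, this sketch scores each candidate post-casualty state with a linear value approximation (`featurize` and the `weights` vector are hypothetical, not the real learned function).

```python
import numpy as np

def featurize(units):
    """Hypothetical state features: counts of each unit type (0, 1, 2)."""
    return np.bincount(units, minlength=3).astype(float)

def select_casualty(units, weights, minimize=False):
    """Greedy casualty choice: score every post-casualty state and keep
    the highest-valued one (attacker) or, with minimize=True, the
    lowest-valued one (defender minimizing the opponent's outcome)."""
    scores = [featurize(units[:i] + units[i + 1:]) @ weights
              for i in range(len(units))]
    return int(np.argmin(scores) if minimize else np.argmax(scores))
```

With per-type values of 1, 3, and 5, an attacker holding units `[0, 2, 1]` sacrifices the cheap type-0 unit, since that leaves the most valuable force on the board.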

Future

I intend to continue developing this site, building new tools for players and extending the calculator to cover a broader range of the tactical situations players face in-game. Beyond the calculator, I would like to build new tools that expand the reach, accessibility, and playability of the game.