Mempathy is a narrative video game experience that transforms the player's relationship with anxiety. The game's goal is to offer a reflective experience, and the winning state is defined by a feeling of progress and of companionship with anxiety. The idea of progress is supported in the art by a watercolour progression and by discovering a personalized conversation across the different chapters of the game.
The gameplay follows this structure: first, the player unlocks a conversation through clickable spheres (StarObjects) across a series of blue watercoloured scenes, making several choices that correspond to several constellation drawings. …
Over the past few days the Dream Arcade Jam has been running on the itch.io platform, and some of us were able to take part in it. Lots of fun and lessons learned from this experience.
Here are some lessons learned about game design and animation from this experience!
Arcade games are normally presented as competitive games, but there are also some great cooperative ones. We had the idea of using cooperation as one of the core ideas of the game jam, so we launched into some research on classics such as Bubble Bobble or Streets of Rage, gathering core ideas about cooperation in arcade games during brainstorming…
Communication is one of the components of MARL and an active area of research in itself, as it can influence the final performance of agents and directly affects coordination and negotiation. Effective communication is essential for successful interaction, solving the challenges of cooperation, coordination, and negotiation between several agents.
Most research in multiagent systems has tried to address the communication needs of an agent: what information to send, when, and to whom, resulting in strategies that are optimized for the specific application for which they are adopted. Known communication protocols such as cheap talk can be seen as “doing by talking”, in which talk precedes action. …
StarCraft has been present as a machine learning research environment since Brood War. A couple of years ago, DeepMind released pysc2, a research environment for StarCraft II, and later, in 2019, Oxford's Whiteson Research Lab open-sourced SMAC, a multiagent environment based on pysc2 with a cooperative setup, meaning that in this environment multiple agents cooperate towards a common goal. …
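For orientation, here is a minimal sketch of SMAC's environment loop with random actions (assuming the smac package is installed; "8m", eight marines per side, is one of its standard scenarios):

import numpy as np
from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="8m")            # 8 marines vs 8 marines
n_agents = env.get_env_info()["n_agents"]

env.reset()
terminated = False
while not terminated:
    actions = []
    for agent_id in range(n_agents):
        # Each agent samples uniformly among its currently available actions.
        avail = env.get_avail_agent_actions(agent_id)
        actions.append(np.random.choice(np.nonzero(avail)[0]))
    # All agents act at once and share a single team reward (cooperative setup).
    reward, terminated, info = env.step(actions)
env.close()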
Today we will dig into the paper A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning, one of the core ideas used in the development of #AlphaStar. There are several concepts in AlphaStar that won't be treated here. The aim is to dig into the conceptual functioning of what has been called the “Nash League” and how game theory came to mix with reinforcement learning.
By the end of this article you should have a notion of the Double Oracle algorithm, Deep Cognitive Hierarchies and Policy-Space Response Oracles.
For this post you should be familiar with some game theory concepts, like the setup of a strategic game in the form of a payoff matrix, and an understanding of Nash equilibria and best responses. You can visit some conceptual implementations and a numpy python implementation…
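To ground the first of those concepts, here is a minimal numpy/scipy sketch of Double Oracle on a matrix game (function names such as solve_zero_sum are mine, for illustration): each iteration solves the Nash equilibrium of a restricted game, then adds each player's best response over the full game, until neither player can improve.

import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Nash equilibrium of a zero-sum matrix game (row maximizer) via LP."""
    m, n = payoff.shape
    shifted = payoff - payoff.min() + 1.0        # make all payoffs positive
    # min sum(y) s.t. shifted^T y >= 1, y >= 0; then x = y/sum(y), v = 1/sum(y)
    res = linprog(c=np.ones(m), A_ub=-shifted.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m)
    y = res.x
    return y / y.sum(), 1.0 / y.sum() + payoff.min() - 1.0

def double_oracle(payoff, iters=100):
    """Grow restricted strategy sets with full-game best responses."""
    rows, cols = [0], [0]                        # arbitrary initial strategies
    for _ in range(iters):
        sub = payoff[np.ix_(rows, cols)]
        x, _ = solve_zero_sum(sub)               # row player's equilibrium mix
        y, _ = solve_zero_sum(-sub.T)            # column player's equilibrium mix
        br_row = int(np.argmax(payoff[:, cols] @ y))   # best response row
        br_col = int(np.argmin(x @ payoff[rows, :]))   # best response column
        if br_row in rows and br_col in cols:
            break                                # no new strategy: converged
        if br_row not in rows:
            rows.append(br_row)
        if br_col not in cols:
            cols.append(br_col)
    return rows, cols

# Rock-paper-scissors: the oracle loop recovers the full strategy support.
rps = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
print(double_oracle(rps))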
In this article we will dive into a paper by Tencent AI Lab, the University of Rochester and Northwestern University, and a new mini-game developed by myself (Blue Moon) that proposes a learning environment for tech tree development.
A significant part of the TStarBots paper (Tencent AI Lab + University of Rochester + Northwestern University) comes down to proposing a hierarchical model of actions inside the huge action space of the StarCraft II Learning Environment.
The human thinking model in the game can be summarized in several levels: macro or global strategy, map control, and battle execution or micro. …
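As a toy illustration of that hierarchy (all names here are hypothetical, taken neither from the paper nor from pysc2), a macro action can be modelled as a named sequence of micro actions, so the learner only chooses among a handful of macros:

# Each macro action expands into a scripted sequence of micro actions.
MACRO_ACTIONS = {
    "expand":    ["select_worker", "move_to_expansion", "build_base"],
    "grow_army": ["select_base", "train_unit"],
    "push":      ["select_army", "attack_move_enemy_base"],
}

def execute_macro(name, do_micro):
    """Run every micro step of one macro; the policy only picks `name`."""
    for micro in MACRO_ACTIONS[name]:
        do_micro(micro)

# The decision space shrinks from many micro actions to three macros.
execute_macro("grow_army", do_micro=print)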
“Hierarch?” (Talandar, Protoss unit selection quote)
You can try your own mini-game by making the following changes in your pysc2 folder.
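Concretely, the change looks something like this (a sketch assuming pysc2's maps/mini_games.py layout; HallucinIce is the custom map used later in this post): register the map name in the mini_games list, and place the .SC2Map file in your StarCraftII/Maps/mini_games/ directory.

# In pysc2/maps/mini_games.py, append your map to the list of names;
# pysc2 generates a map class for every entry in this list.
mini_games = [
    "BuildMarines",
    "CollectMineralShards",
    "DefeatRoaches",
    # ... remaining built-in maps ...
    "HallucinIce",   # your custom mini-game
]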
If you want to test it against the already built-in mini-games from DeepMind and Blizzard:
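For example, with one of the standard maps:

$ python -m pysc2.bin.agent --map CollectMineralShards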
You can customize the provided agent in several ways: you can change the policy and still use a DQN agent, meaning that you take another approach to the learning strategy but still use a neural network for function approximation. The trade-off between exploration and exploitation is challenging and an ongoing research topic. A recommended approach by the keras-rl authors is Boltzmann-style exploration [3], so if you feel like it, give it a try and feel free to share your results! …
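A minimal sketch of that policy swap with keras-rl (the observation shape and action count below are placeholders; adapt them to the agent's real spaces):

from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

obs_shape, nb_actions = (64, 64), 4              # placeholder dimensions

model = Sequential([
    Flatten(input_shape=(1,) + obs_shape),       # window_length=1 adds an axis
    Dense(256, activation='relu'),
    Dense(nb_actions, activation='linear'),      # one Q-value per action
])

dqn = DQNAgent(model=model, nb_actions=nb_actions,
               memory=SequentialMemory(limit=50000, window_length=1),
               policy=BoltzmannQPolicy(tau=1.0), # softmax over the Q-values
               nb_steps_warmup=1000, target_model_update=1e-2)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])

Lowering tau makes the softmax greedier; raising it pushes exploration towards uniform random actions.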
“The Void will answer.” (Mohandar, Protoss trained unit quote)
It's time to get the agent running! Type in your console:
$ python3 CNN_LSTM.py
There is a pre-configured callback in the agent that allows you to run TensorBoard. Once you start your training, type the following in your console; note that path/Graph will be created once the training has started. You should see something like this.
$ tensorboard --logdir path/Graph --host localhost --port 8088
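For reference, this is roughly how such a callback is wired into a keras-rl training run (a sketch: dqn and env stand for the agent and environment objects built in the agent script):

from keras.callbacks import TensorBoard

# Write training graphs and metrics under path/Graph for TensorBoard to serve.
tensorboard = TensorBoard(log_dir='path/Graph', write_graph=True, write_images=True)
dqn.fit(env, nb_steps=100000, callbacks=[tensorboard], verbose=2)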
“Willingly.” (Mothership, Protoss unit selection quote)
Here is an overview of the agent's code: an informal walkthrough and descriptive inspection of the code, in order to help others understand and improve the implementation. At a functional level, the code performs the following steps.
We understand the agent as the learner, the decision maker. There are many different agents that could be used for this challenge. The goal of the agent is to learn a policy (a control strategy) that maximizes the expected return (the cumulative, discounted reward). The agent uses knowledge of state transitions of the form (s_t, a_t, s_t+1, r_t+1) in order to learn and improve its policy. In DQN, we use a neural network as a function approximator for the Q-values. …
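To make the update concrete, here is a sketch of how a batch of such transitions turns into Q-value targets in DQN (model and target_model stand for Keras-style networks; all names are illustrative):

import numpy as np

def dqn_targets(batch, model, target_model, gamma=0.99):
    """Build Q-value regression targets from transitions (s_t, a_t, s_t+1, r_t+1)."""
    states, actions, next_states, rewards, dones = batch
    q_next = target_model.predict(next_states)    # Q(s_t+1, ·) from the frozen net
    targets = model.predict(states)               # start from current estimates
    # Only the taken action's target moves, towards the bootstrapped return.
    targets[np.arange(len(actions)), actions] = (
        rewards + gamma * (1.0 - dones) * q_next.max(axis=1)
    )
    return targets                                # model.fit(states, targets) trains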
“The task ahead is difficult, but worthwhile.” (Karax, Protoss repeatedly selected unit quote)
You can set up all the requirements by following the pysc2 official repository or step II of this tutorial. In this section we will test the running environment with the mini-game that we are going to train.
Open a terminal and type:
$ python -m pysc2.bin.agent --map HallucinIce
You should see something like this.