Mempathy is a narrative video game experience that transforms the player's relationship with anxiety. The game's goal is to offer a reflective experience, and its winning state is defined by a feeling of progress and companionship with anxiety. That sense of progress is supported in the art by the evolving watercolours and by discovering a personalized conversation across the different chapters of the game.

Try Mempathy Demo here

Game Design

The gameplay follows this structure: first, the player unlocks a conversation by clicking spheres (StarObjects) across a series of blue watercolour scenes, making several choices that correspond to different constellation drawings. …


During those days the Dream Arcade Jam ran on the itch.io platform, and we were able to take part in it. Lots of fun and lessons learnt from this experience.

Here are some of the lessons learnt about game design and animation from this experience!

Seeds: complete gameplay

Idea: an arcade about love?

Arcade games are normally presented as competitive games, but there are also some great cooperative ones. We had the idea of making cooperation one of the core ideas of our game-jam entry, so we launched into some research on classics such as Bubble Bobble or Streets of Rage, gathering core ideas about cooperation in arcade games during brainstorming…

Communication is one of the components of MARL and an active area of research in itself, as it can influence the final performance of agents and directly affects coordination and negotiation. Effective communication is essential for agents to interact successfully, solving the challenges of cooperation, coordination, and negotiation between them.

Most research in multiagent systems has tried to address the communication needs of an agent (what information to send, when, and to whom), resulting in strategies optimized for the specific application for which they are adopted. Known communication protocols such as cheap talk can be seen as “doing by talking”, in which talk precedes action. …

‘I knew you would find your way here… eventually.’ (Queen of Blades to Zeratul)

StarCraft has been a machine learning research environment since Brood War. A couple of years ago, DeepMind released pysc2, a research environment for StarCraft II, and later, in 2019, the Whiteson Research Lab at Oxford open-sourced SMAC, a multiagent environment built on top of pysc2 with a cooperative setup, meaning that multiple agents cooperate towards a common goal. …


Today we will dig into the paper A Unified Game-Theoretic Approach to Multi-agent Reinforcement Learning, one of the core ideas used in the development of #AlphaStar. There are several concepts in AlphaStar that won't be treated here. The aim is to dig into the conceptual functioning of what has been called the “Nash League” and how game theory came to mix with reinforcement learning.

At the end of this article you should have a notion of the Double Oracle algorithm, Deep Cognitive Hierarchies, and Policy-Space Response Oracles.

For this post you should be familiar with some game theory concepts, like the setup of a strategic game in the form of a payoff matrix, and an understanding of Nash equilibria and best responses. You can visit some conceptual implementations and a numpy Python implementation…
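
To ground the best-response idea, here is a minimal numpy sketch; the payoff values below are made up for illustration. Given the row player's payoff matrix and a fixed opponent mixed strategy, the best response is simply the action with the highest expected payoff.

import numpy as np

# Row player's payoff matrix: rows are our actions, columns the opponent's.
payoff = np.array([[3., 0.],
                   [5., 1.]])

opponent_mix = np.array([0.5, 0.5])       # opponent's fixed mixed strategy
expected = payoff @ opponent_mix          # expected payoff per action: [1.5, 3.0]
best_response = int(np.argmax(expected))  # pure best response -> action 1
print(best_response, expected)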

In this article we will dive into a paper from Tencent AI Lab, the University of Rochester, and Northwestern University, and a new mini-game developed by myself (Blue Moon) that proposes a learning environment for tech tree development.

A significant part of the TStarBots paper (Tencent AI Lab, University of Rochester, Northwestern University) comes down to proposing a hierarchical model of actions inside the huge action space of the StarCraft II Learning Environment.

The human thinking model in the game can be summarized in several levels: macro or global strategy, map control, and battle execution or micro. …
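
To make the hierarchy concrete, here is a toy sketch of a two-level action model in the spirit of the paper; every name below is hypothetical and not taken from the TStarBots code.

# Toy two-level hierarchy: a macro action expands into an ordered
# sequence of micro (API-level) steps. All names are illustrative.
MACRO_ACTIONS = {
    "expand_base": ["select_worker", "build_base", "train_workers"],
    "army_push":   ["select_army", "attack_enemy_base"],
}

def run_macro(name, execute_micro):
    """Execute one macro action by running its micro steps in order."""
    for micro in MACRO_ACTIONS[name]:
        execute_micro(micro)

run_macro("expand_base", execute_micro=print)  # a controller reasoning only in macros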

Lab: Customize the agent

“Hierarch?” (Talandar, Protoss selected unit quote)

Image for post
Image for post

Try your own mini-game

You can try your own mini-game by following these steps in your pySC2 folder:

  • Create your mini-game. You can visit this tutorial, which changes DefeatRoaches into a DefeatWhatever melee map
  • Add your mini-game to the array in pysc2\maps\mini_games.py (see the sketch after this list)
  • Add the mini-game name to the Flag config and the training_game section of agent_CNN+LSTM.py
  • Run the agent in your console
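
For reference, mini_games.py keeps a plain list of map names and generates a map class for each entry, so registering a custom map is one extra string. A minimal sketch, with "BlueMoon" standing in for your map's name:

# In pysc2/maps/mini_games.py: append your map name to the existing list.
# The matching .SC2Map file must live in StarCraftII/Maps/mini_games.
mini_games = [
    "BuildMarines",
    "CollectMineralsAndGas",
    # ... other built-in entries ...
    "BlueMoon",  # your custom mini-game
]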

If you want to test it against the already built-in mini-games from DeepMind and Blizzard:

  • Add the mini-game name to the Flag config of agent_CNN+LSTM.py
  • Run the agent in your console

Your own policy or your own RL agent

You can customize the provided agent in several ways. You can change the policy and still use a DQN agent, which means taking another approach to the learning strategy while still using a neural network for function approximation. The trade-off between exploration and exploitation is challenging and an ongoing research topic. A recommended approach by the keras-rl authors is Boltzmann-style exploration [3], so if you feel like it, give it a try and feel free to share your results! …
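
As a rough sketch of what that swap could look like with keras-rl (the model shape and hyperparameters below are placeholders, not the values used in agent_CNN+LSTM.py):

from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

nb_actions = 4  # placeholder action count

# Placeholder Q-network: flatten the observation, then two dense layers.
model = Sequential([
    Flatten(input_shape=(1, 16, 16)),
    Dense(64, activation="relu"),
    Dense(nb_actions, activation="linear"),
])

# BoltzmannQPolicy samples actions with probability proportional to
# exp(Q / tau), so exploration is driven by the Q-values themselves.
dqn = DQNAgent(model=model, nb_actions=nb_actions,
               memory=SequentialMemory(limit=50000, window_length=1),
               policy=BoltzmannQPolicy(tau=1.0),
               nb_steps_warmup=100)
dqn.compile(Adam(lr=1e-3), metrics=["mae"])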

Lab: Running and training the agent

“The Void will answer” (Mohandar, Protoss trained unit quote)


Running and training

It's time to get the agent running! Type in your console:

$ python3 CNN_LSTM.py

Visualizing in TensorBoard

There is a pre-configured callback in the agent that lets you run TensorBoard. Once you start your training, type the following in your console. Note that the path/Graph directory will be created once training has started.

$ tensorboard --logdir path/Graph --host localhost --port 8088
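
The callback itself is the standard Keras one. A minimal sketch of how it might be wired into training (the exact log path and arguments in agent_CNN+LSTM.py may differ):

from keras.callbacks import TensorBoard

# Log training metrics under path/Graph; keras-rl's fit() accepts
# standard Keras callbacks alongside its own.
tensorboard = TensorBoard(log_dir="path/Graph", write_graph=True)
# dqn.fit(env, nb_steps=100000, callbacks=[tensorboard], verbose=2)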

Lab: Jumping into the machine learning agent

“Willingly.” (Mothership, Protoss selected unit quote)


Into the machine learning agent

Here is an overview of the agent's code: an informal walkthrough and descriptive inspection, intended to help others understand and improve the implementation. Functionally, the code takes the following steps (a condensed sketch follows the list).

  • Import statements from the libraries: pySC2, keras and keras-rl
  • Load actions from the API
  • Configure flags and parameters
  • Configure the processor with observations and batches
  • Define the environment
  • Define the agent model's DNN architecture
  • Train the agent on the game
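
A condensed sketch of how those steps fit together, assuming pysc2 2.x; names and sizes here are hypothetical and the real agent_CNN+LSTM.py differs in the details:

from absl import app, flags
from pysc2.env import sc2_env

# Flags and parameters.
FLAGS = flags.FLAGS
flags.DEFINE_string("map_name", "HallucinIce", "Mini-game to train on.")

def main(unused_argv):
    # Define the environment.
    with sc2_env.SC2Env(
            map_name=FLAGS.map_name,
            players=[sc2_env.Agent(sc2_env.Race.protoss)],
            agent_interface_format=sc2_env.parse_agent_interface_format(
                feature_screen=16, feature_minimap=16),
            step_mul=8) as env:
        # Build the DNN model, wrap it with the processor in a keras-rl
        # DQNAgent, then train with dqn.fit(env, nb_steps=...).
        pass

if __name__ == "__main__":
    app.run(main)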

Agent Overview and DNN Architecture

We understand the agent as the learner, the decision maker. Many different agents could be used for this challenge. The goal of the agent is to learn a policy (a control strategy) that maximizes the expected return (the cumulative, discounted reward). The agent uses knowledge of state transitions of the form (s_t, a_t, s_t+1, r_t+1) in order to learn and improve its policy. In DQN, we use a neural network as a function approximator for the Q-values. …
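
For intuition, the one-step DQN target for a single transition can be sketched in a few lines; q_net below is an assumed stand-in for the network's forward pass, returning one Q-value per action.

import numpy as np

def dqn_target(q_net, r_t1, s_t1, gamma=0.99, done=False):
    """Bootstrapped target: r_{t+1} + gamma * max_a Q(s_{t+1}, a)."""
    if done:
        return r_t1                        # no future value at episode end
    return r_t1 + gamma * np.max(q_net(s_t1))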

Lab: Quickstart overview of pySC2

“The task ahead is difficult, but worthwhile.” (Karax, Protoss repeatedly selected unit quote)


Configuration

You can configure all the requirements for getting started from the pysc2 official repository or in step II of this tutorial. In this section we will test the running environment with the mini-game we are going to train on.

Run Random Agent

Open a terminal and type

$ python -m pysc2.bin.agent --map HallucinIce

The StarCraft II client and the pySC2 feature-layer viewer should launch, with a random agent playing the mini-game.
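
For reference, the agent behind that command is essentially pysc2's built-in random agent, which looks roughly like this:

import numpy as np
from pysc2.agents import base_agent
from pysc2.lib import actions

class RandomAgent(base_agent.BaseAgent):
  """Each step, pick a random available action with random valid arguments."""

  def step(self, obs):
    super(RandomAgent, self).step(obs)
    function_id = np.random.choice(obs.observation["available_actions"])
    args = [[np.random.randint(0, size) for size in arg.sizes]
            for arg in self.action_spec.functions[function_id].args]
    return actions.FunctionCall(function_id, args)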

