This blog post presents the Mempathy video game as a Safety and Alignment opportunity, along with the results and lessons learned from implementing controlled language generation with Plug and Play Language Models (PPLM) for NPC design. The results show that safe and aligned conversation in narrative games goes beyond controlling language models and requires active design and human supervision, proposing Mempathy as an example of a video game for Alignment. It may be useful as an example of designing companionship in NPCs from a game-design perspective, and it presents results from implementing machine learning techniques in NPCs.

Fig 1. In Mempathy, the player guides the conversation with an NPC. Large Language Models with PPLM give consistency and fluency to the conversation, while Mempathy's gameplay creates an aligned and safe conversation.

Index Terms — Narrative and Interactive…


Mempathy is a narrative video game experience that transforms the player's relationship with anxiety. The game's goal is to offer a reflective experience, and the winning state is defined by a feeling of advancement and companionship towards anxiety. The idea of progress is supported in the art by watercolour progression and by discovering a personalized conversation across the different chapters of the game.

Try Mempathy Demo here

Game Design

The gameplay is developed according to the following structure: first, the player unlocks a conversation through clickable spheres (StarObjects), following a series of blue watercoloured scenes and making several choices corresponding to several constellation drawings. …


Over these past days the Dream Arcade Jam has been running on the itch.io platform, and some of us were able to take part in it. Lots of fun and lessons learned from this experience.

Here are some lessons learned about game design and animation from this experience!

Seeds: complete gameplay

Idea: An arcade about love?

Arcade games are normally presented as competitive games, but there are also some great cooperative ones. For us, cooperation was one of the core ideas for the video game jam, so we launched into some research with classics such as Bubble Bobble or…


Communication is one of the components of MARL and an active area of research in itself, as it can influence the final performance of agents and directly affects coordination and negotiation. Effective communication is essential for agents to interact successfully, solving the challenges of cooperation, coordination, and negotiation between several agents.

Most research in multiagent systems has tried to address the communication needs of an agent: what information to send, when, and to whom, resulting in strategies that are optimized for the specific application for which they are adopted. Known communication protocols such as cheap talk can be seen…


‘I knew you would find your way here… eventually’ — Queen of Blades to Zeratul

StarCraft has been present as a machine learning environment for research since the Brood War days. A couple of years ago, DeepMind released pysc2, a research environment for StarCraft II, and later, in 2019, the Whiteson Research Lab at Oxford open-sourced SMAC, a cooperative multiagent environment built on top of pysc2, meaning that multiple agents cooperate towards a common goal. …
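As a quick, hands-on illustration (not part of the original announcement), here is a minimal SMAC interaction loop, assuming the smac package is installed and the 8m map is available; every agent simply picks a random available action each step.

import numpy as np
from smac.env import StarCraft2Env

# Minimal random-action loop on a SMAC scenario (8 Marines vs 8 Marines).
env = StarCraft2Env(map_name="8m")
n_agents = env.get_env_info()["n_agents"]

for episode in range(3):
    env.reset()
    terminated = False
    episode_reward = 0.0
    while not terminated:
        # Each agent chooses uniformly among its currently available actions.
        actions = []
        for agent_id in range(n_agents):
            avail = env.get_avail_agent_actions(agent_id)
            actions.append(np.random.choice(np.nonzero(avail)[0]))
        reward, terminated, info = env.step(actions)   # one shared team reward
        episode_reward += reward
    print("Episode {}: shared reward {:.2f}".format(episode, episode_reward))

env.close()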


Today we will dig into the paper A Unified Game-Theoretic Approach to Multi-agent Reinforcement Learning, one of the core ideas used in the development of #AlphaStar. There are several concepts in AlphaStar that won't be treated here. The aim is to dig into the concepts behind what has been described as the "Nash League" and how game theory came to mix with reinforcement learning.

By the end of this article you should have a notion of the Double Oracle algorithm, Deep Cognitive Hierarchies, and Policy-Space Response Oracles.
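As a small taste of the first of those ideas, here is a toy sketch of the Double Oracle loop on a zero-sum matrix game. The rock-paper-scissors payoff matrix, the function names, and the use of scipy.optimize.linprog are my own illustrative choices, not anything from the paper's (or AlphaStar's) actual implementation.

import numpy as np
from scipy.optimize import linprog


def solve_matrix_game(A):
    # Maximin mixed strategy and value for the row player of the zero-sum game A.
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # maximize v  <=>  minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v - (x^T A)_j <= 0 for every column j
    b_ub = np.zeros(n)
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)   # probabilities sum to 1
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]


def double_oracle(A, max_iters=100):
    # Grow restricted strategy sets by best-responding to the restricted-game equilibrium.
    m, n = A.shape
    rows, cols = [0], [0]                          # start from one arbitrary pure strategy each
    for _ in range(max_iters):
        sub = A[np.ix_(rows, cols)]
        x_sub, value = solve_matrix_game(sub)      # row mixture in the restricted game
        y_sub, _ = solve_matrix_game(-sub.T)       # column mixture in the restricted game
        x = np.zeros(m)
        x[rows] = x_sub
        y = np.zeros(n)
        y[cols] = y_sub
        br_row = int(np.argmax(A @ y))             # row player's best pure response to y
        br_col = int(np.argmin(x @ A))             # column player's best pure response to x
        grew = False
        if br_row not in rows:
            rows.append(br_row)
            grew = True
        if br_col not in cols:
            cols.append(br_col)
            grew = True
        if not grew:                               # no new best responses: the restricted
            break                                  # equilibrium solves the full game
    return x, y, value


# Rock-paper-scissors: Double Oracle recovers the uniform mixture and value 0.
rps = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
x, y, v = double_oracle(rps)
print(np.round(x, 3), np.round(y, 3), round(v, 3))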

For this post…


In this article we will dive into a paper by Tencent AI Lab, the University of Rochester, and Northwestern University, as well as a new mini-game developed by myself (Blue Moon) that proposes a learning environment for Tech Tree development.

A significant part of the TStarBots paper — Tencent AI Lab + University of Rochester + Northwestern University — comes down to proposing a hierarchical model of actions within the huge action space of the StarCraft II Learning Environment.

The human thinking model in the game can be summarized in several levels: macro or global strategy, map control, and battle execution or…


Lab: Customize the agent

“Hierach?” — Talandar, Protoss unit selection quote

Try your own mini-game

You can try your own mini-game by following these steps in your pySC2 folder:

  • Create your mini-game. You can follow this tutorial, which changes DefeatRoaches into a DefeatWhatever melee map
  • Add your mini-game to the array in pysc2\maps\mini_games.py (see the sketch after this list)
  • Add the mini-game name into the Flag config and training_game section of agent_CNN+LSTM.py
  • Run the agent in your console
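For the second step, the registration in pysc2/maps/mini_games.py looks roughly like this abridged sketch; the exact file differs between pysc2 versions, and DefeatWhatever stands in for your own map's file name.

# pysc2/maps/mini_games.py (abridged sketch): each name in the list is turned into
# a map class pointing at the .SC2Map file of the same name in the mini_games folder.
from pysc2.maps import lib


class MiniGame(lib.Map):
    directory = "mini_games"
    players = 1
    score_index = 0
    game_steps_per_episode = 0
    step_mul = 8


mini_games = [
    "CollectMineralShards",
    "DefeatRoaches",
    "DefeatWhatever",   # <- your custom mini-game, added alongside the built-in ones
]

for name in mini_games:
    globals()[name] = type(name, (MiniGame,), dict(filename=name))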

If you want to test it against the already built-in mini-games from DeepMind and Blizzard:

  • Add the mini-game name into the Flag config of the agent_CNN+LSTM.py
  • Run the agent in your console

Your own policy or your own RL agent

You can customize the…


Lab: Running and training the agent

“The Void will answer” — Mohandar, Protoss unit trained quote

Running and training

It's time to get the agent running! Type this in your console:

$ python3 CNN_LSTM.py

Visualizing in TensorBoard

There is a pre-configured callback in the agent that allows you to run TensorBoard. Once you start your training, type the following in your console. Note that path/Graph will be created once the training has started.

$ tensorboard --logdir path/Graph --host localhost --port 8088
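To illustrate what that callback does, here is a toy, self-contained stand-in (dummy model and data, not the agent's actual code): the only relevant part is the Keras TensorBoard callback writing event files under path/Graph, the same directory passed to --logdir above.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import TensorBoard

# Dummy model and data: the point is only the callback, which logs training
# metrics to path/Graph so that `tensorboard --logdir path/Graph` can read them.
model = Sequential([Dense(16, activation="relu", input_shape=(8,)), Dense(1)])
model.compile(optimizer="adam", loss="mse")

x, y = np.random.rand(256, 8), np.random.rand(256, 1)
model.fit(x, y, epochs=5, callbacks=[TensorBoard(log_dir="path/Graph")])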


Lab: Jumping into the machine learning agent

“Willingly.” — Mothership, Protoss unit selection quote

Into the machine learning agent

Here is an overview of the agent's code: an informal walkthrough and descriptive inspection, intended to help others understand and improve the implementation. In a functional overview, the code performs the following steps.

  • Import statements from libraries: pySC2, Keras and keras-rl
  • Load actions from the API
  • Configure flags and parameters
  • Configure the processor with observations and batches
  • Define the environment
  • Define the agent model's DNN architecture
  • Train the agent in the game
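To make that outline concrete, here is a rough structural skeleton in the same spirit. This is a sketch, not the actual agent_CNN+LSTM.py: the flag names, the toy CNN, and the no-op rollout are illustrative assumptions, and the keras-rl processor and training wiring are only hinted at in the comments.

# Structural sketch mirroring the steps above (not the real agent_CNN+LSTM.py).
from absl import app, flags
from pysc2.env import sc2_env                      # imports: pySC2 and Keras (keras-rl omitted)
from pysc2.lib import actions, features
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

FLAGS = flags.FLAGS                                # flags and parameters (names are assumptions)
flags.DEFINE_string("map_name", "DefeatRoaches", "Mini-game to train on.")
flags.DEFINE_integer("screen_size", 64, "Feature screen resolution.")

_NO_OP = actions.FUNCTIONS.no_op.id                # actions loaded from the API


def build_model(screen_size, num_actions):
    # Toy CNN standing in for the CNN+LSTM architecture of the real agent.
    model = Sequential([
        Conv2D(16, 5, activation="relu",
               input_shape=(screen_size, screen_size, 1)),
        Conv2D(32, 3, activation="relu"),
        Flatten(),
        Dense(256, activation="relu"),
        Dense(num_actions),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


def main(argv):
    del argv
    # The model is built but unused here: the keras-rl processor/agent that
    # would consume it is omitted from this sketch.
    model = build_model(FLAGS.screen_size, len(actions.FUNCTIONS))
    with sc2_env.SC2Env(                           # the environment
            map_name=FLAGS.map_name,
            players=[sc2_env.Agent(sc2_env.Race.terran)],
            agent_interface_format=features.AgentInterfaceFormat(
                feature_dimensions=features.Dimensions(
                    screen=FLAGS.screen_size, minimap=16)),
            step_mul=8) as env:
        timesteps = env.reset()
        while not timesteps[0].last():             # the training loop would go here;
            timesteps = env.step([actions.FunctionCall(_NO_OP, [])])  # shown: no-ops only


if __name__ == "__main__":
    app.run(main)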

Agent Overview and DNN Architecture

We understand the agent as the Learner, the decision maker. There might be…

gema.parreno.piqueras

Artificial Intelligence. Data visualization
