Commit
Initial paper list, code links, and figures (#1)
XinrunXu authored Feb 22, 2024
1 parent e80edf8 commit 4e9a661
Showing 3 changed files with 128 additions and 1 deletion.
README.md: 129 changes (128 additions, 1 deletion)
@@ -1 +1,128 @@
# GPA-LM
# A Survey on Game Playing Agents and Large Models: Methods, Applications, and Challenges
---
[![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
![PRs Welcome](https://img.shields.io/badge/PRs-Welcome-green)
![Stars](https://img.shields.io/github/stars/BAAI-Agents/GPA-LM)
[![Visits Badge](https://badges.pufler.dev/visits/BAAI-Agents/GPA-LM)](https://badges.pufler.dev/visits/BAAI-Agents/GPA-LM)
![Forks](https://img.shields.io/github/forks/BAAI-Agents/GPA-LM)

🏃 **Coming soon**: a one-sentence intro for each paper.

<div align=center><img src="Survey_Diagram.png" width="80%" /></div>

## 🌟 News
[2024/02/06] Created this repository to maintain a list of papers on LLM-based agents for game playing. More papers coming soon!

<div align=center><img src="Survey_timeline.png" width="100%" /></div>

## 2023/12
- [2023/12/19] Large Language Models Play StarCraft II: Benchmarks and A Chain of Summarization Approach. [[paper](https://arxiv.org/abs/2312.11865)]
- [2023/12/14] Auto MC-Reward: Automated Dense Reward Design with Large Language Models for Minecraft. [[paper](https://arxiv.org/pdf/2312.09238.pdf)]
- [2023/12/12] MP5: A Multi-modal Open-ended Embodied System in Minecraft via Active Perception. [[paper](https://arxiv.org/abs/2312.07472)] [[code](https://github.com/IranQin/MP5)]
- [2023/12/08] Apollo's Oracle: Retrieval-Augmented Reasoning in Multi-Agent Debates. [[paper](https://arxiv.org/pdf/2312.04854.pdf)] [[code](https://github.com/FutureForMe/MADRA)]
- [2023/12/07] A Framework for Exploring Player Perceptions of LLM-Generated Dialogue in Commercial Video Games. [[paper](https://aclanthology.org/2023.findings-emnlp.151.pdf)] [[website](https://pl.aiwright.dev/)]
- [2023/12/05] Creative Agents: Empowering Agents with Imagination for Creative Tasks. [[paper](https://arxiv.org/pdf/2312.02519.pdf)] [[code](https://github.com/PKU-RL/Creative-Agents)]
- [2023/12/04] Visual Encoders for Data-Efficient Imitation Learning in Modern Video Games. [[paper](https://arxiv.org/pdf/2312.02312.pdf)]
- [2023/12/02] Building Open-Ended Embodied Agent via Language-Policy Bidirectional Adaptation. [[paper](https://arxiv.org/pdf/2401.00006.pdf)]
- [2023/12/01] Deciphering Digital Detectives: Understanding LLM Behaviors and Capabilities in Multi-Agent Mystery Games. [[paper](https://arxiv.org/pdf/2312.00746.pdf)]

## 2023/11
- [2023/11/20] DesignGPT: Multi-Agent Collaboration in Design. [[paper](https://arxiv.org/pdf/2311.11591.pdf)]
- [2023/11/14] MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration. [[paper](https://arxiv.org/abs/2311.08562)] [[code](https://github.com/cathyxl/MAgIC)]
- [2023/11/10] JARVIS-1: Open-World Multi-Task Agents with Memory-Augmented Multimodal Language Models. [[paper](https://arxiv.org/abs/2311.05997)] [[code](https://github.com/CraftJarvis/JARVIS-1)]

## 2023/10
- [2023/10/29] Language Agents with Reinforcement Learning for Strategic Play in the Werewolf Game. [[paper](https://arxiv.org/abs/2310.18940)]
- [2023/10/23] LLM-Based Agent Society Investigation: Collaboration and Confrontation in Avalon Gameplay. [[paper](https://arxiv.org/abs/2310.14985)]
- [2023/10/20] Steve-Eye: Equipping LLM-based Embodied Agents with Visual Perception in Open Worlds. [[paper](https://arxiv.org/pdf/2310.13255.pdf)] [[code](https://github.com/BAAI-Agents/Steve-Eye)]
- [2023/10/19] GestureGPT: Zero-shot Interactive Gesture Understanding and Grounding with Large Language Model Agents. [[paper](https://arxiv.org/pdf/2310.12821.pdf)]
- [2023/10/16] Character-LLM: A Trainable Agent for Role-Playing. [[paper](https://arxiv.org/pdf/2310.10158.pdf)] [[code](https://github.com/choosewhatulike/trainable-agents)]
- [2023/10/13] LLaMA Rider: Spurring Large Language Models to Explore the Open World. [[paper](https://arxiv.org/abs/2310.08922)]
- [2023/10/12] GROOT: Learning to Follow Instructions by Watching Gameplay Videos. [[paper](https://arxiv.org/abs/2310.08235)] [[website](https://craftjarvis-groot.github.io/)]
- [2023/10/12] Octopus: Embodied Vision-Language Programmer from Environmental Feedback. [[paper](https://arxiv.org/abs/2310.08588)] [[code](https://github.com/dongyh20/Octopus)]
- [2023/10/10] MetaAgents: Simulating Interactions of Human Behaviors for LLM-Based Task-Oriented Coordination via Collaborative Generative Agents. [[paper](https://arxiv.org/abs/2310.06500)]
- [2023/10/09] Humanoid Agents: Platform for Simulating Human-like Generative Agents. [[paper](https://arxiv.org/abs/2310.05418)] [[code](https://github.com/HumanoidAgents/HumanoidAgents)]
- [2023/10/08] AvalonBench: Evaluating LLMs Playing the Game of Avalon. [[paper](https://arxiv.org/pdf/2310.05036.pdf)] [[code](https://github.com/jonathanmli/Avalon-LLM)]
- [2023/10/03] HOLMES: Towards End-to-End Embodied Decision Making with Multi-modal Large Language Model. [[paper](https://arxiv.org/pdf/2310.02071.pdf)] [[code](https://github.com/pkunlp-icler/PCA-EVAL)]
- [2023/10/02] SmartPlay: A Benchmark for LLMs as Intelligent Agents. [[paper](https://arxiv.org/abs/2310.01557)] [[code](https://github.com/LLMsmartplay/SmartPlay)]
- [2023/10/02] Avalon's Game of Thoughts: Battle Against Deception through Recursive Contemplation. [[paper](https://arxiv.org/pdf/2310.01320.pdf)]
- [2023/10/01] RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models. [[paper](https://arxiv.org/abs/2310.00746)] [[code](https://github.com/InteractiveNLP-Team/RoleLLM-public)]

## 2023/09
- [2023/09/29] Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4. [[paper](https://arxiv.org/pdf/2309.17277.pdf)] [[code](https://github.com/CR-Gjx/Suspicion-Agent)]
- [2023/09/29] AutoAgents: A Framework for Automatic Agent Generation. [[paper](https://arxiv.org/pdf/2309.17288.pdf)] [[code](https://github.com/Link-AGI/AutoAgents)]
- [2023/09/21] True Knowledge Comes from Practice: Aligning Large Language Models with Embodied Environments via Reinforcement Learning. [[paper](https://arxiv.org/pdf/2401.14151.pdf)] [[code](https://github.com/WeihaoTan/TWOSOME)]
- [2023/09/18] MindAgent: Emergent Gaming Interaction. [[paper](https://arxiv.org/abs/2309.09971)] [[code](https://github.com/mindagent/mindagent)]
- [2023/09/14] Agents: An Open-source Framework for Autonomous Language Agents. [[paper](https://arxiv.org/pdf/2309.07870.pdf)] [[code](https://github.com/aiwaves-cn/agents)]
- [2023/09/09] Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf. [[paper](https://browse.arxiv.org/pdf/2309.04658.pdf)]

## 2023/08
- [2023/08/23] Are ChatGPT and GPT-4 Good Poker Players?--A Pre-Flop Analysis. [[paper](https://arxiv.org/pdf/2308.12466.pdf)]
- [2023/08/22] ProAgent: Constructing Proactive Cooperative AI Using Large Language Models. [[paper](https://arxiv.org/abs/2308.11339)] [[code](https://github.com/PKU-Alignment/ProAgent)]
- [2023/08/21] Agentverse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents. [[paper](https://arxiv.org/abs/2308.10848)] [[code](https://github.com/OpenBMB/AgentVerse)]
- [2023/08/16] AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework. [[paper](https://arxiv.org/pdf/2308.08155.pdf)]
- [2023/08/15] CALYPSO: LLMs as Dungeon Master's Assistants. [[paper](https://arxiv.org/pdf/2308.07540.pdf)]
- [2023/08/01] MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework. [[paper](https://arxiv.org/abs/2308.00352)] [[code](https://github.com/geekan/MetaGPT)]

## 2023/07
- [2023/07/24] Tachikuma: Understanding Complex Interactions with Multi-Character and Novel Objects by large Language Models. [[paper](https://arxiv.org/pdf/2307.12573.pdf)]
- [2023/07/12] SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Task Planning. [[paper](https://arxiv.org/abs/2307.06135)]
- [2023/07/05] Building Cooperative Embodied Agents Modularly with Large Language Models. [[paper](https://arxiv.org/pdf/2307.02485.pdf)] [[code](https://github.com/UMass-Foundation-Model/Co-LLM-Agents/)]
- [2023/07/04] TaPA: Embodied Task Planning with Large Language Models. [[paper](https://arxiv.org/pdf/2307.01848.pdf)] [[code](https://github.com/Gary3410/TaPA)]

## 2023/06
- [2023/06/23] Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos. [[paper](https://arxiv.org/pdf/2206.11795.pdf)]
- [2023/06/20] SPRINT: Scalable Policy Pre-Training via Language Instruction Relabeling. [[paper](https://arxiv.org/abs/2306.11886)] [[code](https://github.com/clvrai/sprint)]
- [2023/06/01] STEVE-1: A Generative Model for Text-to-Behavior in Minecraft. [[paper](https://arxiv.org/pdf/2306.00937.pdf)] [[code](https://github.com/Shalev-Lifshitz/STEVE-1)]

## 2023/05
- [2023/05/30] AlphaBlock: Embodied Finetuning for Vision-Language Reasoning in Robot Manipulation. [[paper](https://arxiv.org/pdf/2305.18898.pdf)]
- [2023/05/26] Playing repeated games with Large Language Models. [[paper](https://arxiv.org/abs/2305.16867)]
- [2023/05/25] Voyager: An Open-Ended Embodied Agent with Large Language Models. [[paper](https://arxiv.org/pdf/2305.16291.pdf)] [[code](https://github.com/MineDojo/Voyager)]
- [2023/05/25] Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory. [[paper](https://arxiv.org/pdf/2305.17144.pdf)] [[code](https://github.com/OpenGVLab/GITM)]
- [2023/05/24] SPRING: Studying Papers and Reasoning to Play Games. [[paper](https://arxiv.org/abs/2305.15486)] [[code](https://github.com/Holmeswww/SPRING)]
- [2023/05/23] Improving Factuality and Reasoning in Language Models through Multiagent Debate. [[paper](https://arxiv.org/abs/2305.14325)] [[code](https://github.com/composable-models/llm_multiagent_debate)]
- [2023/05/17] Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback. [[paper](https://arxiv.org/abs/2305.10142)] [[code](https://github.com/FranxYao/GPT-Bargaining)]
- [2023/05/09] Tidybot: Personalized Robot Assistance with Large Language Models. [[paper](https://arxiv.org/pdf/2305.05658.pdf)] [[code](https://github.com/jimmyyhwu/tidybot)]
- [2023/05/01] ArK: Augmented Reality with Knowledge Interactive Emergent Ability. [[paper](https://arxiv.org/pdf/2305.00970.pdf)]

## 2023/04
- [2023/04/07] Generative Agents: Interactive Simulacra of Human Behavior. [[paper](https://arxiv.org/pdf/2304.03442.pdf)] [[code](https://github.com/joonspk-research/generative_agents)]
- [2023/04/06] Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions. [[paper](https://arxiv.org/pdf/2304.02868.pdf)] [[code](https://github.com/hongyuanmei/chatgpt-play-zork/tree/main)]

## 2023/03
- [2023/03/31] CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society. [[paper](https://arxiv.org/pdf/2303.17760.pdf)] [[code](https://github.com/camel-ai/camel)]
- [2023/03/29] Skill Reinforcement Learning and Planning for Open-World Long-Horizon Tasks. [[paper](https://arxiv.org/abs/2303.16563)] [[code](https://github.com/PKU-RL/Plan4MC)]
- [2023/03/08] Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models. [[paper](https://arxiv.org/pdf/2303.04671.pdf)] [[code](https://github.com/moymix/TaskMatrix)]
- [2023/03/06] PaLM-E: An Embodied Multimodal Language Model. [[paper](https://arxiv.org/abs/2303.03378)]

## 2023/02
- [2023/02/12] MarioGPT: Open-Ended Text2Level Generation through Large Language Models. [[paper](https://arxiv.org/abs/2302.05981)] [[code](https://github.com/shyamsn97/mario-gpt)]
- [2023/02/03] Describe, Explain, Plan and Select: Interactive Planning with LLMs Enables Open-World Multi-Task Agents. [[paper](https://arxiv.org/pdf/2302.01560.pdf)] [[code](https://github.com/CraftJarvis/MC-Planner)]

## 2023/01
- [2023/01/21] Open-World Multi-Task Control through Goal-Aware Representation Learning and Adaptive Horizon Prediction. [[paper](https://arxiv.org/abs/2301.10034)] [[code](https://github.com/CraftJarvis/MC-Controller)]

## 2022
- [2022/11/21] Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models. [[paper](https://arxiv.org/abs/2211.11736)]
- [2022/08/08] Social Simulacra: Creating Populated Prototypes for Social Computing Systems. [[paper](https://arxiv.org/abs/2208.04024)]
- [2022/07/12] Inner Monologue: Embodied Reasoning through Planning with Language Models. [[paper](https://arxiv.org/abs/2207.05608)]
- [2022/06/07] MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge. [[paper](https://arxiv.org/abs/2206.08853)] [[code](https://github.com/MineDojo/MineDojo)]

## Citation
If you find this repository useful, please cite our paper:

```bibtex
@misc{xu2024survey,
      title={A Survey on Game Playing Agents and Large Models: Methods, Applications, and Challenges},
      author={Xinrun Xu and Yuxin Wang and Chaoyi Xu and Ziluo Ding and Jiechuan Jiang and Zhiming Ding and Börje F. Karlsson},
      year={2024},
      eprint={2402.XXXXX},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```

## Contact
- Börje F. Karlsson [@tellarin](https://github.com/tellarin): [email protected]
- Xinrun Xu [@XinrunXu](https://github.com/XinrunXu): [email protected]
Binary file added Survey_Diagram.png
Binary file added Survey_timeline.png
