Table of Contents
- Emergence of Grounded Compositional Language in Multi-Agent Populations
- Idea Symposium
- Additional Discussed Papers
- Presenter: Andre
- Paper: Emergence of Grounded Compositional Language in Multi-Agent Populations
- Abstract: By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of nonverbal communication such as pointing and guiding when language communication is unavailable.
After this presentation, we will hold an ‘idea zoo symposium’: many members are thinking about cool ideas, and this will be a place to share and think through them together.
Emergence of Grounded Compositional Language in Multi-Agent Populations
A popular science article has been written on this paper: here.
Access slides here.
- Split neural network - each neuron becomes two neural networks.
  - Shared weights
- Systematic activation function discovery
- Modeling the neuron using a more complex unit of computation
- ‘First order’ networks vs \(n\)th order networks
  - Shared weights
- Lottery Ticket Hypothesis + variations
- ‘Progressive Random Search’ optimizer - begin from a good ticket and freeze, randomize + freeze more, repeat
- Adversarial examples on humans - do adversarial examples exist for humans? Can we find them? Is there a difference between those two questions?
- Adversarial attacks on Hebbian networks
- Second-derivative optimizer - using curvature (second-derivative) estimates to guide the size and direction of update steps
  - Relationship to the Taylor series
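The lottery-ticket items above center on one core operation: keeping only the largest-magnitude weights of a trained network. A minimal sketch of one-shot magnitude pruning (the function name and interface here are hypothetical, not from any of the discussed papers):

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Return a binary mask keeping the largest-magnitude weights.

    `sparsity` is the fraction of weights to remove; in lottery-ticket
    experiments the surviving weights are then rewound to their original
    initialization and retrained.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    # k-th smallest magnitude serves as the pruning threshold.
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    return (np.abs(weights) >= threshold).astype(weights.dtype)
```

In the iterated variant, this masking step is applied over several train-prune-rewind rounds rather than once.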
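The ‘Progressive Random Search’ idea above could be sketched roughly as follows. This is one possible reading of the one-line description, not a worked-out algorithm: start from a good initialization (a "ticket"), progressively freeze a growing subset of weights, re-randomize the rest, and keep a re-randomization only if it improves the loss. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def progressive_random_search(weights, loss_fn, rounds=5, freeze_frac=0.2):
    """Hypothetical 'progressive random search' sketch: freeze weights in
    growing batches and re-randomize the remainder, accepting a candidate
    only when it lowers the loss."""
    w = weights.copy()
    frozen = np.zeros(w.shape, dtype=bool)
    best_loss = loss_fn(w)
    for _ in range(rounds):
        # Freeze an additional fraction of the currently unfrozen weights.
        free_idx = np.flatnonzero(~frozen)
        if free_idx.size == 0:
            break
        n_freeze = max(1, int(freeze_frac * free_idx.size))
        frozen[rng.choice(free_idx, size=n_freeze, replace=False)] = True
        # Re-randomize the remaining free weights; keep only improvements.
        candidate = w.copy()
        candidate[~frozen] = rng.standard_normal(int((~frozen).sum()))
        cand_loss = loss_fn(candidate)
        if cand_loss < best_loss:
            w, best_loss = candidate, cand_loss
    return w, best_loss
```

Because candidates are only accepted on improvement, the returned loss is never worse than that of the starting weights.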
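The second-derivative optimizer idea and its relationship to the Taylor series can be illustrated with a one-dimensional Newton step, which is the textbook instance of using curvature: the second-order Taylor expansion \(f(x+d) \approx f(x) + f'(x)\,d + \tfrac{1}{2} f''(x)\,d^2\), minimized over \(d\), gives \(d = -f'(x)/f''(x)\) when \(f''(x) > 0\). This is a generic sketch, not the group's proposed optimizer:

```python
def newton_step(df, d2f, x, lr=0.1, eps=1e-8):
    """One Newton step in 1D, falling back to gradient descent
    when the local curvature is non-positive (locally concave/flat)."""
    curv = d2f(x)
    if curv <= eps:
        return x - lr * df(x)  # curvature uninformative: plain gradient step
    return x - df(x) / curv    # minimizer of the local quadratic model
```

On a quadratic such as \(f(x) = (x-3)^2\), the quadratic Taylor model is exact, so a single Newton step from any starting point lands on the minimum at \(x = 3\).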
Additional Discussed Papers
- Understanding Deep Learning Requires Rethinking Generalization
- Backprop Diffusion is Biologically Plausible
- Feature Visualization: How neural networks build up their understanding of images
- Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
- The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
- Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets
- Stanford Seminar: Can the brain do backprop?