keywords:
agent-based modeling
computational modeling
mathematical modeling
bayesian modeling
learning
language acquisition
The Lewis signaling game (LSG) and similar coordination
games have been used to model the emergence and
evolution of language. However, both Nash
equilibria and learning or evolutionary
dynamics often yield suboptimal signaling systems.
We present a sequential reinforcement
learning (SRL) model based on a novel sequential binary decision
process. SRL has low cognitive demands and a small parameter
count, and it exhibits lateral inhibition without additional
assumptions.
We prove convergence to an optimal signaling
system in all N-state, N-signal LSGs with arbitrary
state probabilities, and we further
explore the model's properties with numerical simulations.
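As context for the setting above, an N-state, N-signal LSG with urn-style reinforcement can be sketched as follows. This is a classic Roth-Erev baseline for illustration only, not the paper's SRL model; all names and parameters are assumptions.

```python
import random

def simulate_lsg(n=3, rounds=20000, seed=0):
    """Roth-Erev urn reinforcement in an n-state, n-signal Lewis
    signaling game (a standard baseline, not the SRL model)."""
    rng = random.Random(seed)
    # One urn of signal weights per state (sender) and one urn of
    # action weights per signal (receiver); all start uniform.
    sender = [[1.0] * n for _ in range(n)]
    receiver = [[1.0] * n for _ in range(n)]

    def draw(weights):
        return rng.choices(range(n), weights=weights)[0]

    for _ in range(rounds):
        state = rng.randrange(n)          # nature picks a state
        signal = draw(sender[state])      # sender picks a signal
        action = draw(receiver[signal])   # receiver picks an action
        if action == state:               # success reinforces both urns
            sender[state][signal] += 1.0
            receiver[signal][action] += 1.0
    return sender, receiver
```

Note that this baseline can lock into partial-pooling equilibria, which is exactly the suboptimality the abstract says its SRL model avoids.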
Next, we model a signaling game with
agents who both speak and hear, using a
single state of learning (instead of two, as is common).
Agents have a probability
distribution over meanings in a given context.
Speaking agents use this distribution to
choose a meaning and the SRL model to
choose a signal.
Hearing agents use Bayes' rule to combine their state of learning
with their meaning distribution to guess a meaning.
An agent's state of learning is reinforced
both by its own speaking and by its guesses when hearing.
Numerical simulations indicate that both agents
converge to the same optimal system
without external reinforcement,
as happens in language acquisition.
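The Bayesian hearing step described above can be sketched as a posterior update: the context distribution over meanings acts as a prior, and the agent's learned signal-given-meaning weights act as a likelihood. The array layout and function name below are illustrative assumptions, not the paper's formulation.

```python
def bayes_guess(prior, likelihood, signal):
    """Posterior over meanings after hearing `signal`:
    p(m | s) is proportional to prior[m] * likelihood[m][s].
    `likelihood[m][s]` stands in for the agent's learned state
    of signal use; names are illustrative."""
    scores = [prior[m] * likelihood[m][signal] for m in range(len(prior))]
    total = sum(scores)
    posterior = [s / total for s in scores]
    # Guess the most probable meaning under the posterior.
    guess = max(range(len(posterior)), key=posterior.__getitem__)
    return guess, posterior
```

With a context prior that favors meaning 0 but a likelihood that strongly associates the heard signal with meaning 1, the posterior picks meaning 1, showing how learning can override the context distribution.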