@omarsar0
Multi-agent AI systems are poor at communication.

The default approach in multi-agent RL today focuses almost entirely on task success rates: can agents coordinate, and did they solve the problem? The actual cost of communication is rarely measured or optimized. But in real-world systems, bandwidth, energy, and compute are finite; every message has a price.

This new research introduces three Communication Efficiency Metrics (CEMs) and a framework for learning protocols that are both effective and efficient. The authors find that communication inefficiency arises primarily from poorly designed optimization objectives rather than inherent information needs.

The three metrics:

- Information Entropy Efficiency Index (IEI): measures how compact messages are.
- Specialization Efficiency Index (SEI): captures whether agents develop distinct roles rather than sending redundant information.
- Topology Efficiency Index (TEI): tracks task success relative to communication frequency.

By augmenting the training loss functions with these metrics, they achieve dual improvements: CommNet's success rate increased while its topology efficiency also improved, and IC3Net likewise gained in both success rate and efficiency.

Counterintuitively, one-round communication with efficiency augmentation consistently outperformed two-round baseline configurations; more communication rounds degraded TEI significantly due to overhead.

The upshot: communication efficiency and task performance can improve simultaneously rather than trading off. The takeaway for AI devs is to build better objectives, not more messages, to unlock coordination.

(bookmark it)
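For intuition, the loss-augmentation idea can be sketched as a task loss plus an entropy-style penalty on the messages agents emit. This is a hypothetical illustration under my own assumptions, not the paper's exact formulation: the names `entropy_bits`, `augmented_loss`, and the weight `lam` are invented for the sketch, and the paper's IEI/SEI/TEI terms may be defined differently.

```python
import math

def entropy_bits(probs):
    """Shannon entropy (in bits) of one message's symbol distribution.

    A low-entropy message uses few effective symbols, i.e. it is compact;
    this stands in for an IEI-style compactness term (assumption, not the
    paper's definition).
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

def augmented_loss(task_loss, messages, lam=0.1):
    """Task loss plus an average entropy penalty over all agents' messages.

    `lam` trades off task success against message compactness; the paper
    augments the training objective with its CEMs in an analogous way,
    though the exact weighting scheme is not specified here.
    """
    penalty = sum(entropy_bits(m) for m in messages) / len(messages)
    return task_loss + lam * penalty

# A uniform 2-symbol message has 1 bit of entropy; a deterministic
# message has 0 bits, so it contributes no penalty.
uniform = [0.5, 0.5]
deterministic = [1.0]
loss = augmented_loss(task_loss=1.0, messages=[uniform, deterministic], lam=0.2)
```

With this shape of objective, gradient descent is pushed toward protocols that keep messages informative but cheap, which is the mechanism behind the "better objectives, not more messages" takeaway.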