LLM Ensemble Achieves Breakthrough in Content Categorization with Majority-Rules Approach
Published on November 1, 2025 at 05:00 AM
Researchers at RingCentral Inc. and Relevad Corporation have introduced an innovative ensemble framework for unstructured text categorization using large language models (LLMs). This framework, known as eLLM (ensemble large language model), addresses common weaknesses found in individual LLMs, such as inconsistency, hallucination, category inflation, and misclassification.
The eLLM approach integrates multiple models, leveraging diverse architectures, training paradigms, and knowledge bases. Testing on a human-annotated corpus of 8,660 samples labeled with the Interactive Advertising Bureau (IAB) hierarchical taxonomy showed that eLLM improves F1-score by up to 65% over the strongest single model. Furthermore, the ensemble approaches human-expert-level performance.
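The paper itself is not a code release, but the majority-rules idea can be illustrated with a short sketch. The snippet below is a hypothetical example rather than the authors' implementation: each model returns a candidate IAB category for a piece of text, and the ensemble keeps the label that a majority of models agrees on, abstaining when no consensus emerges. The function names, the stub models, and the tie-handling rule are assumptions for illustration.

```python
from collections import Counter
from typing import Callable, List, Optional

# Hypothetical sketch of majority-rules aggregation over several LLM classifiers.
# Each classifier maps a text to an IAB category label (a string); the names and
# the consensus threshold below are illustrative assumptions, not the authors'
# exact aggregation criteria.

def ellm_majority_vote(
    text: str,
    classifiers: List[Callable[[str], str]],
    min_agreement: float = 0.5,
) -> Optional[str]:
    """Return the category chosen by most models, or None if no consensus."""
    votes = [clf(text) for clf in classifiers]       # one label per model
    label, count = Counter(votes).most_common(1)[0]  # plurality winner
    if count / len(votes) > min_agreement:           # require a strict majority
        return label
    return None                                      # abstain / defer to a human


# Toy usage with stub "models" standing in for real LLM calls.
if __name__ == "__main__":
    stub_models = [
        lambda t: "Technology & Computing",
        lambda t: "Technology & Computing",
        lambda t: "Business and Finance",
    ]
    print(ellm_majority_vote("New GPU benchmarks released today.", stub_models))
    # -> "Technology & Computing" (2 of 3 models agree)
```

Abstaining when the models disagree is one simple way an ensemble can flag hard cases for human review instead of forcing a label; whether eLLM does this is not stated here.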
The team formalized the ensemble process through a mathematical model of collective decision-making and established principled aggregation criteria. The results indicate that eLLM improves both robustness and accuracy, offering a scalable and reliable solution for taxonomy-based classification. This advancement may significantly reduce dependence on human expert labeling. The researchers suggest that this work builds toward a new theory of collaborative AI, where orchestrated ensembles transform unstructured chaos into ordered, expert-level insight.
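For readers curious about what a principled aggregation criterion of this kind can look like, the expression below sketches a generic weighted plurality rule; the weights and the indicator formulation are illustrative assumptions, not the paper's exact formalization.

```latex
% Generic majority/plurality aggregation over M models (illustrative only):
% each model f_m assigns text x a category from the taxonomy C, and the
% ensemble returns the most-voted category, optionally weighting each model
% by a reliability weight w_m.
\[
\hat{y}(x) \;=\; \arg\max_{c \in \mathcal{C}} \sum_{m=1}^{M} w_m \,\mathbf{1}\!\left[ f_m(x) = c \right],
\qquad w_m \ge 0,\quad \sum_{m=1}^{M} w_m = 1 .
\]
```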