Some cortical neurons are modular, cleanly encoding a single interpretable variable, while others appear to be best understood as a confusing mixture of many variables. In extreme cases, the same neuron can flip-flop from modular to mixed-selective in the course of a day. This inconsistency is a puzzle. Toward resolving it, the field has constructed normative theories that seek to explain why the brain chooses to break apart the world into pieces in this seemingly haphazard way. Previous theories have been largely algorithmic, showing how mixed selectivity is necessary for computing nonlinear functions. These theories are clearly useful, but we argue there is a second, underappreciated predictor: energy efficiency. We construct a simple efficient coding theory for multiple variables and derive novel conditions that govern whether the optimal code is modular or mixed-selective. In particular, if two variables are range-independent (meaning they can be correlated, but all combinations of outcomes are still possible), a modular code is optimal. We find that these conditions match the patterns observed in neural data. In developing this theory we also make technical contributions: in particular, we show that a family of efficient coding problems can be rewritten as convex (a property others might find useful), and we use this to derive the first tight identifiability criterion for semi-nonnegative matrix factorisation, a classic machine learning method. In sum, we present ideas that we have found useful for thinking through how single neurons in brains or artificial networks behave, allowing us to squeeze more conceptual juice from the same recordings than if we had ignored single-neuron properties.
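To make the range-independence condition concrete, the following is a minimal illustrative sketch (not taken from the paper, and assuming discrete-valued variables): two variables are range-independent when every combination of an observed value of one with an observed value of the other also occurs jointly, i.e. the joint support is the full product of the marginal supports, even if the variables are statistically correlated. The function name `is_range_independent` and the toy data are hypothetical.

```python
import numpy as np

def is_range_independent(x, y):
    """Return True if the empirical joint support of (x, y) equals the
    product of the marginal supports, i.e. all combinations of observed
    x-values and y-values actually co-occur."""
    joint = set(zip(x, y))
    product = {(a, b) for a in set(x) for b in set(y)}
    return joint == product

rng = np.random.default_rng(0)

# Correlated but range-independent: y usually copies x, but every
# (x, y) combination still occurs somewhere in the data.
x = rng.integers(0, 2, size=1000)
flip = rng.random(1000) < 0.3
y = np.where(flip, 1 - x, x)
print(is_range_independent(x, y))      # True

# Not range-independent: y is a deterministic copy of x, so pairs such
# as (x=0, y=1) can never occur.
y_det = x.copy()
print(is_range_independent(x, y_det))  # False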