Cellular Au-Tonnetz: A Unified Audio-Visual MIDI Generator

1 Jan 2024 — Tom Didiot-Cook
Tags: Tonnetz, cellular automata, IoT, audio-visual, synesthesia, MIDI, ESP32

Summary

Presents a novel tool for music creation that merges sound and light in a single system. Combines the Tonnetz, cellular automata, and embedded electronics (ESP32) to give users an intuitive platform for exploring musical harmony and synesthetic experience. Sound and light are generated simultaneously from the same underlying state, rather than the light merely reacting to the sound.

Key Contributions

  • Unified audio-visual generation (not sound-reactive — sound and light are generated from the same source)
  • Tonnetz as the underlying harmonic space for both MIDI and LED patterns
  • Cellular automata for evolving musical pattern generation
  • IoT architecture enabling multi-unit installations and web-based control
  • Democratises music creation — complex musical concepts made accessible through multi-sensory interface

Methods

  • ESP32-S3 embedded platform with addressable LEDs
  • Tonnetz geometry mapped to physical hexagonal layout
  • Cellular automata rules generate evolving states on the Tonnetz grid
  • MIDI output for sound, LED patterns for visual synchronisation
  • Web control panel (LEMP stack) for multi-unit management
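The pipeline above can be sketched in code. The note does not give the actual mapping or automaton rules, so the following assumes the standard Tonnetz axes (perfect fifths on one axis, major thirds on the other) and an illustrative hex-grid birth/survival rule; the cell state is the shared source from which both the MIDI note and the LED pattern would be derived.

```cpp
#include <array>
#include <cstdint>

// Hypothetical sketch: standard Tonnetz axes and a simple outer-totalistic
// CA rule, chosen for illustration; the paper's actual rules may differ.

constexpr int W = 5, H = 5;  // small Tonnetz patch in axial coordinates (q, r)

// Pitch class from Tonnetz coordinates: +7 semitones per step along q
// (perfect fifths), +4 per step along r (major thirds), rooted at C = 0.
int pitchClass(int q, int r) {
    return ((q * 7 + r * 4) % 12 + 12) % 12;
}

// MIDI note in one playable octave (middle C = 60); a live cell at (q, r)
// would emit this note and light the matching LED from the same state.
int midiNote(int q, int r) { return 60 + pitchClass(q, r); }

using Grid = std::array<std::array<uint8_t, W>, H>;

// The six hex neighbours in axial coordinates.
constexpr int DQ[6] = {+1, +1, 0, -1, -1, 0};
constexpr int DR[6] = { 0, -1, -1, 0, +1, +1};

// One CA generation: a dead cell is born with exactly 2 live neighbours,
// a live cell survives with 1 or 2 (illustrative rule, not the paper's).
Grid step(const Grid& g) {
    Grid next{};
    for (int r = 0; r < H; ++r)
        for (int q = 0; q < W; ++q) {
            int n = 0;
            for (int d = 0; d < 6; ++d) {
                int nq = q + DQ[d], nr = r + DR[d];
                if (nq >= 0 && nq < W && nr >= 0 && nr < H) n += g[nr][nq];
            }
            next[r][q] = g[r][q] ? (n == 1 || n == 2) : (n == 2);
        }
    return next;
}
```

Because pitch and light both derive from the cell grid, the audio and visual outputs stay structurally coupled by construction, which is the "unified, not reactive" claim in the contributions list.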

Connections to Active Projects

AutoTonnetz: This IS the AutoTonnetz paper. Foundational reference for all future development.

Organised Sound: The perception study builds on this system — raters evaluate the audio-visual outputs this tool generates.

Swarm Robotics: Multi-unit IoT architecture is a primitive form of swarm behaviour — units can potentially influence each other’s state. Future work could explore emergent behaviour across networked Tonnetz units.

Technical Details

  • Hardware: FireBeetle ESP32-S3, WS2812B addressable LEDs
  • Firmware: PlatformIO/Arduino framework
  • Web: nginx + PHP + MariaDB + JavaScript control panel
  • Communication: MQTT for IoT messaging
  • Multiple hardware revisions: TZ5, TZ7, MegaHex
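The WS2812B LEDs are driven as a single serial chain, so the firmware needs a mapping from grid coordinates to a position on that chain. The note does not describe the TZ5/TZ7/MegaHex wiring, so this sketch assumes a common serpentine (boustrophedon) row layout purely as an illustration.

```cpp
// Hypothetical serpentine mapping for a W-column hex panel wired as one
// WS2812B chain: even rows run left-to-right, odd rows right-to-left.
// The actual board revisions may be wired differently.
constexpr int W = 5;

int ledIndex(int q, int r) {
    return (r % 2 == 0) ? r * W + q            // even row: left to right
                        : r * W + (W - 1 - q); // odd row: right to left
}
```

Serpentine wiring keeps the physical data line short between row ends, which is why it is the usual choice for matrix layouts of addressable LEDs.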