
Arduino x LLM — Voice-Controlled IoT
Bridging physical hardware and AI — speak to your Arduino and it intelligently responds via a dual LLM pipeline for voice transcription and intent processing.
Timeline: January 2026
Role: Full Stack & IoT Developer
Status: Completed
Overview
This project bridges the gap between physical hardware and cutting-edge AI. Speak to your Arduino, and it intelligently responds by performing actions or displaying information — all powered by Large Language Models.
How It Works
- Voice Command — Press and hold a button on the Arduino to start recording your audio command
- Audio to Text — Release the button, and the recorded audio is sent to an Audio LLM for transcription
- Intelligent Processing — The transcribed text goes to a "Thinking LLM", which interprets the command and determines the action
- Action/Display — The LLM's output is sent back to the Arduino, which executes the command (buzzer, lights, LCD display)
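The last two steps can be sketched as a small host-side dispatcher. This is a minimal illustration, not the project's actual code: the JSON reply shape, the single-byte action codes, and the `|`-delimited serial frame are all assumptions about how the Thinking LLM's output might be relayed to the Arduino.

```python
import json

# Hypothetical command codes the Arduino firmware understands over serial;
# these byte values are illustrative, not taken from the project.
ACTIONS = {
    "buzzer": b"B",
    "light_on": b"L1",
    "light_off": b"L0",
}

def parse_intent(llm_reply: str) -> tuple[str, str]:
    """Parse the Thinking LLM's reply, assumed to be JSON like
    {"action": "buzzer", "display": "Done!"}."""
    data = json.loads(llm_reply)
    return data.get("action", ""), data.get("display", "")

def to_serial_frame(action: str, display: str) -> bytes:
    """Build the frame sent back to the Arduino:
    action code, '|' separator, LCD text, newline terminator."""
    code = ACTIONS.get(action, b"?")  # '?' = unknown action, firmware ignores it
    return code + b"|" + display.encode("ascii", "replace") + b"\n"

def handle_reply(llm_reply: str) -> bytes:
    """Full step: LLM reply in, serial frame out."""
    action, display = parse_intent(llm_reply)
    return to_serial_frame(action, display)
```

For example, `handle_reply('{"action": "buzzer", "display": "Beep!"}')` yields the frame `b"B|Beep!\n"`, which a library such as pySerial could write to the board.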
Features
- Voice-Controlled Hardware — Natural language commands to control physical devices
- Dual LLM Pipeline — Audio LLM for transcription + Thinking LLM for intent processing
- Real-time Response — Fast processing pipeline from voice to hardware action
- Extensible Actions — Buzzer, lights, LCD display, and more
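One way the "Extensible Actions" feature could be structured is a registry that maps intent names to handler functions, so new hardware actions plug in without touching the dispatch loop. The decorator pattern, handler names, and command strings below are assumptions for illustration, not the project's implementation.

```python
from typing import Callable, Dict

# Registry mapping intent names to handlers that produce a command string
# for the Arduino. Adding an action is one decorated function, nothing more.
HANDLERS: Dict[str, Callable[[str], str]] = {}

def action(name: str):
    """Register a handler under the given intent name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        HANDLERS[name] = fn
        return fn
    return register

@action("buzzer")
def buzz(arg: str) -> str:
    # Hypothetical protocol: buzz for a duration in ms, defaulting to 200.
    return "BUZZ " + (arg or "200")

@action("lcd")
def lcd(arg: str) -> str:
    # A 16x2 character LCD shows 16 columns per row, so truncate the text.
    return "LCD " + arg[:16]

def dispatch(intent: str, arg: str = "") -> str:
    """Look up the intent and run its handler; unknown intents are a no-op."""
    handler = HANDLERS.get(intent)
    return handler(arg) if handler else "NOP"
```

With this shape, supporting a new device (say, a servo) is a single new `@action("servo")` function; the voice-to-intent pipeline stays unchanged.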
