Arduino x LLM — Voice-Controlled IoT


Bridging physical hardware and AI — speak to your Arduino and it intelligently responds via a dual LLM pipeline for voice transcription and intent processing.

Timeline

January 2026

Role

Full Stack & IoT Developer

Status
Completed

Technology Stack

Arduino
Python
Node.js
JavaScript
LLM


Overview

This project bridges the gap between physical hardware and cutting-edge AI. Speak to your Arduino, and it intelligently responds by performing actions or displaying information — all powered by Large Language Models.

How It Works

  1. Voice Command — Press and hold a button on the Arduino, and it starts recording your audio command
  2. Audio to Text — Release the button, and the recorded audio is sent to an Audio LLM for transcription
  3. Intelligent Processing — The transcribed text goes to a "Thinking LLM" which processes the command and determines the action
  4. Action/Display — The LLM's output is sent back to the Arduino, which executes the command (buzzer, lights, LCD display)
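The four steps above can be sketched as a server-side pipeline. This is a minimal illustration, not the project's actual code: `transcribe` and `think` are injected callables standing in for the Audio LLM and Thinking LLM (the real model APIs are not named here, so the stubs are assumptions), and the JSON action format is hypothetical.

```python
import json

def run_pipeline(audio_bytes, transcribe, think):
    """Dual LLM pipeline: raw audio -> transcript -> structured action.

    `transcribe` and `think` are passed in so any Audio LLM / Thinking LLM
    backend can be plugged in; here they are plain callables.
    """
    text = transcribe(audio_bytes)   # step 2: Audio LLM transcribes the recording
    raw = think(text)                # step 3: Thinking LLM returns a JSON action
    return json.loads(raw)           # step 4: parsed action, ready for the Arduino

# Hypothetical stubs standing in for real LLM calls
def fake_transcribe(audio):
    return "turn on the red light"

def fake_think(text):
    return '{"action": "light", "target": "red", "state": "on"}'

action = run_pipeline(b"<recorded audio>", fake_transcribe, fake_think)
print(action)  # {'action': 'light', 'target': 'red', 'state': 'on'}
```

Keeping the two models behind plain function parameters means either stage can be swapped (local Whisper-style model, hosted API, etc.) without touching the pipeline itself.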

Features

  • Voice-Controlled Hardware — Natural language commands to control physical devices
  • Dual LLM Pipeline — Audio LLM for transcription + Thinking LLM for intent processing
  • Real-time Response — Low-latency path from spoken command to hardware action
  • Extensible Actions — Buzzer, lights, LCD display, and more
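One way the action set stays extensible is a small dispatcher that turns the Thinking LLM's structured output into serial commands for the Arduino. The sketch below is an assumption for illustration: the opcodes (`BZ:`, `LT:`, `LCD:`) are invented, not the project's real wire format.

```python
def encode_command(action: dict) -> bytes:
    """Map a parsed LLM action to a newline-terminated serial command.

    Supporting a new hardware capability only requires a new handler entry.
    The opcodes below are hypothetical, not the project's actual protocol.
    """
    handlers = {
        "buzzer": lambda a: f"BZ:{a['state']}",
        "light":  lambda a: f"LT:{a.get('target', 'all')}:{a['state']}",
        "lcd":    lambda a: f"LCD:{a['text']}",
    }
    handler = handlers.get(action.get("action"))
    if handler is None:
        raise ValueError(f"unsupported action: {action!r}")
    return (handler(action) + "\n").encode()

print(encode_command({"action": "buzzer", "state": "on"}))  # b'BZ:on\n'
print(encode_command({"action": "lcd", "text": "Hello"}))   # b'LCD:Hello\n'
```

On the Arduino side, a matching `switch` over the opcode prefix would drive the buzzer, lights, or LCD.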

Designed & Developed by Shivam Kaushal
© 2026. All rights reserved.