Fine-tuning LLM with Google Colab


Step-by-step walkthrough of fine-tuning Tiny-LLaMA inside Google Colab — covering tokenization, training setup, model export, and local deployment.

Timeline

November 2025 - December 2025

Role

AI/ML Engineer

Status
Completed

Technology Stack

Python
TensorFlow
Google Colab
Hugging Face
PyTorch


Overview

A comprehensive project demonstrating how to fine-tune the Tiny-LLaMA model in Google Colab, making LLM customization accessible to anyone with a free GPU runtime. Training uses an HTML-to-JSON conversion dataset, and each step of the pipeline is explained in detail.
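The HTML-to-JSON task can be framed as a prompt/completion dataset. A minimal sketch of preparing such records in JSONL form, the format most Hugging Face fine-tuning tools consume (the field names and example pair here are illustrative assumptions, not the project's actual data):

```python
import json

# Illustrative HTML -> JSON training pair (assumed format, not the project's real data).
examples = [
    {
        "html": "<ul><li>Python</li><li>PyTorch</li></ul>",
        "json": {"items": ["Python", "PyTorch"]},
    },
]

def to_record(example):
    """Wrap one pair as an instruction-style prompt/completion record."""
    return {
        "prompt": "Convert the following HTML to JSON:\n" + example["html"],
        "completion": json.dumps(example["json"]),
    }

records = [to_record(e) for e in examples]

# Write one JSON object per line (JSONL), which dataset loaders ingest directly.
with open("train.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```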

What's Covered

  • What each import and library does
  • How tokenization and training setup works
  • LoRA (Low-Rank Adaptation) configuration for parameter-efficient fine-tuning
  • How to fine-tune and export your model
  • How to download and run it locally

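On the LoRA point above: LoRA freezes the pretrained weight matrix W and learns a low-rank update ΔW = B·A scaled by alpha/r, which is what `peft.LoraConfig` parameters like `r` and `lora_alpha` typically control. A toy pure-Python sketch of the forward pass with a rank-1 adapter (toy numbers, not the project's code):

```python
# Toy LoRA forward pass: y = (W + (alpha / r) * B @ A) @ x, with W frozen.
# Real fine-tuning applies this via peft to GPU tensors; this is just the math.

def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight (2x2)
B = [[0.5], [0.0]]             # trainable down-projection (2x1)
A = [[1.0, 1.0]]               # trainable up-projection (1x2)
r, alpha = 1, 2.0              # rank and scaling, as in LoraConfig(r=..., lora_alpha=...)

delta = matmul(B, A)           # rank-1 update, shape (2x2)
scale = alpha / r
W_eff = [[w + scale * d for w, d in zip(w_row, d_row)]
         for w_row, d_row in zip(W, delta)]

x = [1.0, 2.0]
y = matvec(W_eff, x)           # -> [4.0, 2.0]
```

Only B and A are updated during training, so the number of trainable parameters scales with the rank r rather than with the full weight matrix.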
Tech Stack

  • Language: Python
  • Platform: Google Colab
  • Model: Tiny-LLaMA 1.1B
  • Libraries: Hugging Face Transformers, PyTorch, PEFT, TRL, bitsandbytes

Designed & developed by Shivam Kaushal
© 2026. All rights reserved.