How I Built a Terminal Based AI Coding Agent (Like a Mini Cursor)


June 4, 2025

Tags: ai, python, chaiaurcode, agentic-ai, cursor-ai, mini-cursor

"What if your terminal could think like a developer?"
That simple question led me to build my own AI-powered coding assistant: a terminal-based agent capable of generating entire projects, writing full-stack code, and responding intelligently to follow-up prompts. Think of it as my mini version of Cursor, but inside your command line.

In this post, I’ll take you through the journey of building this agent: the motivation, the technical architecture, what I learned, and how you can extend it for your own needs.


πŸš€ Motivation: Why I Built This

As developers, we spend a lot of time switching between code editors, terminal windows, and documentation. I wanted to streamline this by building a CLI tool that feels like pair-programming with an intelligent assistant.

I was deeply inspired by modern dev tools like Cursor and by mentors like Piyush Garg and Hitesh Choudhary, who emphasize building tools, not just learning about them.

So I asked myself:
Can I build an AI agent that builds software, directly from the terminal?


βš™οΈ What It Does: Key Features


🧠 How It Works: Behind the Scenes

🧩 1. Natural Language Understanding

Using the OpenAI API (or open models like Mistral and LLaMA served via Ollama), the user’s prompt is parsed with structured system instructions. Example system prompt:

You are a software engineer that builds full-stack apps. Given user instructions, generate project folders, write frontend and backend code, and execute commands when needed.
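With the OpenAI Node SDK, that system prompt gets combined with the project context and the user's instruction into a single `messages` array. Here's a minimal sketch of how I think about that assembly (the `buildMessages` helper name is mine, not a library API):

```javascript
// Sketch: assemble the chat messages for one agent turn.
// `buildMessages` is a hypothetical helper, not an SDK function.
const SYSTEM_PROMPT =
  "You are a software engineer that builds full-stack apps. " +
  "Given user instructions, generate project folders, write frontend " +
  "and backend code, and execute commands when needed.";

function buildMessages(userInstruction, projectContext = "") {
  const messages = [{ role: "system", content: SYSTEM_PROMPT }];
  if (projectContext) {
    // Current file tree / recent file contents, attached on every turn.
    messages.push({
      role: "system",
      content: `Project context:\n${projectContext}`,
    });
  }
  messages.push({ role: "user", content: userInstruction });
  return messages;
}
```

The resulting array is what you'd pass as `messages` to `client.chat.completions.create({ model, messages })`.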

πŸ—‚οΈ 2. Project Context Parsing

The agent reads the existing folder structure and file content to understand the current state. This context is used in every follow-up prompt.

🧾 3. File Writing Logic

The agent determines which files to create, which to update, and where they belong in the project tree. It then uses structured `fs` operations in Node.js to write code into actual `.js`, `.ts`, `.jsx`, `.json`, and `.env` files.

πŸ§ͺ 4. Command Execution

A shell wrapper executes commands like:

`child_process.exec('npm install')`

Outputs are captured and logged back to the user, just like a real CLI experience.


πŸ–ΌοΈ Sample Interaction

```bash
> initialize react + express project
βœ… Project scaffold created

> add login page with email/password
🧠 Context parsed
πŸ“ Writing Login.jsx, AuthRoutes.js, updating App.jsx...

> install dependencies
πŸ“¦ Running: npm install react-router-dom express bcrypt

> now add forgot password flow
πŸ” Updating AuthRoutes.js and adding ForgotPassword.jsx
```

⚠️ Challenges & Fixes

πŸ”„ Context Management

Problem: The model forgets context or overwrites unrelated files.
Fix: Used a snapshot of the file tree plus a diff, and attached recent edits to every prompt.

πŸ› οΈ Code Injection Issues

Problem: The LLM inserted duplicate functions or broke existing code.
Fix: Added helper functions for safe append/replace, and file-specific constraints.
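One such helper can be as simple as an idempotent append that refuses to insert a snippet the file already contains. A sketch of the idea, not the exact code:

```javascript
// Append a code snippet only if the file doesn't already contain it,
// which stops the model from duplicating functions on repeated runs.
function safeAppend(source, snippet) {
  if (source.includes(snippet.trim())) return source;
  return source.trimEnd() + "\n\n" + snippet.trim() + "\n";
}
```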

🐒 Performance Bottlenecks

Problem: Long command chains slowed the CLI.
Fix: Debounced command execution and showed stepwise loading.
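Stepwise execution comes down to queueing work so only one command runs at a time, with room to print progress between steps. A minimal sketch using a promise chain (my own simplification of the idea):

```javascript
// Chain tasks so each starts only after the previous one settles,
// giving the CLI a chance to print stepwise progress between them.
function createQueue() {
  let tail = Promise.resolve();
  return function enqueue(task) {
    tail = tail.then(task, task);
    return tail;
  };
}
```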


πŸ“š Learnings from Prompt Engineering

I went deep into prompt formats and methods to get consistent results.

βœ… System + Task + Context Prompting

Combined system message + file tree + task message.

🧠 Few-Shot Prompting

Gave the model examples like:

```text
// Instruction:
Add a new route to /register in Express

// Response:
Create file routes/register.js, write:
router.post('/register', ...)
```
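In message form, a few-shot pair like that is just a prior user/assistant exchange prepended to the real request so the model imitates the demonstrated format. A sketch (the helper name is mine; the example wording is from above):

```javascript
// Prepend a worked example as prior chat turns; the real request follows,
// so the model copies the demonstrated response format.
function withFewShot(messages) {
  const examples = [
    { role: "user", content: "Add a new route to /register in Express" },
    {
      role: "assistant",
      content:
        "Create file routes/register.js, write:\nrouter.post('/register', ...)",
    },
  ];
  return [...examples, ...messages];
}
```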

🧡 Chain of Thought Prompting

Got better results by encouraging the model to "think" step by step:

First, check if user model exists. Then add the controller. Finally, update routes.

🧩 How Others Can Use or Extend It


πŸ“… What’s Next?


β˜• Final Thoughts

Building this mini Cursor taught me something simple but powerful:

πŸ’‘ Prompting is the new scripting.

You don’t need 1,000 lines of shell scripts anymore. Just an intelligent agent that understands what you want and writes what you mean.

It’s not just about automating tasks; it’s about building with AI, not just using it.