// field notes & honest takes

Things I've Been Learning About AI

I'm not an ML researcher. I'm a robotics engineer who started using AI tools heavily in my grad research — and I have a lot of opinions about what works, what doesn't, and why. This is me writing that down.

Sai Jagadeesh Muralikrishnan
Robotics Eng · College Park, MD
Featured
AI Tools

Six Months of Using AI Every Day in My Research — Here's What I Actually Think

Somewhere around mid-2025 I stopped treating AI tools as a curiosity and started treating them as part of my actual workflow. Not because I was told to, not because it was trendy — because I had a deadline on a navigation stack, a half-broken simulation environment, and not enough hours in the day. I needed help, and I got it from bots. That felt weird to admit at first. Now it doesn't.

"The moment AI stopped feeling like a toy was when it helped me find a race condition in a ROS2 node at 1 AM that I'd been staring at for three days."

What Even Are These "AI Bots"?

Before I get into my personal takes, let me be real about what these tools actually are — because the marketing is genuinely confusing. At the core, the bots I've been using are large language models (LLMs) — neural networks trained on massive text datasets to predict and generate text. That's it. They don't "think." They don't "understand" your problem the way a colleague does. But they're extraordinarily good at pattern-matching against a huge body of human knowledge, and that turns out to be insanely useful.

There are two flavors I kept reaching for: general chat assistants (ChatGPT, Claude, Gemini) and code-native tools (GitHub Copilot, Cursor, Codeium). They overlap more than their makers want you to think, but the context in which they're embedded changes everything about how useful they are day-to-day.

The Tools I Actually Tested

| Tool | What it is | My use case | Verdict |
|---|---|---|---|
| Claude (Anthropic) | Chat LLM, long context window | Paper summaries, code review, explaining math-heavy papers | Daily driver |
| ChatGPT / o1 | Chat LLM + reasoning model | Brainstorming system architecture, debugging logic errors | Strong second |
| GitHub Copilot | Inline code completion + chat | ROS2 boilerplate, writing launch files, quick Python scripts | Always open |
| Perplexity AI | Search engine + LLM synthesis | Literature search, finding papers on specific sensor setups | Great for research |
| Cursor | AI-native code editor (VSCode fork) | Refactoring entire ROS2 packages, multi-file edits | Situational |
| Gemini Advanced | Google's chat LLM | Tried it for a month; Google Docs integration is the standout | Niche use |

Where AI Actually Helped Me — Honestly

Research comprehension: Reading three papers a day is part of grad life. Pasting the abstract and methods into Claude and asking "what assumptions is this making that would break in a real outdoor environment?" cut my paper-reading time roughly in half. It's not that I skip reading — I still read — but I arrive at the important questions faster.

Writing first drafts: I hate staring at a blank Google Doc. I don't hate editing. So now I describe what I want to say in bullet points, paste it in, and get a rough draft I can tear apart. It sounds lazy. It's actually just more efficient than the blank-page paralysis I used to sit in.

Debugging ROS2: This one surprised me the most. ROS2 has a lot of footguns — lifecycle nodes behaving weirdly, QoS mismatch errors that give you no useful stack trace, TF2 transforms going silent for no apparent reason. Describing the symptom to an LLM and asking it to reason through possible causes catches maybe 40% of bugs I would have otherwise spent hours on. Not 100%. But 40% is enormous when you're time-limited.
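The QoS mismatch case is a good example of why "describe the symptom" works: the failure is silent by design. ROS2 inherits DDS's requested-vs-offered matching rule, which can be sketched in plain Python. This is a conceptual sketch, not the rclpy API — the constant names and `endpoints_match` function are mine:

```python
# Sketch of the DDS "requested vs. offered" QoS matching rule that ROS2
# inherits. A subscription only matches a publisher whose offered policy is
# at least as strong as what the subscription requests; otherwise the two
# silently fail to connect -- no exception, no stack trace, no messages.
# Names below are illustrative, not the rclpy enums.

RELIABILITY_RANK = {"BEST_EFFORT": 0, "RELIABLE": 1}
DURABILITY_RANK = {"VOLATILE": 0, "TRANSIENT_LOCAL": 1}

def endpoints_match(offered: dict, requested: dict) -> bool:
    """True if a publisher's offered QoS satisfies a subscription's request."""
    return (
        RELIABILITY_RANK[offered["reliability"]]
        >= RELIABILITY_RANK[requested["reliability"]]
        and DURABILITY_RANK[offered["durability"]]
        >= DURABILITY_RANK[requested["durability"]]
    )

# The classic footgun: sensor drivers often publish BEST_EFFORT, while a
# default-profile subscription requests RELIABLE -- so nothing ever arrives.
sensor_pub = {"reliability": "BEST_EFFORT", "durability": "VOLATILE"}
default_sub = {"reliability": "RELIABLE", "durability": "VOLATILE"}
print(endpoints_match(sensor_pub, default_sub))  # False: messages never flow
```

In practice, `ros2 topic info -v <topic>` prints each endpoint's QoS profile, which is usually the fastest way to confirm a mismatch like this.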

Generating boilerplate: Nobody enjoys writing a ROS2 launch file from scratch for the fifteenth time. Copilot and Claude handle that. I focus on the parts that actually require my specific domain knowledge.
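For concreteness, this is the shape of launch file I mean — essentially a config fragment. The package, executable, and parameter names here are placeholders, not a real project:

```python
# Minimal ROS2 Python launch file of the kind I now let the tools draft.
# Package, executable, and parameter names are placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package="my_robot_bringup",    # placeholder package name
            executable="nav_node",         # placeholder executable
            name="nav_node",
            output="screen",
            parameters=[{"use_sim_time": True}],
        ),
    ])
```

The tools reliably get this skeleton right; what they can't know is which parameters my specific robot needs, which is exactly the part I keep for myself.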

The tools don't replace knowing what you're doing. They massively reduce the cost of the parts that don't require knowing what you're doing.

Where They Fall Short (And This Part Matters)

Anything cutting-edge in your specific domain: If I'm working on something that hasn't been written about extensively — a specific sensor configuration, a niche ROS2 package — the model hallucinates confidently. The more niche your question, the more you have to verify everything it says.

System-level debugging: When a bug lives in the interaction between hardware, the OS scheduler, and your ROS2 node — the model can help you think through it, but it's not going to catch it. That still needs you, staring at htop and dmesg output.

Long-term project coherence: An LLM session is stateless unless you're on a plan with memory. It doesn't know yesterday's architectural decision. I've gotten contradictory advice across sessions because I didn't re-establish enough context. Workflow discipline problem, not a tool problem — but it's real.

So What Are These Actually Useful For?

The framing I've landed on: AI tools are a force multiplier for competent people. They make a good engineer faster. They do not make a confused engineer clear. If you don't understand your problem well enough to ask a precise question, the model will confidently send you in the wrong direction.

Used well, they let you spend more mental energy on the actually hard parts of research — the insight, the experimental design, the interpretation — and less on the mechanical parts. That's a good trade. That's why I kept using them.

I'll keep writing here as I try new things, hit new limitations, and change my mind about tools I was wrong about. If any of this resonates, reach out — I'm always up to compare notes.

More Posts
Coding Tools

Copilot vs Cursor vs Plain Claude: Which One I Actually Reach For

I've run all three in my daily workflow simultaneously for two months. The winner isn't obvious — it depends heavily on what you're doing. Here's my honest breakdown.

Research

Using Perplexity for Literature Reviews — Better Than Google Scholar?

Perplexity became my first stop for new paper searches. It's not perfect, but it changed how I explore a new research area in week one of a project.

AI + Robotics

Asking an LLM to Help Debug My ROS2 Nav Stack — Results Were Mixed

Three real debugging sessions: one where AI nailed it, one where it wasted my time, and one where it got me 70% of the way and I had to take it from there.

Hot Take

AI Writing Tools Didn't Make Me a Worse Writer — They Made Me Faster

The concern I hear most from peers is that using AI to draft text atrophies your writing. I think that's backwards. Here's why I disagree, and what I actually do to stay sharp.

Research

How I Use Claude to Break Down Dense ML Papers in Under 20 Minutes

A repeatable prompt structure I developed for extracting key assumptions, experimental gaps, and relevance to my own work from any paper — fast.

Workflow

My Actual AI Toolstack Right Now — March 2026 Edition

What I'm using, what I dropped, what I picked up last month, and why I think this stack works for someone deep in robotics research.

Want to swap notes?

I'm always interested in talking robotics, AI tools, and what people are actually building.
