Tools like GitHub Copilot are great for generating small snippets of code, but Sean never relies on them to write large blocks. LLMs can also be handy for syntax suggestions or for covering more ground quickly while doing research.
The key is to know exactly what you’re expecting before asking an LLM to write a snippet for you. Sean says he doesn’t want the computer to make decisions for him—he just wants AI to type out what he’s already devised.
Sean says he uses AI chats a lot, but never to write code from scratch. Rather, his objective is to validate his approach after he’s done writing his solution.
LLMs can be especially useful when you're outside your comfort zone. They can help you figure out whether the code you wrote in an unfamiliar language is idiomatic, a check you wouldn't need in an environment you already know well.
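For example, suppose you've spent years writing Java and are now working in Python (the language and the snippet here are just an illustration, not something from Sean). An LLM review might nudge an index-based loop toward a comprehension. A minimal sketch of the kind of before-and-after it could suggest:

```python
# Hypothetical example: the same task written with non-idiomatic habits,
# then the idiomatic Python version an LLM review might suggest.

# Non-idiomatic: index-based loop and manual accumulation
def squares_of_evens(numbers):
    result = []
    for i in range(len(numbers)):
        if numbers[i] % 2 == 0:
            result.append(numbers[i] * numbers[i])
    return result

# Idiomatic: a list comprehension expresses the same thing more directly
def squares_of_evens_idiomatic(numbers):
    return [n * n for n in numbers if n % 2 == 0]

print(squares_of_evens([1, 2, 3, 4]))            # [4, 16]
print(squares_of_evens_idiomatic([1, 2, 3, 4]))  # [4, 16]
```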
Sean doesn’t turn to LLMs when he’s trying to fix a bug—at least not as his first choice. The reason? To fix a bug, you need to understand the system, and LLMs can’t do that.
However, once he’s exhausted all his usual methods, he’ll package everything he knows about the bug and feed it to an LLM to try his luck. In his experience, the AI either hits the jackpot on the first shot or never does.
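Sean doesn't spell out a format for that package, but a rough sketch might gather the symptom, logs, suspect code, and everything already tried into a single prompt. All of the names and details below are hypothetical:

```python
# Hypothetical sketch of packaging a bug for an LLM; every detail is made up.
bug = {
    "symptom": "POST /orders intermittently returns 500 under load",
    "frequency": "roughly 1 in 200 requests during peak traffic",
    "logs": "TimeoutError: connection pool exhausted after 30s",
    "suspect_code": """
def place_order(session, order):
    conn = session.pool.acquire()          # never released on error?
    conn.execute("INSERT INTO orders ...")
    session.pool.release(conn)
""",
    "already_tried": [
        "increased the connection pool size",
        "reproduced locally with 100 concurrent requests",
    ],
}

# Assemble everything into one prompt, so the model sees the full picture at once.
prompt = (
    "I'm debugging an issue and have exhausted my usual methods.\n"
    f"Symptom: {bug['symptom']}\n"
    f"Frequency: {bug['frequency']}\n"
    f"Relevant log line: {bug['logs']}\n"
    f"Already tried: {'; '.join(bug['already_tried'])}\n"
    f"Suspect code:\n{bug['suspect_code']}\n"
    "What are the most likely causes, and what should I check next?"
)

print(prompt)
```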
When you get paged in the middle of the night, you're probably not at your sharpest, at least not right away. Whenever Sean got paged at 3 a.m., he came on duty tired, confused, and vaguely panicky.
That's the state in which an LLM might be better than an engineer. Sean thinks LLMs could help responders manage incidents more smoothly under these less-than-ideal conditions.
Right now, you can’t just hand an LLM a 5-million-line codebase and ask, “Can I add this new functionality?” or “Where should I add this new check?” However, Sean thinks that could change in the near future.
Companies—allegedly including HubSpot—are starting to hire software engineers who aren’t allowed to write code themselves; instead, they’re only allowed to prompt LLMs. There’s also a new trend called “vibe coding,” where people don’t even look at the code—they just let the machine generate it and guide it purely through prompts.