The Art of Pair-Coding with AI
- Yanbing Li
- Feb 7
- 3 min read
A Practical Guide for Modern Developers
Generative AI is rapidly reshaping how we build software—not by replacing developers, but by augmenting them. As someone who advises engineering teams and mentors software leaders, I've seen firsthand how a well-structured collaboration between a developer and a large language model (LLM) can unlock immense productivity, creativity, and technical precision.
This article distills the best practices we've developed at iSterna for using AI as a pair-coding partner. It covers what works, what doesn't, and how to make the most of this new collaboration model.
Be Specific
Provide clear, unambiguous goals.
Include expected input/output types, performance requirements, and coding style. Vague prompts lead to vague code. Be clear about what you want:
"Write a function to merge two sorted lists without using extra space."
"Optimize this algorithm to run in under 0.5 seconds on 10,000 nodes."
"Refactor this C/C++ code using object-oriented design."
"Write a function to find the median of a list of integers (O(n log n) or better). Must handle empty input gracefully."
The more precise your goal, the better the LLM can assist.
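The median prompt above is specific enough to produce something directly testable. A minimal sketch of what a good response might look like (the function name and error-handling choice are illustrative, not prescribed):

```python
from typing import List

def median(values: List[int]) -> float:
    """Return the median of a list of integers in O(n log n) time."""
    if not values:
        raise ValueError("median() requires a non-empty list")
    ordered = sorted(values)              # O(n log n) dominates
    mid = len(ordered) // 2
    if len(ordered) % 2:                  # odd length: middle element
        return float(ordered[mid])
    return (ordered[mid - 1] + ordered[mid]) / 2  # even: average the middle pair
```

Because the prompt named the complexity bound and the empty-input case, both are easy to check in review.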
Set Constraints
AI thrives on boundaries. Some examples:
"Don’t use recursion."
"Must be compatible with Python 3.7."
"Avoid external libraries."
Limitations or preconditions help the model generate solutions that actually fit your context.
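Those three constraints together pin down a narrow solution space. One sketch that honors all of them (iterative, Python 3.7-compatible, standard library only; note it allocates a result list, so it relaxes the earlier "no extra space" prompt):

```python
from typing import List

def merge_sorted(a: List[int], b: List[int]) -> List[int]:
    """Merge two sorted lists iteratively: no recursion, no external libraries."""
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    result.extend(a[i:])  # at most one of these slices is non-empty
    result.extend(b[j:])
    return result
```

Stating the constraints up front saves a round trip: without "no recursion," many models default to a recursive merge.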
Assign Roles
You’re not just talking to a chatbot. You’re assigning a job.
"Act as a compiler. Where would this fail?"
"You're a C/C++ code reviewer. Spot bad patterns."
"You’re my DevOps mentor. What’s wrong with this pipeline config?"
Giving the model a persona or function improves alignment and output quality.
Iterate, Test, Refine
AI is fast, but not always right. Use it like a human collaborator:
Ask for multiple approaches
Run quick tests on outputs
Give feedback and improve the prompt
Great results come from conversations, not one-shot prompting.
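"Run quick tests on outputs" can be as lightweight as a few assertions before the generated code goes anywhere near a branch. A sketch, assuming the model returned a merge function like the one requested earlier (the implementation here stands in for hypothetical model output):

```python
# Pretend this came back from the model:
def merge_sorted(a, b):
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    return result + a[i:] + b[j:]

# Quick smoke tests: normal case, empty input, duplicates.
assert merge_sorted([1, 3], [2, 4]) == [1, 2, 3, 4]
assert merge_sorted([], [1]) == [1]
assert merge_sorted([1, 1], [1]) == [1, 1, 1]
print("all checks passed")
```

Any assertion that fails becomes concrete feedback for the next prompt, which is far more productive than "it doesn't work."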
Use AI for Thinking, Not Just Typing
AI is not just autocomplete. It’s a co-designer.
Ask "What’s the time complexity of this?"
Say "Explain this bug to me like I’m a five years old."
Request "Give me edge cases this might fail."
These kinds of prompts tap into the model’s reasoning abilities—not just syntax.
Verify and Own the Output
Don’t blindly trust the result:
Validate logic
Check edge cases
Be mindful that AI still hallucinates
Be able to explain the code at code reviews, demos, product launches, and so on; owning the output means you can defend every line you ship.
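One practical way to validate logic and edge cases at once is to cross-check the AI's code against a slower but obviously correct oracle on random inputs. A sketch (the AI-generated implementation here is hypothetical; the oracle is Python's built-in `statistics.median`):

```python
import random
import statistics

def ai_median(values):
    """Hypothetical AI-generated implementation under review."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return float(ordered[mid])
    return (ordered[mid - 1] + ordered[mid]) / 2

# Cross-check against the trusted standard-library oracle on random inputs.
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(1, 50))]
    assert ai_median(data) == statistics.median(data), data
```

A thousand random cases won't prove correctness, but it catches the off-by-one and empty-boundary mistakes that hallucinated code tends to contain.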
Build Prompt Reusability
Save good prompts:
For test generation
For common refactorings
For project scaffolding
Organize them by theme: "Debugging", "Security Review", "Optimization", "Scalability", "Performance", "Reusability", "Maintenance", "Coherency", "Agility", and so on.
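A prompt library needs no special tooling; a dictionary in a small script (or a folder of text files) is enough. A minimal sketch, with themes matching the ones above and example prompt texts of my own:

```python
PROMPTS = {
    "Debugging": (
        "Act as a debugger. Walk through this code line by line and "
        "flag where state could go wrong:\n{code}"
    ),
    "Security Review": (
        "You're a security reviewer. List injection, validation, and "
        "secrets-handling risks in:\n{code}"
    ),
    "Optimization": (
        "What is the time and space complexity of this code, and what "
        "would you change first?\n{code}"
    ),
}

def build_prompt(theme: str, code: str) -> str:
    """Fill a saved template with the code under review."""
    return PROMPTS[theme].format(code=code)
```

Reusing a vetted template keeps the role, task, and constraints consistent across the team instead of reinventing them per session.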
Bonus: Prompt Template You Can Steal
### Role
You are a [role: senior software engineer | debugger | architecture reviewer]
### Task
I want to [goal: build X / refactor Y / debug Z]
### Constraints
- Do not use [specific tools or patterns]
- Must meet [performance or compatibility] criteria
### Context
Here is the code I have:
[insert code here]
### Expected Behavior
- Should handle [X, Y, Z] cases
- Return value must be [format]
- Needs to be secure and maintainable
### Questions / Next Steps
- What are the performance bottlenecks?
- How can we simplify this?
- Suggest unit tests to validate it
Start small. Try one or two principles today and build up your prompt discipline over time. Great pair coding with AI is not magic — it's method.
---
## Further Reading & References
The following references provide publicly available best practices, studies, and examples related to AI-assisted software development.
GitHub Copilot
Best Practices: https://docs.github.com/en/copilot/using-github-copilot/best-practices-for-using-github-copilot
Copilot Labs: https://githubnext.com/projects/copilot-labs/
Copilot Docs (GitHub): https://github.com/github/copilot-docs
OpenAI
Prompt Engineering Cookbook: https://cookbook.openai.com/articles/techniques_to_improve_reliability
Code Examples: https://platform.openai.com/examples
ChatGPT Use Cases Blog: https://openai.com/blog/chatgpt-plugins#use-cases
Google Codey (PaLM for Code)
Vertex AI Generative AI Overview: https://cloud.google.com/vertex-ai/docs/generative-ai/overview
Official landing page for all generative AI models, including text, chat, and code generation capabilities.
Academic Research
Stanford: Do Users Write More Insecure Code with AI Assistants? https://arxiv.org/abs/2211.03622
Peer-reviewed paper analyzing how AI-generated code may encourage insecure coding behaviors if users aren’t trained to critically assess suggestions. One of the foundational studies on AI-assisted programming risks.
The Impact of AI on Developer Productivity: Evidence from GitHub Copilot (Microsoft/GitHub/MIT): https://arxiv.org/abs/2302.06590
Sparks of Artificial General Intelligence: Early Experiments with GPT-4 (Microsoft Research): https://arxiv.org/abs/2303.12712
Additional Guides
Prompt Engineering Guide (Awesome List): https://github.com/dair-ai/Prompt-Engineering-Guide