

Few-shot prompting vs. zero-shot prompting: which approach, and when?
Providing examples in your prompt is a technique known as few-shot prompting. Examples can be a great way to quickly communicate what your desired response should look like - it's often easier to show with a few examples than to describe it in words. 'Shots' is a term taken from the field of machine learning. Each shot is an example given to the model before it performs the task. We often refer to 'few-shot' and 'zero-shot' prompting (where you don't
5 days ago · 4 min read
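As a quick illustration of the teaser above (a sketch, not taken from the article itself), a few-shot prompt simply interleaves worked examples before the new input. The task, labels, and helper name here are hypothetical:

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: instruction, then worked examples ('shots'),
    then the new input the model should complete."""
    lines = [task, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model continues from here
    return "\n".join(lines)

# Hypothetical sentiment-classification task with two shots
examples = [
    ("The delivery was late and the box was damaged.", "negative"),
    ("Setup took two minutes and it works perfectly.", "positive"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each customer review as positive or negative.",
    examples,
    "The manual was confusing but support sorted it out quickly.",
)
print(prompt)
```

With zero examples, the same function degenerates to a zero-shot prompt: just the instruction and the query.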


Role prompting - Giving AI a Persona
"You're an expert X." "Act as a Y" Role prompting is one of the most widely repeated tips in AI circles. The underlying idea is that if you assign the model a job title or area of expertise, it will respond more like a person who holds that role, making it more relevant, more precise, more expert-sounding. The research on whether that actually holds up is more nuanced than the advice suggests. And the nuance is important if you want to try this out yourself. What is role prom
Apr 3 · 5 min read
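In practice, role prompting is usually applied via a system message in the common chat-API message format. A minimal sketch (the persona and prompt below are made-up examples, not from the article):

```python
def with_role(persona, user_prompt):
    """Build a chat-style message list that assigns the model a persona
    via a system message before the user's actual request."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": user_prompt},
    ]

# Hypothetical usage: the message list would be passed to a chat completion API
messages = with_role(
    "a senior Python code reviewer",
    "Review this function for readability issues: def f(x): return x * 2",
)
```

The same user prompt sent without the system message is the baseline to compare against when testing whether the persona actually changes output quality.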


AI's Blindspot: The 'Lost in the Middle' Effect
There's a weird phenomenon in the way AI reads documents and conversations, and it negatively impacts the accuracy of the responses we get back. AI's accuracy when recalling information located in the middle of the context window is lower than if the same information were located at the start or the end. It's known as the 'Lost in the Middle' effect, and it's a problem that researchers have studied in some depth. The consequences of this are pretty simple to understand an
Mar 22 · 4 min read
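A common mitigation for this effect is to place the key instruction at the start and repeat it at the end, so it never sits in the weak middle of a long context. A minimal sketch of that layout (the function name and wording are illustrative assumptions, not from the article):

```python
def sandwich_prompt(instruction, documents):
    """Place the key instruction before AND after a long block of context,
    keeping it out of the middle where recall tends to be weakest."""
    body = "\n\n".join(documents)
    return f"{instruction}\n\n{body}\n\nReminder: {instruction}"

# Hypothetical usage with two reference documents
prompt = sandwich_prompt(
    "Answer using only the documents below.",
    ["Document 1: quarterly figures...", "Document 2: product roadmap..."],
)
```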


Tokens and Context Windows: What They Are and Why They Matter
Learn what a context window is, how tokens work, and why they affect your everyday AI use - explained in plain English.
Mar 21 · 4 min read


Getting Started with AI: The Fundamentals of Prompting
Modern AI chat tools are powered by Large Language Models (LLMs), which are sophisticated systems trained on vast amounts of text data to understand and generate human-like responses. LLMs like Claude and Gemini don't truly "understand" in the human sense, but they excel at recognizing patterns in language and predicting what text should come next based on your input. This is why the quality of your prompt matters so much. The AI model can only work with what you give it, m
Jan 22 · 4 min read


Forget "Think step by step", Here's How to Actually Improve LLM Accuracy
And What Happened to CoT Prompting? Prefer to listen to this article instead? I used ElevenLabs TTS to create this narration. Check them out using my affiliate link, here. "Think step by step" ...was once great prompt engineering advice, but now seems to have little to no effect. In fact, what if I told you that this technique, at best, has little effect on output quality, and at worst, increases costs, latency, and may even reduce the accuracy of your response? It decreas
Jan 18 · 9 min read


Still copy-pasting into ChatGPT? Here’s how to turn your ideas into AI-powered apps
Learn to build AI-powered apps using LLM APIs. This beginner’s guide covers prompt engineering, function calling, and creating custom AI tools.
Jun 28, 2025 · 11 min read


You’re using ChatGPT wrong. Here’s how to prompt like a pro
Master advanced prompt engineering techniques like Chain-of-Thought, ReAct, and role-based prompting strategies to get smarter LLM responses
Jun 2, 2025 · 11 min read
