AI Literacy


Jargon Buster: AI, Machine Learning, Deep Learning, Generative AI
AI, Machine Learning, Deep Learning, Generative AI, and LLMs are all terms commonly thrown around in tech writing, news, and media, but they each refer to slightly different things and shouldn't be used interchangeably... even if they sometimes are. This guide explains what each one actually means, how they relate to each other, and why it matters which is which. What is AI, exactly? Artificial intelligence is the broadest of the four terms. It describes any computer system…
Mar 23 · 4 min read


AI's Blindspot: The 'Lost in the Middle' Effect
There's a weird phenomenon with the way AI reads documents and conversations, and it negatively impacts the accuracy of the responses we get back. AI's accuracy when recalling information located in the middle of the context window is lower than if the same information were located at the start or the end. It's known as the 'Lost in the Middle' effect, and it's a problem that researchers have studied in some depth. The consequences of this are pretty simple to understand…
Mar 22 · 4 min read


Tokens and Context Windows: What They Are and Why They Matter
Learn what a context window is, how tokens work, and why they affect your everyday AI use - explained in plain English.
Mar 21 · 4 min read


Claude Code: Everything you need to know in 8 minutes or less
Claude Code is an AI tool that runs in the terminal. For a lot of people, that sentence is enough to put them off. That would be a mistake. While it may have been designed as a coding assistant, it's really just an agent capable of managing your files, using skills, and writing and running code (all by itself) to solve whatever problem you throw at it. Claude Code is one of the more capable AI tools currently available, and you do not need a programming background to get real…
Mar 21 · 8 min read


Level-up your AI Agent with Skills Engineering
Skills engineering is how we teach AI agents to handle tasks in the way we want them done. Instead of hoping the agent figures it out, we provide detailed instructions that guide its decision-making process. But not all skills are created equal. A poorly written skill wastes tokens, confuses the agent, and produces inconsistent results. A well-crafted skill makes your agent faster, more reliable, and easier to maintain. The quality of your skills directly determines your…
Mar 5 · 8 min read


Getting Started with Agent Skills
Write Once, Use Forever. Think about the last time you explained your workflow to a new hire at work. You probably didn't just hand them a list of tools and wish them luck. You walked them through the process, explained why things work the way they do, and maybe showed them an example or two. Three weeks later, they're flying solo. AI can't do that. At least not out of the box. Every conversation starts from scratch. There's no memory of how you like things done…
Mar 5 · 7 min read


MCP Servers: Giving Your AI Agent New Capabilities
AI agents are useful on their own. But out of the box, they're working in isolation: they can only see what you paste into the conversation. They can't check your calendar, search your company's files, pull live data from a platform you use, or send a message on your behalf. MCP servers change that. What is an MCP server? MCP stands for Model Context Protocol. The technical details aren't important. What matters is what it does: an MCP server is a connection between your AI…
Mar 5 · 4 min read


Getting Started with AI: The Fundamentals of Prompting
Modern AI chat tools are powered by Large Language Models (LLMs), which are sophisticated systems trained on vast amounts of text data to understand and generate human-like responses. LLMs like Claude and Gemini don't truly "understand" in the human sense, but they excel at recognizing patterns in language and predicting what text should come next based on your input. This is why the quality of your prompt matters so much. The AI model can only work with what you give it…
Jan 22 · 4 min read


Forget "Think step by step", Here's How to Actually Improve LLM Accuracy
And What Happened to CoT Prompting? Prefer to listen to this article instead? I used ElevenLabs TTS to create this narration. Check them out using my affiliate link, here. "Think step by step" ...was once great prompt engineering advice, but now seems to have little to no effect. In fact, what if I told you that this technique, at best, has little effect on output quality, and at worst, increases costs, latency, and may even reduce the accuracy of your response?…
Jan 18 · 9 min read
