

Seamless Content Ingestion for Claude-Obsidian Second Brain
There's been a lot of excitement recently around building a 'Second Brain' using Claude and Obsidian. For those who missed it, this idea was popularised by Andrej Karpathy and involves building a Wikipedia-style set of interconnected notes which can then be navigated and searched by AI agents. This means that we don't have to provide AI agents the same context over and over again. With this system, it knows who you are, what you're working on, your preferences, and all…
Apr 22 · 9 min read


Few-Shot Prompting vs. Zero-Shot Prompting: Which Approach and When?
Providing examples in your prompt is a technique known as few-shot prompting. Examples can be a great way to quickly communicate what your desired response should look like - it can often be easier to show with a few examples rather than trying to describe it with words. 'Shots' is a term taken from the field of machine learning. Each shot is an example given to the model before it performs the task. We often refer to 'few-shot' and 'zero-shot' prompting (where you don't…
Apr 20 · 4 min read
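The distinction in the teaser above can be sketched as prompt construction. The sentiment-classification task, labels, and example reviews below are hypothetical, chosen purely to illustrate the pattern; they are not from the post.

```python
# Sketch: zero-shot vs. few-shot prompt construction.
# The sentiment task and examples are illustrative, not from the article.

def zero_shot_prompt(text: str) -> str:
    """Zero-shot: ask the model directly, with no examples."""
    return (
        "Classify the sentiment of this review as Positive or Negative.\n\n"
        f"Review: {text}\nSentiment:"
    )

def few_shot_prompt(text: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend labelled 'shots' so the model can infer the
    desired format and labels before it sees the real input."""
    shots = "\n\n".join(
        f"Review: {review}\nSentiment: {label}" for review, label in examples
    )
    return (
        "Classify the sentiment of each review as Positive or Negative.\n\n"
        f"{shots}\n\nReview: {text}\nSentiment:"
    )

examples = [
    ("Absolutely loved it, would buy again.", "Positive"),
    ("Broke after two days. Waste of money.", "Negative"),
]
print(few_shot_prompt("Works fine, nothing special.", examples))
```

Each (review, label) pair is one 'shot'; zero-shot is simply the same prompt with the examples list empty.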


How I Built an AI-Powered Task Management System with Obsidian and Claude Code
Context engineering is the next big thing. If you have been following the AI space recently, you may have come across the term context engineering. Coined by Shopify CEO Tobi Lütke and quickly endorsed by Andrej Karpathy, the argument is that "prompt engineering" undersells what is actually going on when you get consistently good results from an AI model. "The art of providing all the context for the task to be plausibly solvable by the LLM." - Tobi Lütke, Shopify CEO. Clever…
Apr 8 · 8 min read


AI's Blindspot: The 'Lost in the Middle' Effect
There's a weird phenomenon in the way AI reads documents and conversations, and it negatively impacts the accuracy of the responses we get back. AI's accuracy when recalling information located in the middle of the context window is lower than if the same information were located at the start or the end. It's known as the 'Lost in the Middle' effect, and it's a problem that researchers have studied in some depth. The consequences of this are pretty simple to understand an…
Mar 22 · 4 min read


Tokens and Context Windows: What They Are and Why They Matter
Learn what a context window is, how tokens work, and why they affect your everyday AI use - explained in plain English.
Mar 21 · 4 min read
