Blog

Full-stack legal AI

Technical deep dives from the Bind team on designing and operating an AI-first legal stack: model behavior, policy-driven guardrails, multi-agent orchestration, and the systems infrastructure required for reliable autonomous contracting.

Mar 17, 2026 • Sami Kuikka

Prompt Optimization Shouldn't Require Rewriting Your App

Why do we have advanced agent tracing products that work with minimal setup, while prompt optimization still requires rewriting your product in new frameworks? My attempt at showing that it doesn't have to be that way.

Read article

Mar 16, 2026 • Sami Kuikka

Building Memory Systems that Learn Across Conversations

How to create AI systems that adapt and improve their performance over time by learning from past interactions, covering the MemGPT and Letta AI frameworks' approach to building agents with memory systems that evolve through conversations.

Read article

Dec 22, 2025 • Sami Kuikka

Automated RAG Evaluation without Human Labels (Ragas in Practice)

Learning and building an automated evaluation system with Ragas. Can LLMs automatically evaluate RAG systems? Are human labels always needed?

Read article

Dec 9, 2025 • Sami Kuikka

Shrinking Embeddings: Real Results with Matryoshka Models

Exploring the practical benefits of the Matryoshka Representation Learning technique through my own experiment: why some models can be shrunk without losing performance, how it works, and whether it always does.

Read article

Nov 27, 2025 • Sami Kuikka

HyDE: Weaponizing Hallucinations for Better Retrieval

Explaining and building a real use case for Hypothetical Document Embeddings (HyDE) to show that hallucinations are not always bad.

Read article

Nov 24, 2025 • Sami Kuikka

Why Early Tokens Matter, and How Tree-of-Thoughts Tries to Fix It

Why LLMs are sensitive to their early tokens, and where prompting techniques like Chain-of-Thought and Tree-of-Thoughts get their power. This is my attempt to understand why early tokens matter, what ToT does behind the scenes, and whether it's worth using outside of toy projects.

Read article

Nov 17, 2025 • Sami Kuikka

Teaching an LLM to Improve Its Own Prompts

Building a GEPA-style prompt optimizer from scratch to tune DSPy chains with math-backed intuition.

Read article