

Welcome, friend
Hey there! 👋 I’m Ahmad, a Software Engineer with a background in Machine Learning, currently focused on Gen. AI and Large Language Models. My academic background includes dual BAs in Computer Science and Data Science, and my professional journey has taken me through innovative environments. I’m a builder at heart, whether it’s hacking an MVP together over the weekend, designing large-scale distributed systems, setting up complex home labs and network architecture, solving intricate data puzzles, or exploring the frontiers of 3D printing. When I’m not coding or tinkering with tech, you’ll find me reading a book, lifting at the gym, or drinking a cup of coffee while contemplating life and things.
Why am I here? To share this exhilarating journey of Code & Steel with you. Whether you’re here to talk Gen. AI/LLMs and Coding, explore my hardware setups, discuss the intricacies of machine learning, or share a laugh over the latest DIY disaster, I hope you leave here having learned something new.
Feel free to check out my about page, read my blogposts, and get in touch via any of the social links provided below. Welcome to my world of code, creation, and continuous learning.
Featured
-
First Came The Tokenizer—Understanding The Unsung Hero of LLMs
Posted on
7 Minutes
Why the Humble Tokenizer Is Where It All Starts
Before an LLM has a chance to process your message, the tokenizer has to digest the text into a stream of (usually) 16-bit integers. It’s not glamorous work, but make no mistake: this step has major implications for your model’s context window, training speed, prompt cost, and whether or not your favorite Unicode emoji makes it out alive.
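To build intuition for that "text in, integers out" step, here is a minimal sketch using a toy, hand-written word-level vocabulary. The vocab and function names are purely illustrative, not how any real tokenizer is built; real systems learn subword vocabularies from data.

```python
# Toy illustration of text -> integer ids. The model never sees characters,
# only these ids. A fixed word-level vocab like this is for intuition only.
vocab = {"<unk>": 0, "the": 1, "tokenizer": 2, "digests": 3, "text": 4}

def encode(text):
    """Map each whitespace-separated word to its id (0 if unknown)."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

def decode(ids):
    """Invert the mapping to recover a (lowercased) string."""
    inverse = {i: w for w, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)

ids = encode("The tokenizer digests text")
print(ids)          # [1, 2, 3, 4]
print(decode(ids))  # the tokenizer digests text
```

Anything outside the vocab collapses to the `<unk>` id, which is exactly the failure mode that drove the field toward the subword schemes discussed below.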
Interactive Visual Tokenizer: www.ahmadosman.com/tokenizer
Let’s run through the main flavors of tokenizers, in true “what do I actually care about” fashion:
Word-level: The OG tokenizer. You split on whitespace and assign every word an ID. It’s simple, but your vocab balloons to 500,000+ entries for English alone, and “dog” and “dogs” count as entirely separate vocabulary entries.
Character-level: Shrink the vocab down to size: now every letter or symbol is its own token. Problem: even simple words turn into sprawling token chains, which slows training and loses the semantic chunking that makes LLMs shine.
Subword: This is where the modern magic happens. Break rare words into pieces, keep common ones whole. This is the approach adopted by basically every transformer since 2017: not too big, not too small, just right for GPU memory and token throughput.
Here are the big players, how they work, and why they matter:
-
So You Want to Learn LLMs? Here’s the Roadmap
Posted on
7 Minutes
A Real-World, No-Bloat Guide to Building, Training, and Shipping LLMs
Welcome to the “how do I actually learn how LLMs work” guide. If you’ve got a CS background and you’re tired of the endless machine learning prerequisites, this is for you. I built this with past me in mind; I wish I’d had it all drawn out like this. This roadmap should leave you comfortable building, training, and researching LLMs.
The links at the end let you go as deep as you want. If you’re stuck, rewatch or reread. If you already know something, skip ahead. The phases are your guardrails, not handcuffs. By the end, you’ll have actually built the skills. Every resource, every project, every link is there for a reason. Use it, adapt it, and make it your own. I hope you don’t just use this as a collection of bookmarks.
Remember, you can always use DeepResearch when you’re stuck, need something broken down to first principles, want material tailored to your level, need to identify gaps, or just want to explore deeper.
This is blogpost #4 in my 101 Days of Blogging. If it sparks any ideas, questions, or critique, my DMs are open. Hope it gives you something useful to walk away with.
The approach here is simple.
Learn by Layering: Build Intuition ➡️ Strengthen Theory ➡️ More Hands-on ➡️ Paper Deep Dives ➡️ Build Something Real.
-
Software Engineers Aren't Getting Automated—Local AI Has To Win
Posted on
8 Minutes
Why Full-Stack Ownership is the Only Real Job Security in The Age of AI
Real technical ability is fading. Worried about AI replacing you? Build real technical depth. LLMs are leverage, a force multiplier, but only if you know what you’re doing. You’re not losing to AI. You’re losing to people who use AI better than you because they actually understand the tech. Get sharper.
This goes way beyond privacy or ideology. As optimization and model alignment get more personal (and more opaque), your only actual safety net is full local control. If you’re building a business, a workflow, or even a habit that depends on a remote black box, you’re not the customer; you’re the product. Full-stack ownership isn’t just for show. It’s pure risk management.
The future belongs to those who can build, debug, and document, not just rent someone else’s toolchain. Bootcamps don’t cut it anymore.
“Every day these systems run is a miracle. Most engineers wouldn’t last five minutes outside their cloud sandbox.”
Our industry is obsessed with AI hype, yet most devs have never seen bare metal, never written a real doc, and never owned their own stack. Meanwhile, the only thing standing between us and our systems’ total collapse is duct tape, a few command-line obsessives, and the shrinking number of people who still know how to fix things when it all stops working. We’re staring down an industry where the median troubleshooting skill is somewhere between “reboot and pray” and “copy-paste from Stack Overflow”.
So please, stop the doomscroll and quit worrying about being replaced. LLMs amplify you; they don’t substitute for you. The edge is in the hard parts: critical thinking, debugging, taste for clean architecture, putting it all together. That’s not going anywhere. The job is shifting, not getting eliminated: more architecture, more security, more maintenance, more troubleshooting. Still deeply human, and still non-trivial to automate.
This is blogpost #3 in my 101 Days of Blogging. If it sparks any ideas, questions, or critique, my DMs are open. Hope it gives you something useful to walk away with.
Yesterday morning I hosted an X/Twitter Audio Space on how LLMs, open-source, and the gravitational pull of platform centralization are forcing us all to rethink what it actually means to be a developer. The cloud got us coddled… The cloud was a mistake, and I believe that next decade’s winners won’t be the ones who just ship the most code (LLMs are really good at that BTW), but the ones who get obsessed with understanding, documenting, and actually owning their tools, top to bottom.
Let’s set the scene. A Google Cloud outage just crashed the internet. X/Twitter is in full panic mode; Cursor, Claude Code, Windsurf, etc. aren’t working anymore. LLMs have become the default code generator, and human programming skills are fading. Me? I didn’t even notice the outage until I got online. My local agents, running on my hardware from my basement, kept running.
-
Ultimate DeepResearch Prompt Builder—Template, Workflow, Pro Tips
Posted on
15 Minutes
The Exact Prompt Engineering System Powering My DeepResearch Workflow
TL;DR: I feed the template below into Gemini 2.5 Pro to build the DeepResearch prompt, then run DeepResearch with that output. You’ll find more context further down, but the main idea is simple: just drop your core ideas between TOPIC BEGINS HERE and TOPIC ENDS HERE. The rest builds itself.
Google just doesn’t cut it anymore: I’m the guy who wired a mini–data center into his basement. When you’ve got almost three dozen GPUs humming at 3 a.m. and a brain that treats half-baked ideas like Pokémon (“gotta catch ’em all”), shallow Googling just doesn’t cut it. I needed a research system that could keep up with the chaos in my head, force clarity, and let me ship faster than my cats can yank the UPS cable while livestreaming (true story).
DeepResearch and this framework turn my chaotic, tangled thoughts into informative, in-depth, comprehensive reports. I also use it to learn anything, and I have it wired into the loop of how I code with agents. Today, I’m sharing this workflow with you.
This is blogpost #2 in my 101 Days of Blogging. If it sparks any ideas, questions, or critique, my DMs are open. Hope it gives you something useful to walk away with.
I believe that if you genuinely want to move the needle, whether you’re an indie builder, a founder hunting for market clarity, or just someone tired of getting subpar answers, you need a proper system. Something structured that transforms vague curiosities into pinpoint insights, ruthlessly forces clarity, prevents endless rabbit holes, and delivers actual value (think high signal, zero noise). To me, that’s DeepResearch.
Before DeepResearch, I’d “just check one thing” and suddenly I’m 138 tabs deep with outdated blogposts and conflicting info. DeepResearch fixed that, but only because I learned how to use it. There’s a method to the clarity.
Traditional research often ends up vague, ambiguous, and misses key insights entirely. So I built, and obsessively iterated on, a prompt framework designed explicitly to fix these problems, one that guides both me and the AI toward structured, high-signal research.
This became the Ultimate DeepResearch Prompt Builder Template, the backbone of every serious AI-driven research project I execute.
-
Just Like GPUs, We Need To Be Stress Tested: 101 Days of Blogging
Posted on
4 Minutes
101 Days of Technical Blogging, Consistency, and Self-Experimentation
Writing is how we come to understand ourselves, a gift to our future selves, a record of what once mattered. It grounds our thoughts and gives them shape.
This one is for me. I hope you enjoy it too.
The past few months have given me a lot to think about. Life can happen to you out of nowhere, faster than a finger snap, and you’ve got only yourself, mostly, to keep it together.
In life, you’re either getting smarter or dumber. Stronger or weaker. More efficient or completely helpless. Self-reliant or dependent. The latter is becoming exponentially easier, and the trend will only accelerate in the years ahead.
“I want to live happily in a world I don’t understand.” ― Nassim Nicholas Taleb, Antifragile: Things That Gain From Disorder
Don’t be that guy.
Being prepared is fundamental to your survival, but not only that… Being prepared is our only duty in life: to ourselves, to our loved ones, and to everything we care about. So, I am no longer taking time for granted, and I will always be prepared.
Actions-per-minute matter. A lot. We’re entering an era where productivity multipliers, across the board, are approaching infinity. That has to be harnessed, deliberately and fast. Or else…
So, I’ve made a decision: I’m going to stress-test myself—across the board, for an extended amount of time. No more skipped workouts. No more pushed plans. No more dragging out already-soft deadlines. I have to show up. Fully. For all of it.