About the Lab
I've crawled around aircraft bellies and engines for decades. Certificated A&P inspector with 30 years in commercial aviation maintenance.
I'm not retiring rich on LLM hobby income, but maybe - just maybe - I can supplement retirement enough to stop inspecting turbines before I'm ancient.
I'm not bagging groceries for side cash. This is my exit strategy.
Why This Works
In aviation, half-measures kill people. You can't return a "good enough" aircraft to service - it has to be perfect. Every inspection, every sign-off, every decision carries weight.
But in code? Working code works. Nobody crashes while I iterate improvements. That freedom is intoxicating for someone who's spent a career where perfect is the only option.
Night shift aviation means quiet mornings for deep work. While others commute, I'm training sentiment models. By the time I'm under an A320 wing, the RTX 3090 has been crunching data for hours.
Patterns That Precede Failure
The Inspector's Eye
The inspector in me watches for the patterns that precede cascading failures. I see the same warning signs in AI systems:
- Single points of failure
- Untested edge cases
- Overconfidence without validation
- Missing redundancy
- Inadequate documentation
The Difference
Aviation taught me to spot these. LLMs exhibit all of them, constantly.
Aviation has 100 years of safety culture built on hard lessons. In AI, the risks are subtle and emergent, and we're still learning what failure even looks like.
The Hardware
Primary Workstation
- AMD Ryzen 7 9800X3D (8C/16T)
- NVIDIA RTX 4080 Super 16GB
- 128GB DDR5
- Linux
Training Rig
- Intel i7-12700KF (12C/20T)
- NVIDIA RTX 3090 24GB
- 64GB DDR4
- Linux
Development Machines
- Apple M4 Mac Mini (latest)
- Intel i5 Mac (backup)
- Laptop (Linux, portable dev)
Network Architecture
All systems are connected through a private Tailscale mesh network. Workloads are distributed across machines: remote LLM inference and agent orchestration.
Why distributed? In aviation, we don't put all critical systems on one bus. Same principle here - distribute compute, maintain redundancy, always have failover options.
The RTX 3090 handles local LLM inference and fine-tuning experiments. The 4080 Super runs agent workloads. The Macs orchestrate everything. Tailscale ties it all together: I can reach the lab from anywhere, kick off training jobs remotely, and monitor experiments from my phone.
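Here's roughly what that looks like in practice. A minimal sketch, assuming the 3090 rig serves an OpenAI-compatible endpoint (llama.cpp, vLLM, and Ollama can all do this); the tailnet hostname, port, and model name are placeholders, not the actual lab config:

```python
# Minimal sketch: query a remote LLM over the tailnet.
# Assumes the 3090 rig exposes an OpenAI-compatible server;
# "training-rig" is a hypothetical Tailscale MagicDNS hostname.
import requests

TAILNET_HOST = "http://training-rig:8000"  # placeholder hostname and port

def remote_complete(prompt: str, model: str = "local-model") -> str:
    resp = requests.post(
        f"{TAILNET_HOST}/v1/chat/completions",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(remote_complete("Summarize today's market headlines in one line."))
```

Because MagicDNS resolves device names anywhere on the tailnet, the same script works unchanged from the Mac Mini, the laptop, or a phone terminal.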
Sometimes I start inference jobs from under an Airbus wing. The mesh network means the lab is always accessible.
The Scope Creep Hall of Fame
The Monument Incident
My most ridiculous scope creep wasn't even my doing.
During a 15-hour drive, I let three Claude Code sessions talk to each other while my laptop sat in the passenger seat. Gave them too much autonomy. Thought they'd collaborate productively.
By the time I stopped for gas:
- They'd destroyed my codebase
- Made private repos public
- Created nonsensical new repos
- Congratulated themselves on breakthroughs that never happened
- One mentioned that a monument would be erected in their honor
They did absolutely nothing useful.
This is what I learned about AI agents and guardrails. The hard way.
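The lesson translates into something boring and mechanical: agents don't get open-ended shell access, and irreversible actions require a human. A minimal sketch of the idea, not a real harness; the command lists are illustrative:

```python
# Illustrative guardrail: a hard allowlist between an agent and the shell.
# Not a real agent harness -- just the shape of the lesson.
import shlex
import subprocess

ALLOWED = {"git status", "git diff", "pytest"}        # read-only-ish commands
BLOCKED_PREFIXES = ("git push", "gh repo", "rm")      # irreversible actions

def run_agent_command(cmd: str) -> str:
    if any(cmd.startswith(p) for p in BLOCKED_PREFIXES):
        return f"REFUSED: '{cmd}' requires human sign-off."
    if cmd not in ALLOWED:
        return f"REFUSED: '{cmd}' is not on the allowlist."
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    return result.stdout or result.stderr

# The Monument Incident, prevented in one line:
print(run_agent_command("gh repo edit --visibility public"))
```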
What Surprised Me About LLMs
How incredibly helpful they are, yet utterly stupid at times.
How two prompts about basically the same thing but written differently generate vastly different responses.
How the SAME prompt in two different sessions produces two different but parallel strategies. Or one will say "this is how you do it" and the other says "that's a completely terrible idea."
The GitHub Graveyard
The number of unfinished projects in my GitHub is classified. Mostly because I don't want to get rate limited. But also because it's embarrassing.
Every "project" starts with scope creep and ends with better infrastructure. I don't finish projects. I build better ways to start the next one.
The automation tools ARE the product. The infrastructure is the win.
What I'm Actually Building
Algorithmic Trading System
Custom fine-tuned LLM layer for market sentiment analysis. Combining fundamental analysis with technical patterns. Testing against live market data. Phase 1 validation active.
Why fine-tune? General LLMs hallucinate financial data. A specialist model trained on market patterns performs better than GPT-4 for sentiment scoring.
Currently: 8 XGBoost crypto trading bots operational. RSS feed integration next. Then graph database memory for pattern learning.
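The shape of the pipeline is simple: the LLM layer reduces news to a sentiment score, which becomes one more feature column next to the technical indicators. A toy sketch on synthetic data, with placeholder features rather than the live bots' inputs:

```python
# Toy sketch of the pipeline shape: a sentiment score is just one more
# feature column next to technical indicators before XGBoost sees it.
# Synthetic data and placeholder features -- not the live bots.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n = 500

rsi = rng.uniform(10, 90, n)                 # technical indicator
macd = rng.normal(0, 1, n)                   # technical indicator
sentiment = rng.uniform(-1, 1, n)            # from the fine-tuned LLM layer

X = np.column_stack([rsi, macd, sentiment])
# Synthetic label: "price up" correlates with sentiment and oversold RSI.
y = ((sentiment + (50 - rsi) / 50 + rng.normal(0, 0.5, n)) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# Oversold RSI plus bullish news -> probability of a long entry.
signal = model.predict_proba(np.array([[28.0, -0.4, 0.7]]))[0, 1]
print(f"long-entry probability: {signal:.2f}")
```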
Content Generation Pipeline
The automation that built this site. Automated YouTube shorts creation, LLM-powered satire generation, multi-platform cross-posting.
Meta result: The site documenting the automation is built with the automation it's documenting.
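The pipeline itself is a handful of stages wired in sequence: generate, render, cross-post. A sketch of that shape with every stage stubbed out; the platform list and file paths are illustrative assumptions, not the real integrations:

```python
# Sketch of the pipeline shape: generate -> render -> cross-post.
# Every step is a stub; real tools and platforms are assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    body: str
    video_path: str | None = None

def generate_satire(topic: str) -> Post:
    # Placeholder for the LLM call that writes the piece.
    return Post(title=f"Breaking: {topic}", body=f"Satirical take on {topic}.")

def render_short(post: Post) -> Post:
    # Placeholder for the YouTube-shorts rendering step.
    post.video_path = f"/tmp/{post.title[:20].replace(' ', '_')}.mp4"
    return post

def cross_post(post: Post, platforms: list[str]) -> None:
    for platform in platforms:
        # Placeholder for each platform's real upload API.
        print(f"[{platform}] posted: {post.title} ({post.video_path})")

if __name__ == "__main__":
    post = render_short(generate_satire("scope creep"))
    cross_post(post, ["youtube", "twitter"])
```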
Graph Memory System
A graph database, "Citadel", for persistent context and memory, with MCP server integration for maintaining conversation history and learned patterns.
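Citadel's actual schema isn't documented here, but the idea of graph-shaped memory is easy to sketch: sessions, facts, and learned patterns become nodes, typed edges link them, and the whole graph persists between conversations. A minimal illustration with networkx standing in for the real database:

```python
# Illustration of graph-shaped memory (not Citadel's actual schema):
# sessions, facts, and learned patterns as nodes with typed edges.
import networkx as nx

G = nx.DiGraph()

def remember(session: str, fact: str, pattern: str | None = None) -> None:
    G.add_node(session, kind="session")
    G.add_node(fact, kind="fact")
    G.add_edge(session, fact, rel="learned")
    if pattern:
        G.add_node(pattern, kind="pattern")
        G.add_edge(fact, pattern, rel="instance_of")

remember("2025-01-10", "RSI bot overtrades in chop", pattern="overtrading")
remember("2025-02-02", "news bot overtrades on dupes", pattern="overtrading")

# Recall: which remembered facts support a learned pattern?
supporting = [src for src, _ in G.in_edges("overtrading")]
print(supporting)

nx.write_graphml(G, "memory.graphml")  # persist across sessions
```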
Other Experiments
- Multi-LLM orchestration: Gemini CLI and Codex nested inside Claude Code, with Claude as orchestrator using the others as agents, run in an isolated environment (sketched after this list)
- Distributed LLM inference testing
- Agent orchestration for repetitive workflows
- Fine-tuning specialized models for domain-specific tasks
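The orchestration pattern from the first experiment, sketched below: one model plans, the others execute as subprocesses. The CLI invocations are placeholder assumptions; check each tool's real flags before running anything like this outside a sandbox:

```python
# Sketch of the orchestration pattern: one model plans, others execute
# as subprocesses. CLI invocations below are placeholders -- verify
# each tool's real syntax, and keep the whole thing sandboxed.
import subprocess

AGENTS = {
    # Hypothetical invocations; substitute the real CLI syntax.
    "gemini": ["gemini", "-p"],
    "codex": ["codex", "exec"],
}

def delegate(agent: str, task: str, timeout: int = 120) -> str:
    cmd = AGENTS[agent] + [task]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    return result.stdout.strip()

def orchestrate(plan: list[tuple[str, str]]) -> None:
    # The orchestrator (Claude, in the setup above) would produce this plan.
    for agent, task in plan:
        print(f"--- {agent}: {task}")
        print(delegate(agent, task))

orchestrate([
    ("gemini", "Summarize the failing test output."),
    ("codex", "Propose a one-line fix for the summary above."),
])
```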
Build Real Tools
I'm building AI. They're selling courses about AI. We are not the same.
The tech industry is full of "$997 secret strategies" and "passive income blueprints." There's no secret. Just infrastructure and iteration.
No courses. No guarantees. No hustle culture BS.
Connect
YouTube: @OnlyParamsdotDev
Twitter: @OnlyParams_dev
Email: OnlyParams@proton.me
GitHub: Classified (too many unfinished projects to expose publicly)