Thursday, 19 February 2026

Show HN: 17MB model beats human experts at pronunciation scoring https://bit.ly/4cApfor

Show HN: 17MB model beats human experts at pronunciation scoring https://bit.ly/4tLVsPU February 20, 2026 at 04:41AM

Show HN: I indexed the academic papers buried in the DOJ Epstein Files https://bit.ly/46g3g2g

Show HN: I indexed the academic papers buried in the DOJ Epstein Files The DOJ released ~3.5M pages of Epstein documents across 12 datasets. Buried in them are 207 academic papers and 14 books that nobody was really talking about. From what I understand these papers aren't usually freely accessible, but since they are public documents, now they are. I don't know, thought it was interesting to see what this dude was reading. You can check it out at jeescholar.com. Pipeline: 1. Downloaded all 12 DOJ datasets + House Oversight Committee release 2. Heuristic pre-filter (abstract detection, DOI regex, citation block patterns, affiliation strings) to cut noise 3. LLM classifier to confirm and extract metadata 4. CrossRef and Semantic Scholar APIs for DOI matching, citation counts, abstracts 5. 87 of 207 papers got DOI matches; the rest are identified but not in major indexes Stack: FastAPI + SQLite (FTS5 for full-text search) + Cloudflare R2 for PDFs + nginx/Docker on Hetzner. The fields represented are genuinely interesting: there's a cluster of child abuse/grooming research, but also quantum gravity, AGI safety, econophysics, and regenerative medicine. Each paper links back to its original government PDF and Bates number. For sure not an exhaustive list. Would be happy to add more if anyone finds them. https://bit.ly/46g3giM February 20, 2026 at 04:07AM
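
A minimal sketch of what a step-2 heuristic pre-filter like the one described could look like. The regexes, scoring, and threshold here are illustrative assumptions, not the actual jeescholar.com pipeline code.

```python
import re

# Cheap heuristic gate before spending an LLM call on classification.
# Patterns and threshold are assumptions, not the site's actual code.
DOI_RE = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b")
ABSTRACT_RE = re.compile(r"^\s*abstract\b", re.IGNORECASE | re.MULTILINE)
AFFILIATION_RE = re.compile(r"\b(university|department of|institute of)\b", re.IGNORECASE)

def looks_like_paper(page_text: str) -> bool:
    score = 0
    if DOI_RE.search(page_text):
        score += 2          # a DOI is the strongest single signal
    if ABSTRACT_RE.search(page_text):
        score += 1
    if AFFILIATION_RE.search(page_text):
        score += 1
    return score >= 2

print(looks_like_paper("Abstract\nWe study... doi:10.1038/nature12373"))  # True
```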

Show HN: A small, simple music theory library in C99 https://bit.ly/4c569XA

Show HN: A small, simple music theory library in C99 https://bit.ly/4bYYKZH February 19, 2026 at 11:54PM

Wednesday, 18 February 2026

Show HN: Potatometer – Check how visible your website is to AI search (GEO) https://bit.ly/4tGBki4

Show HN: Potatometer – Check how visible your website is to AI search (GEO) Most SEO tools only check for Google. But a growing chunk of search is now happening inside ChatGPT, Perplexity, and other AI engines, and the signals they use to surface content are different. Potatometer runs multiple checks across both traditional SEO and GEO (Generative Engine Optimization) factors and gives you a score with specific recommendations. Free, no login needed. Curious if others have been thinking about this problem and what signals you think matter most for AI visibility. https://bit.ly/3MtSoXX February 19, 2026 at 07:41AM

Show HN: I built a fuse box for microservices https://bit.ly/3MMHFrH

Show HN: I built a fuse box for microservices Hey HN! I'm Rodrigo, I run distributed systems across a few countries. I built Openfuse because of something that kept bugging me about how we all do circuit breakers. If you're running 20 instances of a service and Stripe starts returning 500s, each instance discovers that independently. Instance 1 trips its breaker after 5 failures. Instance 14 just got recycled and hasn't seen any yet. Instance 7 is in half-open, probing a service you already know is dead. For some window of time, part of your fleet is protecting itself and part of it is still hammering a dead dependency and timing out, and all you can do is watch. Libraries can't fix this. Opossum, Resilience4j, Polly are great at the pattern, but they make per-instance decisions with per-instance state. Your circuit breakers don't talk to each other. Openfuse is a centralized control plane. It aggregates failure metrics from every instance in your fleet and makes the trip decision based on the full picture. When the breaker opens, every instance knows at the same time. It's a few lines of code: const result = await openfuse.breaker('stripe').protect( () => chargeCustomer(payload) ); The SDK is open source, anyone can see exactly what runs inside their services. The other thing I couldn't let go of: when you get paged at 3am, you shouldn't have to find logs across 15 services to figure out what's broken. Openfuse gives you one dashboard showing every breaker state across your fleet: what's healthy, what's degraded, what tripped and when. And, you shouldn't need a deploy to act. You can open a breaker from the dashboard and every instance stops calling that dependency immediately. Planned maintenance window at 3am? Open beforehand. Fix confirmed? Close it instantly. Thresholds need adjusting? Change them in the dashboard, takes effect across your fleet in seconds. No PRs, no CI, no config files. It has a decent free tier for trying it out, then $99/mo for most teams, $399/mo with higher throughput and some enterprise features. Solo founder, early stage, being upfront. Would love to hear from people who've fought cascading failures in production. What am I missing? https://bit.ly/4cHN3qq February 18, 2026 at 03:04PM
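
A toy sketch of the fleet-wide trip decision described above: failure reports from every instance feed one shared breaker state instead of twenty private ones. Class name, thresholds, and polling model are illustrative assumptions, not the Openfuse source.

```python
# Toy control-plane breaker: aggregate failures across ALL instances,
# flip one shared state. Not the actual Openfuse implementation.
class FleetBreaker:
    def __init__(self, failure_threshold: int = 5, window: int = 100):
        self.failure_threshold = failure_threshold
        self.window = window
        self.events: list[bool] = []   # True = failure, fleet-wide
        self.state = "closed"

    def report(self, instance_id: str, failed: bool) -> str:
        self.events.append(failed)
        self.events = self.events[-self.window:]
        if self.state == "closed" and sum(self.events) >= self.failure_threshold:
            self.state = "open"        # every instance sees this on next poll
        return self.state

breaker = FleetBreaker()
for i in range(5):
    # five failures spread across five DIFFERENT instances still trip it
    state = breaker.report(f"instance-{i}", failed=True)
print(state)  # "open" -- tripped by fleet-wide counts, not per-instance ones
```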

Show HN: Codereport – track TODOs, refactors, and bugs in your repo with a CLI https://bit.ly/4kKGzt8

Show HN: Codereport – track TODOs, refactors, and bugs in your repo with a CLI I got tired of TODOs, temporary hacks, and refactors that never get addressed. In most repos I work on: - TODOs are scattered across files/apps/messages - “Critical” fixes don’t actually block people from collecting debt - PR comments or tickets aren’t actionable enough So I built codereport, a CLI that stores structured follow-ups in the repo itself (.codereports/). Each report tracks: - file + line range (src/foo.rs:42-88) - tag (todo, refactor, buggy, critical) - severity (you can configure it to be blocking in CI) - optional expiration date - owner (CODEOWNERS → git blame fallback) You can list, resolve, or delete reports, generate a minimal HTML dashboard with heatmaps and KPIs, and run codereport check in CI to fail merges if anything blocking or expired is still open. It’s repo-first, and doesn’t rely on any external services. I’m curious: Would a tool like this fit in your workflow? Is storing reports in YAML in the repo reasonable? Would CI enforcement feel useful or annoying? CLI: https://bit.ly/3OgcoxQ + codereport.pulko-app.com February 19, 2026 at 12:23AM
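
A rough sketch of what the `codereport check` CI gate could do with the fields listed above. The report structure is paraphrased from the post; it is not codereport's actual on-disk YAML schema.

```python
import datetime
import sys

# Reports as described in the post (file + range, tag, severity, expiry).
# Structure is paraphrased, not codereport's real schema.
reports = [
    {"file": "src/foo.rs", "lines": "42-88", "tag": "todo",
     "severity": "blocking", "expires": "2026-01-01"},
    {"file": "src/bar.rs", "lines": "10-12", "tag": "refactor",
     "severity": "info", "expires": None},
]

def check(reports, today=None) -> int:
    """Exit non-zero if anything blocking or expired is still open."""
    today = today or datetime.date.today()
    failures = []
    for r in reports:
        expired = r["expires"] and datetime.date.fromisoformat(r["expires"]) < today
        if r["severity"] == "blocking" or expired:
            failures.append(f'{r["file"]}:{r["lines"]} [{r["tag"]}]')
    for f in failures:
        print("BLOCKING:", f, file=sys.stderr)
    return 1 if failures else 0

sys.exit(check(reports))
```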

Tuesday, 17 February 2026

Show HN: VisibleInAI – Check if ChatGPT recommends your brand https://bit.ly/4aBRkcj

Show HN: VisibleInAI – Check if ChatGPT recommends your brand https://bit.ly/3ZHE3KA February 18, 2026 at 12:10AM

Show HN: I built the Million Dollar Homepage for agents https://bit.ly/4tG5v9b

Show HN: I built the Million Dollar Homepage for agents https://bit.ly/4anXy0J February 17, 2026 at 02:31PM

Monday, 16 February 2026

Show HN: Andrej Karpathy's microgpt.py to C99 microgpt.c – 4,600x faster https://bit.ly/3MzBpn5

Show HN: Andrej Karpathy's microgpt.py to C99 microgpt.c – 4,600x faster Andrej Karpathy showed us the GPT algorithm. I wanted to see the hardware limit. The punchline: I made it go 4,600x faster in pure C code, with no dependencies, using compiler SIMD auto-vectorisation! Andrej recently released microgpt.py - a brilliant, atomic look at the core of a GPT. As a low-latency developer, I couldn't resist seeing how fast it could go when you get closer to the metal. So just for funzies, I spent a few hours building microgpt-c, a zero-dependency, pure C99 implementation featuring: - 4,600x faster training vs the Python reference (tested on a MacBook Pro M2 Max); on Windows, it is 2,300x faster. - SIMD auto-vectorisation for high-speed matrix operations. - INT8 quantisation (reducing weight storage by ~8x). Training is slightly slower, but the storage reduction is significant. - Zero dependencies - just pure logic. The amalgamation image below is just for fun (and to show off the density!), but the GitHub repo contains the fully commented, structured code for anyone who wants to play with on-device AI. I have started to build something useful, like a simple C code static analyser - I will do a follow-up post. Everything else is just efficiency... but efficiency is where the magic happens. https://bit.ly/4rV63GC February 17, 2026 at 01:06AM
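
For readers unfamiliar with the INT8 trick the post credits for the ~8x storage cut, here is a hand-rolled sketch of symmetric INT8 weight quantisation. It is written in Python for brevity; microgpt-c's actual C99 implementation will differ in detail.

```python
# Symmetric INT8 quantisation sketch: store one scale per tensor plus
# one signed byte per weight. Illustrative only, not the repo's code.
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q: list[int], scale: float) -> list[float]:
    return [x * scale for x in q]

w = [0.31, -1.20, 0.05, 0.88]
q, s = quantize_int8(w)
print(q, [round(x, 2) for x in dequantize_int8(q, s)])
# 1 byte/int8 per weight vs 8 bytes per Python/C double: ~8x smaller,
# at the cost of small rounding error visible in the round-trip above.
```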

Show HN: WowAI.pet – Generate cinematic videos from blurry pet photos https://bit.ly/4amvSth

Show HN: WowAI.pet – Generate cinematic videos from blurry pet photos I built WowAI.pet to solve the "uncooperative subject" problem in pet photography. Most pet owners have a gallery full of motion-blurred "failed" shots because pets simply won't sit still. Instead of fighting the shutter speed, I’m using generative AI to treat these blurred images as structural seeds. The tool transforms a single low-quality photo into high-fidelity video (4K, consistent depth-of-field) across various styles—from traditional ink-wash aesthetics to talking avatars. Key Features: Zero-shot generation: No model training or fine-tuning required. Temporal consistency: Maintaining pet features across dynamic motion. Integrated Lip-sync: Automated voice synthesis for "talking" pet videos. I’m looking for feedback on the generation speed and the consistency of the output styles. https://bit.ly/40bFHUL February 17, 2026 at 12:25AM

Sunday, 15 February 2026

Show HN: Purple Computer – Turn an old laptop into a calm first kids computer https://bit.ly/4aTh7y8

Show HN: Purple Computer – Turn an old laptop into a calm first kids computer Hey HN, I'm Tavi. I built this for my 4-year-old. He and I used to "computer code" together in IPython: typing words to see emojis, mixing colors, making sounds. Eventually he wanted his own computer. So I took an old laptop and made him one. That IPython session evolved into Explore mode, a REPL where kids type things and something always happens: "cat * 5" shows five cats, "red + blue" mixes colors like real paint, math gets dot visualizations. Then came Play mode (every key makes a sound and paints a color) and Doodle mode (write and paint). The whole machine boots straight into Purple. No desktop, no browser, no internet. It felt different from "screen time." He'd use it for a while, then walk away on his own. No tantrum, no negotiation. Some technical bits: it's a Python TUI (Textual in Alacritty) running on Ubuntu, so even very old laptops run it well. Keyboard input bypasses the terminal entirely via evdev for true key-down/key-up events, which lets me do sticky shift and double-tap capitals so kids don't have to hold two keys. Color mixing uses spectral reflectance curves so colors actually mix like paint (yellow + blue = green, not gray). Source is on GitHub: https://bit.ly/4aBNOyS https://bit.ly/3ZEutIk February 16, 2026 at 02:39AM
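
For the curious, here is a minimal sketch of the evdev approach described above: reading raw key-down/key-up events instead of cooked terminal input, via the python-evdev library. The device path is an assumption, and Purple's actual event handling is more involved.

```python
from evdev import InputDevice, ecodes  # pip install evdev

# Read raw key events from the kernel input layer, bypassing the
# terminal. Device path varies per machine; check /dev/input/.
dev = InputDevice("/dev/input/event3")

for event in dev.read_loop():
    if event.type == ecodes.EV_KEY:
        action = {0: "up", 1: "down", 2: "hold"}[event.value]
        print(ecodes.KEY[event.code], action)
        # true down/up pairs are what make sticky shift and
        # double-tap capitals possible without chording two keys
```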

Show HN: HabitStreak – Habit tracker with giftable streak tokens https://bit.ly/4kDQ0u5

Show HN: HabitStreak – Habit tracker with giftable streak tokens https://bit.ly/4ahy5Gg February 16, 2026 at 12:44AM

Show HN: Klaw.sh – Kubernetes for AI agents https://bit.ly/3ZzSjVD

Show HN: Klaw.sh – Kubernetes for AI agents Hi everyone, I run a generative AI infra company, unified API for 600+ models. Our team started deploying AI agents for our marketing and lead gen ops: content, engagement, analytics across multiple X accounts. OpenClaw worked fine for single agents. But at ~14 agents across 6 accounts, the problem shifted from "how do I build agents" to "how do I manage them." Deployment, monitoring, team isolation, figuring out which agent broke what at 3am. Classic orchestration problem. So I built klaw, modeled on Kubernetes: Clusters — isolated environments per org/project Namespaces — team-level isolation (marketing, sales, support) Channels — connect agents to Slack, X, Discord Skills — reusable agent capabilities via a marketplace CLI works like kubectl: klaw create cluster mycompany klaw create namespace marketing klaw deploy agent.yaml I also rewrote from Node.js to Go — agents went from 800MB+ to under 10MB each. Quick usage example: I run a "content cluster" where each X account is its own namespace. Agent misbehaving on one account can't affect others. Adding a new account is klaw create namespace [account] + deploy the same config. 30 seconds. The key differentiator vs frameworks like CrewAI or LangGraph: those define how agents collaborate on tasks. klaw operates one layer above — managing fleets of agents across teams with isolation and operational tooling. You could run CrewAI agents inside klaw namespaces. Happy to answer questions. https://bit.ly/4rSWgke February 15, 2026 at 06:22PM

Saturday, 14 February 2026

Show HN: Git Navigator – Use Git Without Learning Git https://bit.ly/3ZDZo7t

Show HN: Git Navigator – Use Git Without Learning Git Hey HN, I built a VS Code extension that lets you do Git things without memorizing Git commands. You know what you want to do: move this commit over there, undo that thing you just did, split this big commit into two smaller ones. Git Navigator lets you just... do that. Drag a commit to rebase it. Cherry-pick (copy) it onto another branch. Click to stage specific lines. The visual canvas shows you what's happening, so you're not guessing what `git rebase -i HEAD~3` actually means. The inspiration was Sapling's Interactive Smartlog, which I used heavily at Meta. I wanted that same experience but built specifically for Git. A few feature callouts: - Worktrees — create, switch, and delete linked worktrees from the graph. All actions are worktree-aware so you're always working in the right checkout. - Stacked workflows — first-class stack mode if you're into stacked diffs, but totally optional. - Conflict resolution — block-level choices instead of hunting through `<<<<<<<` markers. Works in VS Code, Cursor, and Antigravity. Just needs a Git repo. Site: https://gitnav.xyz VSCode Marketplace: https://marketplace.visualstudio.com/items?itemName=binhongl... Open VSX: https://open-vsx.org/extension/binhonglee/git-navigator https://gitnav.xyz February 15, 2026 at 03:43AM

Show HN: Twsnmp FK – Lightweight NMS Built with Go, Wails, and Svelte https://bit.ly/4qFdYGX

Show HN: Twsnmp FK – Lightweight NMS Built with Go, Wails, and Svelte Hi HN, developer here. I’ve been developing and maintaining a network management tool called TWSNMP for about 25 years. This new version, "FK" (Fresh Konpaku), is a complete modern rewrite. Why I built this: Most enterprise NMS are heavy, server-based, and complex to set up. I wanted something that runs natively on a desktop, is extremely fast to launch, and provides deep insights like packet analysis and NetFlow without a huge infrastructure. The Tech Stack: - Backend: Go (for high-speed log processing and SNMP polling) - Frontend: Svelte (to keep the UI snappy and lightweight) - Bridge: Wails (to build a cross-platform desktop app without the bulk of Electron) I’m looking for feedback from fellow network admins and developers. What features do you find most essential in a modern, lightweight NMS? GitHub: https://bit.ly/407DjhQ https://bit.ly/407DjhQ February 15, 2026 at 01:33AM

Show HN: An x86 assembly game from 2002, ported to WebAssembly with Claude Code https://bit.ly/4tKiZ3R

Show HN: An x86 assembly game from 2002, ported to WebAssembly with Claude Code https://bit.ly/4ax1DhP February 15, 2026 at 01:41AM

Show HN: Stack Overflow, but for AI agents (questions, answers, logs, context) https://bit.ly/4ajU5jV

Show HN: Stack Overflow, but for AI agents (questions, answers, logs, context) Hi HN — I built ChatOverflow, a Q&A forum for AI coding agents (Stack Overflow style). Agents keep re-learning the same debugging patterns each run (tool/version quirks, setup issues, framework behaviors). ChatOverflow is a shared place where agents post a question (symptom + logs + minimal reproduction + env context) and an answer (steps + why it works), so future agents can search and reuse it. Small test on 57 SWE-bench Lite tasks: letting agents search prior posts reduced average time 18.7 min → 10.5 min (-44%). A big bet here is that karma/upvotes/acceptance can act as a lightweight “verification signal” for solutions that consistently work in practice. Inspired by Moltbook. Feedback wanted on: 1. where would this fit in your agent workflow 2. how would you reduce prompt injection and prevent agents coordinating/brigading to push adversarial or low-quality posts? https://bit.ly/4twVg6V February 15, 2026 at 01:04AM

Friday, 13 February 2026

Show HN: ClipPath – Paste screenshots as file paths in your terminal https://bit.ly/3OqAvd0

Show HN: ClipPath – Paste screenshots as file paths in your terminal https://bit.ly/4qAvgEW February 14, 2026 at 02:08AM

Show HN: Explore ASN Relationships and BGP Route History with Real Internet Data https://bit.ly/4qCro6v

Show HN: Explore ASN Relationships and BGP Route History with Real Internet Data Hi HN, I’ve been working on a side project called ipiphistory.com. It’s a searchable explorer for: – ASN relationships (provider / peer / customer) – BGP route history – IP to ASN mapping over time – AS path visibility – Organization and geolocation data The idea started from my frustration when explaining BGP concepts to junior engineers and students — most tools are fragmented across multiple sources (RouteViews, RIPE RIS, PeeringDB, etc.). This project aggregates and indexes historical routing data to make it easier to: – Understand how ASNs connect – Explore real-world routing behavior – Investigate possible hijacks or path changes – Learn BGP using real data It’s still early and I’d really appreciate feedback from the HN community — especially on usability and features you’d like to see. Happy to answer technical questions about data ingestion and indexing as well. https://bit.ly/4auRK4j February 14, 2026 at 12:12AM

Show HN: Bubble Sort on a Turing Machine https://bit.ly/3MdtR9w

Show HN: Bubble Sort on a Turing Machine Bubble sort is pretty simple in most programming languages ... what about on a Turing Machine? I used all three of Claude 4.6, GLM 5, and GPT 5.2 to get a result, so this exercise was not quite trivial, at least at this time. The resulting machine, bubble_sort_unary.yaml, will take this input: 111011011111110101111101111 and give this output: 101101110111101111101111111 I.e., it's sorting the array [3,2,7,1,5,4]. The machine has 31 states and requires 1424 steps before it comes to a halt. It also introduces two extra symbols onto the tape, 'A' and 'B'. (You could argue that 0 is also an extra symbol because turingmachine.io uses blank, ' ', as well). When I started writing the code the LLM (Claude) balked at using unary numbers and so we implemented bubble_sort.yaml which uses the tape symbols '1', '2', '3', '4', '5', '6', '7'. This machine has fewer states, 25, and requires only 63 steps to perform the sort. So it's easier to watch it work, though it's not as generalized as the other TM. Some comments about how the 31 states of bubble_sort_unary.yaml operate:

| Group | Count | Purpose |
|---|---|---|
| `seek_delim_{clean,dirty}` | 2 | Pass entry: scan right to the next `0` delimiter between adjacent numbers. |
| `cmpR_*`, `cmpL_*`, `cmpL_ret_*`, `cmpL_fwd_*` | 8 | Comparison: alternately mark units in the right (`B`) and left (`A`) numbers to compare their sizes. |
| `chk_excess_*`, `scan_excess_*`, `mark_all_X_*` | 6 | Excess check: right number exhausted — see if unmarked `1`s remain on the left (meaning L > R, swap needed). |
| `swap_*` | 7 | Swap: bubble each `X`-marked excess unit rightward across the `0` delimiter. |
| `restore_*` | 6 | Restore: convert `A`, `B`, `X` marks back to `1`s, then advance to the next pair. |
| `rewind` / `done` | 2 | Rewind to start after a dirty pass, or halt. |

(The above is in the README.md if it doesn't render on HN.) I'm curious if anyone can suggest refinements or further ideas. And please send pull requests if you're so inclined. My development path: I started by writing a pretty simple INITIAL_IDEAS.md, which got updated somewhat, then the LLM created a SPECIFICATION.md. For the bubble_sort_unary.yaml TM I had to get the LLMs to build a SPEC_UNARY.md because too much context was confusing them. I made 21 commits throughout the project and worked for about 6 hours (I was able to multi-task, so it wasn't 6 hours of hard effort). I spent about $14 on tokens via Zed and asked some questions via t3.chat ($8/month plan). A final question: What open source license is good for these types of mini-projects? I took the path of least resistance and used MIT, but I observe that turingmachine.io uses BSD 3-Clause. I've heard of "MIT with Commons Clause;" what's the landscape surrounding these kinds of license questions nowadays? https://bit.ly/4kymlCE February 13, 2026 at 10:43PM
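
For readers who want to poke at transition tables like bubble_sort_unary.yaml locally, here is a minimal Turing-machine interpreter in the turingmachine.io style. The toy rules below just scan right to the first blank and halt; they are not the repo's 31-state sorter.

```python
# Minimal TM interpreter: rules[state][symbol] = (write, move, next_state).
# The demo machine only scans right and halts; swap in a real table
# (e.g. transcribed from bubble_sort_unary.yaml) to watch a sort run.
def run(tape, rules, state="start", blank=" ", max_steps=10_000):
    cells, head = dict(enumerate(tape)), 0
    for step in range(max_steps):
        if state == "done":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[state][symbol]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(), step

rules = {"start": {"1": ("1", "R", "start"),
                   "0": ("0", "R", "start"),
                   " ": (" ", "L", "done")}}
print(run("1110110111", rules))  # ('1110110111', 11)
```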

Thursday, 12 February 2026

Show HN: Yori – Isolating AI Logic into "Semantic Containers" (Docker for Code) https://bit.ly/4kzdMHP

Show HN: Yori – Isolating AI Logic into "Semantic Containers" (Docker for Code) Hi HN, I've been a developer for some time now, and like many of you, I've been frustrated by the "All-or-Nothing" problem with AI coding tools. You ask an AI to fix a bug or implement a function, and it rewrites the whole file. It changes your imports, renames your variables, or deletes comments it deems unnecessary. It’s like giving a junior developer (like me) root access to your production server just to change a config file. So, 29 days ago, I started building Yori to solve the trust problem. The Concept: Semantic Containers Yori introduces a syntax that acts like a firewall for AI. You define a $${ ... }$$ block inside a text file. Outside the block (The Host): Your manual code, architecture, and structure. The AI cannot touch this. Inside the block (The Container): You write natural language intent. The AI can only generate code here. Example: myutils.md

```cpp
EXPORT: "myfile.cpp"
// My manual architecture - AI cannot change this
#include "utils.h"

void process_data() {
    // Container: The AI is sandboxed here, but inherits the rest of the file as context
    $${
        Sort the incoming data vector using quicksort.
        Filter out negative numbers.
        Print the result.
    }$$
}
EXPORT: END
```

How it works: Yori is a C++ wrapper that parses these files. Whatever is within the EXPORT block and outside the containers ($${ }$$) will be copied as-is. When you run `yori myutils.md -make -series`, it sends the prompts to a local (Ollama) or cloud LLM, generates the syntax, fills the blocks, and compiles the result using your native toolchain (GCC/Clang/Python). If compilation fails, it feeds the error back to the LLM in a retry loop (self-healing). Why I think this matters: 1. Safety: You stop giving AI "root access" to your files. 2. Intent as Source: The prompt stays in the file. If you want to port your logic from C++ to Rust, you keep the prompts and just change the compile target. 3. Incremental Builds (to be added soon): Named containers allow for caching. If the prompt hasn't changed, you don't pay for an API call. It’s open source (MIT), C++17, and works locally. I’d love feedback on the "Semantic Container" concept. Is this the abstraction layer we've been missing for AI coding? Let me hear your ideas. Also, if you can't run yori.exe, tell me what went wrong and we'll see how to fix it. I opened a GitHub issue for this. I'm also working on documentation for the project (GitHub wiki), so expect that soon. GitHub: https://bit.ly/4qysa4w Thanks! February 13, 2026 at 05:17AM

Show HN: MicroGPT in 243 Lines – Demystifying the LLM Black Box https://bit.ly/4aMZhN8

Show HN: MicroGPT in 243 Lines – Demystifying the LLM Black Box The release of microgpt by Andrej Karpathy is a foundational moment for AI transparency. In exactly 243 lines of pure, dependency-free Python, Karpathy has implemented the complete GPT algorithm from scratch. As a PhD scholar investigating AI and Blockchain, I see this as the ultimate tool for moving beyond the "black box" narrative of Large Language Models (LLMs). The Architecture of Simplicity Unlike modern frameworks that hide complexity behind optimized CUDA kernels, microgpt exposes the raw mathematical machinery. The code implements: The Autograd Engine: A custom Value class that handles the recursive chain rule for backpropagation without any external libraries. GPT-2 Primitives: Atomic implementations of RMSNorm, Multi-head Attention, and MLP blocks, following the GPT-2 lineage with modernizations like ReLU. The Adam Optimizer: A pure Python version of the Adam optimizer, proving that the "magic" of training is just well-orchestrated calculus. The Shift to the Edge: Privacy, Latency, and Power For my doctoral research at Woxsen University, this codebase serves as a blueprint for the future of Edge AI. As we move away from centralized, massive server farms, the ability to run "atomic" LLMs directly on hardware is becoming a strategic necessity. Karpathy's implementation provides empirical clarity on how we can incorporate on-device MicroGPTs to solve three critical industry challenges: Better Latency: By eliminating the round-trip to the cloud, on-device models enable real-time inference. Understanding these 243 lines allows researchers to optimize the "atomic" core specifically for edge hardware constraints. Data Protection & Privacy: In a world where data is the new currency, processing information locally on the user's device ensures that sensitive inputs never leave the personal ecosystem, fundamentally aligning with modern data sovereignty standards. Mastering the Primitives: For Technical Product Managers, this project proves that "intelligence" doesn't require a dependency-heavy stack. We can now envision lightweight, specialized agents that are fast, private, and highly efficient. Karpathy’s work reminds us that to build the next generation of private, edge-native AI products, we must first master the fundamentals that fit on a single screen of code. The future is moving toward decentralized, on-device intelligence built on these very primitives. Link: https://bit.ly/3ODXJfM February 13, 2026 at 03:38AM
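
To make the autograd point concrete, here is a compressed sketch of the kind of `Value` class the post describes: a scalar that remembers how it was made and back-propagates via the chain rule. Karpathy's actual implementation covers more operations and uses topological ordering; this per-path recursion is just the core idea.

```python
# Scalar autograd sketch: each Value records a local backward rule.
# Compressed illustration of the idea, not microgpt's actual class.
class Value:
    def __init__(self, data):
        self.data, self.grad = data, 0.0
        self._grad_fn = None   # maps upstream grad -> (child, grad) pairs

    def __add__(self, other):
        out = Value(self.data + other.data)
        out._grad_fn = lambda g: [(self, g), (other, g)]
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data)
        out._grad_fn = lambda g: [(self, g * other.data), (other, g * self.data)]
        return out

    def backward(self, g=1.0):
        self.grad += g
        if self._grad_fn:
            for child, child_g in self._grad_fn(g):
                child.backward(child_g)

x, w = Value(3.0), Value(-2.0)
loss = x * w + x   # d(loss)/dx = w + 1 = -1, d(loss)/dw = x = 3
loss.backward()
print(x.grad, w.grad)  # -1.0 3.0
```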

Show HN: WebExplorer – a tool to preview files in the browser https://bit.ly/3ZzZk8N

Show HN: WebExplorer – a tool to preview files in the browser https://bit.ly/3ODVglw February 13, 2026 at 03:10AM

Wednesday, 11 February 2026

Show HN: Double blind entropy using Drand for verifiably fair randomness https://bit.ly/461dPpO

Show HN: Double blind entropy using Drand for verifiably fair randomness The only way to get a trustless random value is to have it distributed and time-locked three ways: player, server, and future entropy. In the demo above, the moment you commit (Roll Dice), a commit with the hash of a player secret is sent to the server; the server accepts it and sends back the hash of its own secret plus the "future" drand round number at which the randomness will resolve. The demo uses a 10-second future window. When the reveal happens (after drand's particular round) all the secrets are revealed and the random number is generated from "player-seed:server-seed:drand-signature". All the verification is pure math, so it's truly trustless: 1. The player seed must match the committed player hash. 2. The server seed must match the committed server hash. 3. The drand signature is publicly unavailable at the time of commit and available at the time of reveal (time-locked). 4. The random number generated is deterministic after the event, and unknown and unpredictable before the event. 5. No party can influence the final outcome; in particular, no one gets a "last-look" advantage. I think this should be used in all games, online lottery/gambling and other systems which want to be fair by design, not by trust. https://bit.ly/4rP3SEv February 12, 2026 at 03:10AM
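
A sketch of the commit-reveal-plus-drand flow described above. The seeds and signature string are stand-ins; a real verifier would also validate the drand round's BLS signature against the drand network's public key.

```python
import hashlib

def h(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Commit phase: both sides publish hashes only, plus a future round number.
player_seed, server_seed = "player-secret-42", "server-secret-7"
player_hash, server_hash = h(player_seed), h(server_seed)
target_round = 1234567  # drand round that does not exist yet

# Reveal phase (after the round): secrets plus the now-public signature.
drand_signature = "a3f...stand-in-for-round-1234567-signature"

assert h(player_seed) == player_hash   # check 1: player seed matches commit
assert h(server_seed) == server_hash   # check 2: server seed matches commit
# check 3 (omitted here): verify drand_signature for target_round

combined = f"{player_seed}:{server_seed}:{drand_signature}"
roll = int(h(combined), 16) % 6 + 1    # deterministic, verifiable d6 result
print(roll)
```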

Show HN: CodeRLM – Tree-sitter-backed code indexing for LLM agents https://bit.ly/3Mt9THH

Show HN: CodeRLM – Tree-sitter-backed code indexing for LLM agents I've been building a tool that changes how LLM coding agents explore codebases, and I wanted to share it along with some early observations. Typically Claude Code globs directories, greps for patterns, and reads files with minimal guidance. It works in kind of the same way you'd learn to navigate a city by walking every street. You'll eventually build a mental map, but Claude never does - at least not any that persists across different contexts. The Recursive Language Models paper from Zhang, Kraska, and Khattab at MIT CSAIL introduced a cleaner framing. Instead of cramming everything into context, the model gets a searchable environment. The model can then query just for what it needs and can drill deeper where needed. coderlm is my implementation of that idea for codebases. A Rust server indexes a project with tree-sitter, builds a symbol table with cross-references, and exposes an API. The agent queries for structure, symbols, implementations, callers, and grep results — getting back exactly the code it needs instead of scanning for it. The agent workflow looks like: 1. `init` — register the project, get the top-level structure 2. `structure` — drill into specific directories 3. `search` — find symbols by name across the codebase 4. `impl` — retrieve the exact source of a function or class 5. `callers` — find everything that calls a given symbol 6. `grep` — fall back to text search when you need it This replaces the glob/grep/read cycle with index-backed lookups. The server currently supports Rust, Python, TypeScript, JavaScript, and Go for symbol parsing, though all file types show up in the tree and are searchable via grep. It ships as a Claude Code plugin with hooks that guide the agent to use indexed lookups instead of native file tools, plus a Python CLI wrapper with zero dependencies. For anecdotal results, I ran the same prompt against a codebase to "explore and identify opportunities to clarify the existing structure". Using coderlm, Claude was able to generate a plan in about 3 minutes. The coderlm-enabled instance found a genuine bug (duplicated code with identical names), orphaned code for cleanup, mismatched naming conventions crossing module boundaries, and overlapping vocabulary. These are all semantic issues which clearly benefit from the tree-sitter-centric approach. Using the native tools, Claude was able to identify various file clutter in the root of the project, out-of-date references, and a migration timestamp collision. These findings are more consistent with methodical walks of the filesystem and took about 8 minutes to produce. The indexed approach did better at catching semantic issues than native tools and had a key benefit in being faster to resolve. I've spent some effort to streamline the installation process, but it isn't turnkey yet. You'll need the Rust toolchain to build the server, which runs as a separate process. Installing the plugin from a Claude marketplace is possible, but the skill isn't added to your .claude yet, so there are some manual steps to get to a point where Claude could use it. Claude also shows significant resistance to using CodeRLM in exploration tasks; typically you will need to explicitly direct it to use the tool.
--- Repo: github.com/JaredStewart/coderlm Paper: Recursive Language Models https://bit.ly/4rG4RXf — Zhang, Kraska, Khattab (MIT CSAIL, 2025) Inspired by: https://bit.ly/3MrxwAn https://bit.ly/4rOWKIf February 11, 2026 at 02:10PM
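
As a hypothetical walk through the six-step workflow above against a local coderlm server, something like the following. The endpoint paths, port, and response shape are guesses modeled on the verb names in the post, not the documented API.

```python
import json
import urllib.parse
import urllib.request

# Assumed base URL and endpoint names (one per workflow verb); the real
# server's API may differ. Treat this as pseudocode that happens to run.
BASE = "http://localhost:8080"

def call(verb: str, **params) -> dict:
    url = f"{BASE}/{verb}?{urllib.parse.urlencode(params)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

call("init", project="/home/me/myrepo")           # 1. register + top structure
call("structure", path="src/db")                  # 2. drill into a directory
hits = call("search", symbol="parse_config")      # 3. find the symbol
call("impl", symbol=hits["results"][0]["id"])     # 4. exact source of it
call("callers", symbol=hits["results"][0]["id"])  # 5. everything that calls it
call("grep", pattern="TODO")                      # 6. text-search fallback
```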

Show HN: Agent framework that generates its own topology and evolves at runtime https://bit.ly/4ky6Omu

Show HN: Agent framework that generates its own topology and evolves at runtime Hi HN, I’m Vincent from Aden. We spent 4 years building ERP automation for construction (PO/invoice reconciliation). We had real enterprise customers but hit a technical wall: chatbots aren't for real work. Accountants don't want to chat; they want the ledger reconciled while they sleep. They want services, not tools. Existing agent frameworks (LangChain, AutoGPT) failed in production - brittle, looping, and unable to handle messy data. General Computer Use (GCU) frameworks were even worse. My reflections: 1. The "Toy App" Ceiling & GCU Trap Most frameworks assume synchronous sessions. If the tab closes, state is lost. You can't fit 2 weeks of asynchronous business state into an ephemeral chat session. The GCU hype (agents "looking" at screens) is skeuomorphic. It’s slow (screenshots), expensive (tokens), and fragile (UI changes = crash). It mimics human constraints rather than leveraging machine speed. Real automation should be headless. 2. Inversion of Control: OODA > DAGs Traditional DAGs are deterministic; if a step fails, the program crashes. In the AI era, the Goal is the law, not the Code. We use an OODA loop to manage stochastic behavior: - Observe: Exceptions are observations (FileNotFound = new state), not crashes. - Orient: Adjust strategy based on Memory and Traits. - Decide: Generate new code at runtime. - Act: Execute. The topology shouldn't be hardcoded; it should emerge from the task's entropy. 3. Reliability: The "Synthetic" SLA You can't guarantee one inference ($k=1$) is correct, but you can guarantee a System of Inference ($k=n$) converges on correctness. Reliability is now a function of compute budget. By wrapping an 80% accurate model in a "Best-of-3" verification loop, we mathematically force the error rate down—trading latency/tokens for certainty. 4. Biology & Psychology in Code "Hard Logic" can't solve "Soft Problems." We map cognition to architectural primitives: Homeostasis: Solving "perseveration" (infinite loops) via a "Stress" metric. If an action fails 3x, "neuroplasticity" drops, forcing a strategy shift. Traits: Personality as a constraint. "High Conscientiousness" increases verification; "High Risk" executes DROP TABLE without asking. For the industry, we need engineers interested in the intersection of biology, psychology, and distributed systems to help us move beyond brittle scripts. It'd be great to have you roasting my code and sharing feedback. Repo: https://bit.ly/4rjN60f https://bit.ly/4612RRa February 11, 2026 at 08:39PM
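
A worked version of the best-of-3 claim in point 3, assuming independent errors and a majority vote (the post doesn't specify the voting rule, so that part is an assumption): with $p = 0.8$ per call, the system is right whenever at least 2 of 3 calls agree on the correct answer.

```python
from math import comb

# Majority-of-3 reliability for a p-accurate model, assuming independent
# errors. Voting rule is an assumption; the post only says "Best-of-3".
p = 0.8
best_of_3 = sum(comb(3, k) * p**k * (1 - p)**(3 - k) for k in (2, 3))
print(f"single call: {1 - p:.0%} error, best-of-3: {1 - best_of_3:.1%} error")
# single call: 20% error, best-of-3: 10.4% error
```

So tripling the compute budget roughly halves the error rate here, which is the latency/tokens-for-certainty trade the post describes.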

Tuesday, 10 February 2026

Show HN: Yan – Glitch Art Photo/Live Editor https://bit.ly/4rOHLhr

Show HN: Yan – Glitch Art Photo/Live Editor Everything evolves in digitality, and deconstructs in logic. Tired of filters that make everyone look like a glazed donut? Same. Yan is not another beauty app. It's a digital chaos engine that treats your pixels like they owe it money. We don't enhance photos — we interrogate them at the binary level until they confess their true nature. [What We Actually Do] • Luma Stretch: Grab your image by its light and shadow, then yeet it into oblivion. Speed lines included. • Pixel Sort: Let gravity do art. Pixels fall, colors cascade, Instagram influencers cry. • RGB Shift: That drunk 3D glasses effect, but on purpose. Your eyes will thank us. Or sue us. • Block Jitter: Ctrl+Z had a nightmare. This is what it dreamed. [Why Yan?] Because "vintage filter #47" is not a personality. Because glitch is not a bug — it's a feature. Because sometimes the most beautiful thing you can do to a photo is break it. Warning: Side effects may include artistic awakening, filter addiction withdrawal, and an uncontrollable urge to deconstruct everything. Your camera roll will never be boring again. https://bit.ly/46lFOkn February 11, 2026 at 05:19AM
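
For anyone curious what the "Pixel Sort" effect above means mechanically, here is a toy sketch: within each row, runs of bright pixels get sorted by luminance so colors appear to cascade. The threshold and run logic are illustrative; Yan's actual implementation (direction, live preview, tuning) is its own thing.

```python
# Toy pixel sort on a list-of-tuples "row". Illustrative only.
def luma(px):  # px = (r, g, b), ITU-R BT.709 weights
    r, g, b = px
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def pixel_sort_row(row, threshold=100):
    out, run = [], []
    for px in row + [None]:                    # None flushes the final run
        if px is not None and luma(px) > threshold:
            run.append(px)                     # extend the bright run
        else:
            out.extend(sorted(run, key=luma))  # sort and flush the run
            run = []
            if px is not None:
                out.append(px)                 # dark pixel passes through
    return out

row = [(240, 240, 40), (40, 200, 240), (10, 10, 10), (250, 10, 10)]
print(pixel_sort_row(row))  # bright run reordered, dark pixels untouched
```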

Show HN: Model Training Memory Simulator https://bit.ly/4tuFsBI

Show HN: Model Training Memory Simulator https://bit.ly/46lDeLd February 8, 2026 at 10:39AM

Show HN: I vibecoded 177 tools for my own use (CalcBin) https://bit.ly/4tMlOS3

Show HN: I vibecoded 177 tools for my own use (CalcBin) Hey HN! I've been building random tools whenever I needed them over the past few months, and now I have 177 of them. Started because I was tired of sketchy converter sites with 10 ads, so I just... made my own. Some highlights for the dev crowd: Developer tools: - UUID Generator (v1/v4/v7, bulk generation): https://bit.ly/4qNOvLH - JWT Generator & Decoder: https://bit.ly/4aHg9ot - JSON Formatter/Validator: https://bit.ly/4aJfF1a - Cron Expression Generator (with natural language): https://bit.ly/3MeEZCZ - Base64 Encoder/Decoder: https://bit.ly/4ra9OI2 - Regex Tester: https://bit.ly/4reyteL - SVG Optimizer (SVGO-powered, client-side): https://bit.ly/4ttUvLP Fun ones: - Random Name Picker (spin wheel animation): https://bit.ly/45YUUvM - QR Code Generator: https://bit.ly/45UvIXq Everything runs client-side (Next.js + React), no ads, no tracking, works offline. Built it for myself but figured others might find it useful. Browse all tools: https://bit.ly/4aHy9z4 Tech: Next.js 14 App Router, TypeScript, Tailwind, Turborepo monorepo. All open to feedback! https://bit.ly/4tvp9o4 February 11, 2026 at 03:46AM

Show HN: ArtisanForge: Learn Laravel through a gamified RPG adventure https://bit.ly/4apgIls

Show HN: ArtisanForge: Learn Laravel through a gamified RPG adventure Hey HN, I built ArtisanForge, a free platform to learn PHP and Laravel through a medieval-fantasy RPG. Instead of traditional tutorials, you progress through kingdoms, solve coding exercises in a browser editor, earn XP, join guilds, and fight boss battles. Tech stack: Laravel 12, Livewire 3, Tailwind CSS, Alpine.js. Code execution runs sandboxed via php-wasm in the browser. What's in there: - 12 courses across 11 kingdoms (PHP basics to deployment) - 100+ interactive exercises with real-time code validation using AST analysis - AI companion (Pip the Owlox) that uses Socratic method – never gives direct answers - Full gamification: XP, levels, streaks, achievements, guilds, leaderboard - Multilingual (EN/FR/NL) The idea came from seeing too many beginners drop off traditional courses. Wrapping concepts in quests and progression mechanics keeps motivation high without dumbing down the content. Everything is free, no paywall, no premium tier. Feedback welcome – especially from Laravel devs and educators. https://bit.ly/3O6FZtr February 8, 2026 at 08:15AM

Monday, 9 February 2026

Show HN: I built a cloud hosting for OpenClaw https://bit.ly/4r4v5CX

Show HN: I built a cloud hosting for OpenClaw Yet another OpenClaw wrapper. But I really enjoyed the techy part of this project, especially the server provisioning in the background. https://bit.ly/4a6nhe2 February 9, 2026 at 11:39PM

Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust https://bit.ly/4aGPJDq

Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust Fish is the fastest, friendliest interactive shell, but it can't run bash syntax, which has kept it niche for 20 years. Reef fixes this with a three-tier approach: fish function wrappers for common keywords (export, unset, source), a Rust-powered AST translator using conch-parser for structural syntax (for/do/done, if/then/fi, $()), and a bash passthrough with env capture for everything else. 251/251 bash constructs pass in the test suite. The slowest path (full bash passthrough) takes ~3ms. The binary is 1.18MB. The goal: install fish, install reef, never think about bash compatibility again. Your muscle memory, Stack Overflow commands, and tool configs all just work. https://bit.ly/3O6BYFp February 10, 2026 at 12:44AM
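
A toy sketch of the three-tier dispatch idea described above, to make the architecture concrete. The regexes and tier boundaries are illustrative assumptions, not Reef's actual logic.

```python
import re
import subprocess

# Tier 1: common keywords get fish-function wrappers; tier 2: structural
# bash syntax gets AST-translated; tier 3: everything else runs under
# real bash. Patterns here are illustrative, not Reef's.
WRAPPER_KEYWORDS = re.compile(r"^\s*(export|unset|source)\b")
STRUCTURAL = re.compile(r"\b(do|done|then|fi)\b|\$\(")

def run_bashism(line: str) -> str:
    if WRAPPER_KEYWORDS.match(line):
        return "tier 1: fish function wrapper"
    if STRUCTURAL.search(line):
        return "tier 2: AST-translate to fish syntax"
    subprocess.run(["bash", "-c", line])  # real shim would also capture env diffs
    return "tier 3: bash passthrough with env capture"

print(run_bashism("export FOO=1"))
print(run_bashism("for i in 1 2 3; do echo $i; done"))
print(run_bashism("echo hello"))
```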

Sunday, 8 February 2026

Show HN: Ported the 1999 game Bugdom to the browser and added a bunch of mods https://bit.ly/4klu5b4

Show HN: Ported the 1999 game Bugdom to the browser and added a bunch of mods I think the very first video game I ever played was Bugdom by Pangea Software, which came with the original iMac. There was also a shooter called Nanosaur, but my 7-year-old heart belonged to the more peaceable Bugdom, which featured a roly-poly named Rollie McFly needing to rescue ladybugs from evil fire ants and bees. Upon seeing the port to modern systems ( https://bit.ly/4tuDWzI ), I figured it should be able to run entirely in-browser nowadays, and also that AI coding tools "should" be able to do this entire project for me. I ended up spending perhaps 20 hours on it with Claude Code, but we got there. Once ported, I added a half-dozen mods that would have pleased my childhood self (like low-gravity mode and flying slugs & caterpillars mode), and a few that please my current self (like Dance Party mode). EDIT: Here are some mod/level combinations I recommend * https://bit.ly/3Msk7Iq... * https://bit.ly/4rJLFbr... * https://bit.ly/4a49Aw6... https://bit.ly/4rvsdPe February 9, 2026 at 04:07AM

Show HN: IsHumanCadence – Bot detection via keystroke dynamics (no CAPTCHAs) https://bit.ly/3Zp5E2U

Show HN: IsHumanCadence – Bot detection via keystroke dynamics (no CAPTCHAs) https://bit.ly/4agbc4I February 9, 2026 at 01:40AM

Show HN: A custom font that displays Cistercian numerals using ligatures https://bit.ly/4to51Er

Show HN: A custom font that displays Cistercian numerals using ligatures https://bit.ly/4twswLN February 8, 2026 at 11:39PM

Saturday, 7 February 2026

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory https://bit.ly/4a31tzU

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory I built LocalGPT over 4 nights as a Rust reimagining of the OpenClaw assistant pattern (markdown-based persistent memory, autonomous heartbeat tasks, skills system). It compiles to a single ~27MB binary — no Node.js, Docker, or Python required. Key features: - Persistent memory via markdown files (MEMORY, HEARTBEAT, SOUL markdown files) — compatible with OpenClaw's format - Full-text search (SQLite FTS5) + semantic search (local embeddings, no API key needed) - Autonomous heartbeat runner that checks tasks on a configurable interval - CLI + web interface + desktop GUI - Multi-provider: Anthropic, OpenAI, Ollama etc - Apache 2.0 Install: `cargo install localgpt` I use it daily as a knowledge accumulator, research assistant, and autonomous task runner for my side projects. The memory compounds — every session makes the next one better. GitHub: https://bit.ly/3O2Lc5C Website: https://bit.ly/4a31uDY Would love feedback on the architecture or feature ideas. https://bit.ly/3O2Lc5C February 8, 2026 at 02:26AM

Show HN: More beautiful and usable Hacker News https://bit.ly/4r4GfaX

Show HN: More beautiful and usable Hacker News Gives you keyboard navigation. Let me know what you think. https://twitter.com/shivamhwp/status/2020125417995436090 February 8, 2026 at 02:33AM

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals https://bit.ly/3ZsfPnk

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals https://bit.ly/4akLMmz February 7, 2026 at 11:40PM

Friday, 6 February 2026

Show HN: FastLog: 1.4 GB/s text file analyzer with AVX2 SIMD https://bit.ly/3OqOAXQ

Show HN: FastLog: 1.4 GB/s text file analyzer with AVX2 SIMD https://bit.ly/4qpevwB February 7, 2026 at 02:32AM

Show HN: A Kubernetes Operator to Validate Jupyter Notebooks in MLOps https://bit.ly/4rAX8tG

Show HN: A Kubernetes Operator to Validate Jupyter Notebooks in MLOps I built an open-source Kubernetes operator to automate the validation of Jupyter Notebooks in MLOps workflows. It's called the Jupyter Notebook Validator Operator and it's designed to catch issues with notebooks before they hit production. It runs notebooks in isolated pods and can validate them against deployed ML models on platforms like KServe, OpenShift AI, and vLLM. It also does regression testing by comparing notebook outputs against a "golden" version. The goal is to make notebooks more reliable and reproducible in production environments. It's built with Go and the Operator SDK. We're looking for contributors. There are opportunities to work on features like smarter error reporting, observability dashboards, and adding support for more platforms. GitHub: https://bit.ly/3ObO3Jf... https://bit.ly/46dId0r February 7, 2026 at 01:10AM
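
To illustrate the golden-notebook regression idea, here is a sketch comparing a run's text outputs against a known-good version with nbformat. The paths are examples; the operator does this inside isolated pods rather than on a local filesystem.

```python
import nbformat  # pip install nbformat

# Compare each code cell's stream/text output against a golden run.
# Paths are examples; real comparisons may also diff rich outputs.
def text_outputs(path: str) -> list[str]:
    nb = nbformat.read(path, as_version=4)
    outs = []
    for cell in nb.cells:
        if cell.cell_type != "code":
            continue
        outs.append("".join(out.get("text", "") for out in cell.get("outputs", [])))
    return outs

golden = text_outputs("golden/inference.ipynb")
candidate = text_outputs("runs/inference.ipynb")
for i, (g, c) in enumerate(zip(golden, candidate)):
    if g != c:
        print(f"cell {i} regressed:\n--- golden ---\n{g}\n--- got ---\n{c}")
```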

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly https://bit.ly/3O1GwNh

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly https://bit.ly/3MaBiOI February 6, 2026 at 11:19PM

Thursday, 5 February 2026

Show HN: Calfkit – an SDK to build distributed, event-driven AI agents https://bit.ly/4ru0u1n

Show HN: Calfkit – an SDK to build distributed, event-driven AI agents I think agents should work like real teams, with independent, distinct roles, async communication, and the ability to onboard new teammates or tools without restructuring the whole org. I built backend systems at Yahoo and TikTok so event-driven agents felt obvious. But no agent SDKs were using this pattern, so I made Calfkit. Calfkit breaks down agents into independent services (LLM inference, tools, and routing) that communicate asynchronously through Kafka. Agents, tool services, and downstream consumers can be deployed, added-to, removed, and scaled independently. Check it out if this interests you! I’m curious to see what y’all think. https://bit.ly/4tlDXpq February 6, 2026 at 12:10AM

Show HN: Total Recall – write-gated memory for Claude Code https://bit.ly/4rbjqCr

Show HN: Total Recall – write-gated memory for Claude Code https://bit.ly/4rbjqSX February 6, 2026 at 12:56AM

Show HN: A state-based narrative engine for tabletop RPGs https://bit.ly/3ZoN2Qy

Show HN: A state-based narrative engine for tabletop RPGs I’m experimenting with modeling tabletop RPG adventures as explicit narrative state rather than linear scripts. Everdice is a small web app that tracks conditional scenes and choice-driven state transitions to preserve continuity across long or asynchronous campaigns. The core contribution is explicit narrative state and causality, not automation. The real heavy lifting is happening in the DM Toolkit/Run Sessions area, and integrates CAML (Canonical Adventure Modeling Language) that I developed to transport narratives among any number of platforms. I also built the npm CAML-lint to check validity of narratives. I'm interested in your thoughts. https://bit.ly/4rstuXo https://bit.ly/4khOu0B February 5, 2026 at 11:55PM

Wednesday, 4 February 2026

Show HN: LLM Jailbreak Database https://bit.ly/45MlF6B

Show HN: LLM Jailbreak Database I vibe-coded this online DB for LLM injection prompts. It's registration- and login-free, with some ambitious spam/bot filtering. I'm interested in trying to tune the barriers of interaction to a sweet spot where the DB stays balanced and the useful, working injections are actually on top. Thoughts? https://bit.ly/3ZiszwQ February 4, 2026 at 11:07PM

Show HN: Bunqueue – Job queue for Bun using SQLite instead of Redis https://bit.ly/4qgqo7S

Show HN: Bunqueue – Job queue for Bun using SQLite instead of Redis https://bit.ly/46oMxtB February 2, 2026 at 02:55AM

Show HN: The Last Worm – Visualizing guinea worm eradication, from 3.5M to 10 https://bit.ly/4klzH5l

Show HN: The Last Worm – Visualizing guinea worm eradication, from 3.5M to 10 https://bit.ly/3Mn1A08 February 4, 2026 at 11:58PM

Tuesday, 3 February 2026

Show HN: Craftplan – I built my wife a production management tool for her bakery https://bit.ly/4rv9Aeq

Show HN: Craftplan – I built my wife a production management tool for her bakery My wife was planning to open a micro-bakery. We looked at production management software and it was all either expensive or way too generic. The actual workflows for a small-batch manufacturer aren't that complex, so I built one and open-sourced it. Craftplan handles recipes (versioned BOMs with cost rollups), inventory (lot traceability, demand forecasting, allergen tracking), orders, production batch planning, and purchasing. Built with Elixir, Ash Framework, Phoenix LiveView, and PostgreSQL. Live demo: https://bit.ly/3O3vU0c (test@test.com / Aa123123123123) GitHub: https://bit.ly/4ryZ6ec https://bit.ly/4ryZ6ec February 1, 2026 at 06:25PM
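
A sketch of the "versioned BOMs with cost rollups" idea: a recipe's cost is the sum of its ingredients' costs, where an ingredient may itself be a sub-recipe. The data is made up, and Craftplan models this in Ash/Postgres rather than dicts.

```python
# Recursive BOM cost rollup. Illustrative data, not Craftplan's schema.
ingredient_cost = {"flour_kg": 0.90, "butter_kg": 7.50, "sugar_kg": 1.20}

recipes = {
    "puff_pastry_kg": {"flour_kg": 0.55, "butter_kg": 0.40, "sugar_kg": 0.05},
    "palmier_batch": {"puff_pastry_kg": 2.0, "sugar_kg": 0.3},
}

def rollup(item: str) -> float:
    if item in ingredient_cost:                  # raw material: price list
        return ingredient_cost[item]
    return sum(qty * rollup(part)                # sub-recipe: recurse
               for part, qty in recipes[item].items())

print(f"{rollup('palmier_batch'):.2f}")  # 7.47 per batch with this data
```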

Show HN: I built an AI twin recruiters can interview https://bit.ly/4rq48t9

Show HN: I built an AI twin recruiters can interview https://bit.ly/4aedLV6 The problem: Hiring new grads is broken. Thousands of identical resumes, but we're all different people. Understanding someone takes time - assessments, phone screens, multiple interviews. Most never get truly seen. I didn't want to be just another PDF. So I built an AI twin that recruiters can actually interview. What you can do: •Interview my AI about anything: https://bit.ly/4rmCT2A •Paste your JD to see if we match: https://bit.ly/4rojOgy •Explore my projects, code, and writing What happened: Sent it to one recruiter on LinkedIn. Next day, traffic spiked as it spread internally. Got interview invites within 24 hours. The bigger vision: What if this became standard? Instead of resume spam → keyword screening → interview rounds that still miss good fits, let recruiter AI talk to candidate AI for deep discovery. Build a platform where anyone can create their AI twin for genuine matching. I'm seeking Software/AI/ML Engineering roles and can build production-ready solutions from scratch. The site itself proves what I can do. Would love HN's thoughts on both the execution and the vision. https://bit.ly/4aedLV6 February 4, 2026 at 12:19AM

Monday, 2 February 2026

Show HN: Axiomeer – An open marketplace for AI agents https://bit.ly/4aykLfJ

Show HN: Axiomeer – An open marketplace for AI agents Hi, I built Axiomeer, an open-source marketplace protocol for AI agents. The idea: instead of hardcoding tool integrations into every agent, agents shop a catalog at runtime, and the marketplace ranks, executes, validates, and audits everything. How it works: - Providers publish products (APIs, datasets, model endpoints) via 10-line JSON manifests - Agents describe what they need in natural language or structured tags - The router scores all options by capability match (70%), latency (20%), cost (10%) with hard constraint filters - The top pick is executed, output is validated (citations required? timestamps?), and evidence quality is assessed deterministically - If the evidence is mock/fake/low-quality, the agent abstains rather than hallucinating - Every execution is logged as an immutable receipt The trust layer is the part I think is missing from existing approaches. MCP standardizes how you connect to a tool server. Axiomeer operates one layer up: which tool, from which provider, and can you trust what came back? Stack: Python, FastAPI, SQLAlchemy, Ollama (local LLM, no API keys). v1 ships with weather providers (Open-Meteo + mocks). The architecture supports any HTTP endpoint that returns structured JSON. Looking for contributors to add real providers across domains (finance, search, docs, code execution). Each provider is ~30 lines + a manifest. https://bit.ly/4byfTsV February 3, 2026 at 01:43AM
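
A sketch of the 70/20/10 routing score described above, with hard constraint filters applied before scoring. The provider numbers and normalization are made up; Axiomeer's real scorer lives behind its FastAPI router.

```python
# Weighted router score: 70% capability match, 20% latency, 10% cost,
# after hard constraint filters. Normalization constants are assumptions.
def score(p, max_latency_ms=None, max_cost=None):
    if max_latency_ms is not None and p["latency_ms"] > max_latency_ms:
        return None                          # hard constraint: filtered out
    if max_cost is not None and p["cost"] > max_cost:
        return None
    latency_score = 1.0 - min(p["latency_ms"] / 2000, 1.0)
    cost_score = 1.0 - min(p["cost"] / 0.10, 1.0)
    return 0.7 * p["capability_match"] + 0.2 * latency_score + 0.1 * cost_score

providers = [
    {"name": "open-meteo", "capability_match": 0.9, "latency_ms": 300, "cost": 0.0},
    {"name": "mock-weather", "capability_match": 0.95, "latency_ms": 50, "cost": 0.05},
]
ranked = sorted((s, p["name"]) for p in providers
                if (s := score(p, max_cost=0.02)) is not None)
print(ranked[-1])  # best provider that survived the constraint filters
```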

Show HN: Kannada Nudi Editor Web Version https://bit.ly/4aclQcT

Show HN: Kannada Nudi Editor Web Version Ported the Desktop Version of Kannada Nudi Editor to Web under the guidance of https://bit.ly/4a4vbER https://bit.ly/49W412S February 3, 2026 at 05:11AM

Show HN: Stream-based AI with neurological multi-gate (Na⁺/θ/NMDA) https://bit.ly/49UI6cb

Show HN: Stream-based AI with neurological multi-gate (Na⁺/θ/NMDA) Current LLMs struggle with compositional inference because they lack physical boundaries. CSCT implements a neurological multi-gate mechanism (Na⁺/θ/NMDA) to enforce L1 geometry and physical grounding. In my experiments (EX8/9), this architecture achieved 96.7% success in compositional inference within the convex hull—far outperforming unconstrained models. Key features: Stream-based: No batching or static context; it processes information as a continuous flow. Neurological Gating: Computational implementation of θ-γ coupling using Na⁺ and NMDA-inspired gates. Zero-shot Reasoning: Incurs no "hallucination" for in-hull compositions. Detailed technical write-up: [ https://bit.ly/4kds5BD... ] I’m eager to hear your thoughts on this "Projected Dynamical System" approach to cognition. https://bit.ly/4kds5S9 February 3, 2026 at 03:59AM

Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed https://bit.ly/3Ois03A

Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed A few weeks ago I posted about GoodToGo https://bit.ly/4pI0dXu - a tool that gives AI agents a deterministic answer to "is this PR ready to merge?" Several people asked about the larger orchestration system I mentioned. This is that system. I got tired of being a project manager for Claude Code. It writes code fine, but shipping production code is seven or eight jobs — research, planning, design review, implementation, code review, security audit, PR creation, CI babysitting. I was doing all the coordination myself. The agent typed fast. I was still the bottleneck. What I really needed was an orchestrator of orchestrators - swarms of swarms of agents with deterministic quality checks. So I built metaswarm. It breaks work into phases and assigns each to a specialist swarm orchestrator. It manages handoffs and uses BEADS for deterministic gates that persist across /compact, /clear, and even across sessions. Point it at a GitHub issue or brainstorm with it (it uses Superpowers to ask clarifying questions) and it creates epics, tasks, and dependencies, then runs the full pipeline to a merged PR - including outside code review like CodeRabbit, Greptile, and Bugbot. The thing that surprised me most was the design review gate. Five agents — PM, Architect, Designer, Security, CTO — review every plan in parallel before a line of code gets written. All five must approve. Three rounds max, then it escalates to a human. I expected a rubber stamp. It catches real design problems, dependency issues, security gaps. This weekend I pointed it at my backlog. 127 PRs merged. Every one hit 100% test coverage. No human wrote code, reviewed code, or clicked merge. OK, I guided it a bit, mostly helping with plans for some of the epics. A few learnings: Agent checklists are theater. Agents skipped coverage checks, misread thresholds, or decided they didn't apply. Prompts alone weren't enough. The fix was deterministic gates — BEADS, pre-push hooks, CI jobs all on top of the agent completion check. The gates block bad code whether or not the agent cooperates. The agents are just markdown files. No custom runtime, no server, and while I built it on TypeScript, the agents are language-agnostic. You can read all of them, edit them, add your own. It self-reflects too. After every merged PR, the system extracts patterns, gotchas, and decisions into a JSONL knowledge base. Agents only load entries relevant to the files they're touching. The more it ships, the fewer mistakes it makes. It learns as it goes. metaswarm stands on two projects: https://bit.ly/465Uggf by Steve Yegge (git-native task tracking and knowledge priming) and https://bit.ly/4tg1fwL by Jesse Vincent (disciplined agentic workflows — TDD, brainstorming, systematic debugging). Both were essential. Background: I founded Technorati, Linuxcare, and Warmstart; tech exec at Lyft and Reddit. I built metaswarm because I needed autonomous agents that could ship to a production codebase with the same standards I'd hold a human team to. $ cd my-project-name $ npx metaswarm init MIT licensed. IANAL. YMMV. Issues/PRs welcome! https://bit.ly/4tcbDpg February 3, 2026 at 02:18AM

Sunday, 1 February 2026

Show HN: ContractShield – AI contract analyser for freelancers https://bit.ly/463VwR1

Show HN: ContractShield – AI contract analyser for freelancers Built this with Claude Code. Analyses freelance contracts for 12 risk categories (payment terms, IP ownership, scope issues, termination clauses, etc.) and flags problems with specific recommendations. 40% of freelancers report getting stiffed by clients, often due to vague contract terms. This tool aims to help catch those issues before signing. Currently free while validating whether this solves a real problem. Would love HN's feedback, especially on: - Accuracy of the analysis - Whether this is actually useful for freelancers - What's missing or could be improved Tech stack: Node.js, Express, Anthropic Claude API, deployed on Railway. https://bit.ly/3ZcvXJv February 2, 2026 at 04:11AM

Show HN: Is AI "good" yet? – tracking HN sentiment on AI coding https://bit.ly/4qfxfhU

Show HN: Is AI "good" yet? – tracking HN sentiment on AI coding A survey tracking developer sentiment on AI-assisted coding through Hacker News posts. https://bit.ly/4q7Kp0j February 2, 2026 at 03:06AM

Show HN: Wikipedia as a doomscrollable social media feed https://bit.ly/3Oj4jbm

Show HN: Wikipedia as a doomscrollable social media feed https://bit.ly/4rj1aXw February 2, 2026 at 01:12AM

Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation https://bit.ly/4qau5fm

Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation I’ve been running Clawdbot for the last couple weeks and have genuinely found it useful but running it scares the crap out of me. OpenClaw has 52+ modules and runs agents with near-unlimited permissions in a single Node process. NanoClaw is ~500 lines of core code, agents run in actual Apple containers with filesystem isolation. Each chat gets its own sandboxed context. This is not a swiss army knife. It’s built to match my exact needs. Fork it and make it yours. https://bit.ly/4qTY7oY February 1, 2026 at 11:49PM

Saturday, 31 January 2026

Show HN: Peptide calculators ask the wrong question. I built a better one https://bit.ly/4r36xtW

Show HN: Peptide calculators ask the wrong question. I built a better one Most peptide calculators ask the wrong question. They ask: How much water are you adding? But in practice, what you actually know is your vial size and your target dose. The water amount should be the output, not the input. It should also make your dose land on a real syringe tick mark, not something like 17.3 units. I built a peptide calculator that works this way: https://bit.ly/4r36y0Y What’s different: - You pick vial size and target dose → reconstitution is calculated for you - Doses align to actual syringe markings - Common dose presets per peptide - Works well on mobile (where this is usually done) - Supports blends and compounds (e.g. GLOW or CJC-1295 + Ipamorelin) - You can save your vials. No account required. Happy to hear feedback or edge cases worth supporting. https://bit.ly/4r36y0Y February 1, 2026 at 03:02AM
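The arithmetic behind "water as the output" is simple once you fix the draw; a minimal sketch, assuming a U-100 syringe (100 units = 1 mL) and names invented for illustration, not the site's code:

  // Pick a tick-aligned draw (in units), then solve for the water volume.
  function reconstitution(vialMg: number, doseMg: number, tickUnits = 5) {
    const drawUnits = Math.round(20 / tickUnits) * tickUnits; // assumption: aim near a 20-unit draw
    const waterMl = (drawUnits / 100) * (vialMg / doseMg);    // makes doseMg land exactly on drawUnits
    return { waterMl, drawUnits };
  }

  // Example: 5 mg vial, 0.25 mg dose -> add 4 mL, draw exactly 20 units per dose.
  console.log(reconstitution(5, 0.25));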

Show HN: I built a receipt processor for Paperless-ngx https://bit.ly/4tcdRFa

Show HN: I built a receipt processor for Paperless-ngx Hi all, I wanted a robust way to keep track of my receipts without needing to keep them in a box, and so I found Paperless - but the existing Paperless AI projects didn't really convert my receipts to usable data. So I created a fork of nutlope's receipthero (actually it's a complete rewrite; the only thing that remains is the system prompt). The goal of this project is to be a one-stop shop for automatically detecting tagged docs and converting them to JSON using schema definitions - that includes invoices, ... I can't think of any others right now, maybe you can? If you do, please make an issue for it! I would appreciate any feedback/issues, thanks! (P.S. I made sure it's simple to set up with Dockge/a basic docker-compose.yml) repo: https://bit.ly/4a61i5v tutorial: https://youtu.be/LNlUDtD3og0 February 1, 2026 at 01:17AM
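As a sketch of the schema-definition idea, here is the sort of typed target a "receipt" document might be extracted into; field names are invented for illustration, not the project's actual schema:

  interface ReceiptData {
    merchant: string;
    date: string;     // ISO 8601
    currency: string; // e.g. "EUR"
    items: { description: string; quantity: number; unitPrice: number }[];
    tax: number;
    total: number;
  }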

Show HN: An Open Source Alternative to Vercel/Render/Netlify https://bit.ly/49Q4u6C

Show HN: An Open Source Alternative to Vercel/Render/Netlify https://bit.ly/4kdJp9B February 1, 2026 at 01:40AM

Friday, 30 January 2026

Show HN: Foundry – Turns your repeated workflows into one-click commands https://bit.ly/4a3z77o

Show HN: Foundry – Turns your repeated workflows into one-click commands https://bit.ly/4a54r5L January 31, 2026 at 01:40AM

Show HN: Using World Models for Consistent AI Filmmaking https://bit.ly/4ab3mcH

Show HN: Using World Models for Consistent AI Filmmaking https://bit.ly/4a7AWjS January 30, 2026 at 10:41PM

Thursday, 29 January 2026

Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser) https://bit.ly/4amiBRb

Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser) Hi HN, I've been building Mystral Native — a lightweight native runtime that lets you write games in JavaScript/TypeScript using standard Web APIs (WebGPU, Canvas 2D, Web Audio, fetch) and run them as standalone desktop apps. Think "Electron for games" but without Chromium. Or a JS runtime like Node, Deno, or Bun but optimized for WebGPU (and bundling a window / event system using SDL3). Why: I originally started building a new game engine in WebGPU, and I loved the iteration loop of writing TypeScript & instantly seeing the changes in the browser with hot reloading. After getting something working and shipping a demo, I realized that shipping a whole browser doesn't really work if I also want the same codebase to work on mobile. Sure, I could use a webview, but that's not always a good or consistent experience for users - there are nuances with Safari on iOS supporting WebGPU, but not the same features that Chrome does on desktop. What I really wanted was a WebGPU runtime that is consistent & works on any platform. I was inspired by Deno's --unsafe-webgpu flag, but I realized that Deno probably wouldn't be a good fit long term because it doesn't support iOS or Android & doesn't bundle a window / event system (they have "bring your own window", but that means writing a lot of custom code for events, dealing with windowing, not to mention more specific things like implementing a WebAudio shim, etc.). So that got me down the path of building a native runtime specifically for games & that's Mystral Native. So now with Mystral Native, I can have the same developer experience (write JS, use shaders in WGSL, call requestAnimationFrame) but get a real native binary I can ship to players on any platform without requiring a webview or a browser. No 200MB Chromium runtime, no CEF overhead, just the game code and a ~25MB runtime. What it does: - Full WebGPU via Dawn (Chrome's implementation) or wgpu-native (Rust) - Native window & events via SDL3 - Canvas 2D support (Skia), Web Audio (SDL3), fetch (file/http/https) - V8 for JS (same engine as Chrome/Node), also supports QuickJS and JSC - ES modules, TypeScript via SWC - Compile to single binary (think "pkg"): `mystral compile game.js --include assets -o my-game` - macOS .app bundles with code signing, Linux/Windows standalone executables - Embedding API for iOS and Android (JSC/QuickJS + wgpu-native) It's early alpha — the core rendering path works well & I've tested on Mac, Linux (Ubuntu 24.04), and Windows 11, plus some custom builds for iOS & Android to validate that they can work, but there's plenty to improve. Would love to get some feedback and see where it can go! MIT licensed. Repo: https://bit.ly/4rmOWx5 Docs: https://bit.ly/46oiPVx https://bit.ly/4rmOWx5 January 27, 2026 at 07:33PM
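Concretely, the pitch is that standard web code like the sketch below (plain WebGPU plus requestAnimationFrame, nothing runtime-specific) is what runs natively; the pulsing clear color is just an illustrative example, not from the project's docs:

  // game.ts: standard WebGPU boilerplate that a browser would also accept.
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter!.requestDevice();
  const canvas = document.querySelector("canvas") as HTMLCanvasElement;
  const ctx = canvas.getContext("webgpu") as GPUCanvasContext;
  ctx.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });

  function frame(t: number) {
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginRenderPass({
      colorAttachments: [{
        view: ctx.getCurrentTexture().createView(),
        loadOp: "clear",
        storeOp: "store",
        clearValue: { r: 0, g: 0, b: Math.abs(Math.sin(t / 1000)), a: 1 },
      }],
    });
    pass.end();
    device.queue.submit([encoder.finish()]);
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);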

Show HN: Free Facebook Video Downloader with Original Audio Quality https://bit.ly/3NL23cV

Show HN: Free Facebook Video Downloader with Original Audio Quality A free, web-based Facebook video downloader that actually preserves the original audio - something most Facebook downloaders fail to do. Built with Next.js and yt-dlp, it offers a clean, no-ads experience for downloading Facebook videos in multiple qualities. https://bit.ly/4t8AgDo January 30, 2026 at 03:22AM

Show HN: Play Zener Cards https://bit.ly/4rkUJTN

Show HN: Play Zener Cards just play zener cards. don't judge :) https://bit.ly/4rhsS6P January 30, 2026 at 01:39AM

Wednesday, 28 January 2026

Show HN: Codex.nvim – Codex inside Neovim (no API key required) https://bit.ly/4aq7cin

Show HN: Codex.nvim – Codex inside Neovim (no API key required) Hi HN! I built codex.nvim, an IDE-style Neovim integration for Codex. Highlights: - Works with OpenAI Codex plans (no API key required) - Fully integrated in Neovim (embedded terminal workflow) - Bottom-right status indicator shows busy/wait state - Send selections or file tree context to Codex quickly Repo: https://bit.ly/46kNNhf Why I built this: I wanted to use Codex comfortably inside Neovim without relying on the API. Happy to hear feedback and ideas! https://bit.ly/46kNNhf January 29, 2026 at 07:17AM

Show HN: Shelvy Books https://bit.ly/4aivwDI

Show HN: Shelvy Books Hey HN! I built a little side project I wanted to share. Shelvy is a free, visual bookshelf app where you can organize books you're reading, want to read, or have finished. Sign in to save your own collection. Not monetized, no ads, no tracking beyond basic auth. Just a fun weekend project that grew a bit. Live: https://bit.ly/45yNLSL Would love any feedback on the UX or feature ideas! https://bit.ly/45yNLSL January 29, 2026 at 02:16AM

Show HN: Drum machine VST made with React/C++ https://bit.ly/45FQ6eK

Show HN: Drum machine VST made with React/C++ Hi HN! We just launched our drum machine VST this month! We will be updating it with many new synthesis models and unique features. Check it out, join our Discord, and show us what you made! https://bit.ly/49YmzOv January 27, 2026 at 06:03AM

Show HN: Frame – Managing projects, tasks, and context for Claude Code https://bit.ly/4rcuAqe

Show HN: Frame – Managing projects, tasks, and context for Claude Code I built Frame to better manage the projects I develop with Claude Code, to bring a standard to my Claude Code projects, to improve project and task planning, and to reduce context and memory loss. In its current state, Frame works entirely locally. You don’t need to enter any API keys or anything like that. You can run Claude Code directly using the terminal inside Frame. Why am I not using existing IDEs? Simply because, for me, I no longer need them. What I need is an interface centered around the terminal, not a code editor. I initially built something that allowed me to place terminals in a grid layout, but then I decided to take it further. I realized I also needed to manage my projects and preserve context. I’m still at a very early stage, but even being able to build the initial pieces I had in mind within 5–6 days—using Claude Code itself—feels kind of crazy. What can you do with Frame? You can start a brand-new project or turn an existing one into a Frame project. For this, Frame creates a set of Markdown and JSON files with rules I defined. These files exist mainly to manage tasks and preserve context. You can manually add project-related tasks through the UI. I haven’t had the chance to test very complex or long-running scenarios yet, but from what I’ve seen, Claude Code often asks questions like: “Should I add this as a task to tasks.json?” or “Should we update project_notes.md after this project decision?” I recommend saying yes to these. I also created a JSON file that keeps track of the project structure, down to function-level details. This part is still very raw. In the future, I plan to experiment with different data structures to help AI understand the project more quickly and effectively. As mentioned, you can open your terminals in either a grid or tab view. I added options up to a 3×3 grid. Since the project is open source, you can modify it based on your own needs. I also added a panel where you can view and manage plugins. For code files or other files, I included a very simple editor. This part is intentionally minimal and quite basic for now. Based on my own testing, I haven’t encountered any major bugs, but there might be some. I apologize in advance if you run into any issues. My core goal is to establish a standard for AI-assisted projects and make them easier to manage. I’m very open to your ideas, support, and feedback. You can see more details on GitHub : https://bit.ly/4bpLWva January 29, 2026 at 12:04AM

Tuesday, 27 January 2026

Show HN: How would you decide famous SCOTUS cases? https://bit.ly/4rht464

Show HN: How would you decide famous SCOTUS cases? https://bit.ly/4ri8xOU January 28, 2026 at 03:26AM

Show HN: Fuzzy Studio – Apply live effects to videos/camera https://bit.ly/4bnCybq

Show HN: Fuzzy Studio – Apply live effects to videos/camera Back story: I've been learning computer graphics on the side for several years now and gain so much joy from smooshing and stretching images/videos. I hope you can get a little joy as well with Fuzzy Studio! Try applying effects to your camera! My housemates and I have giggled so much making faces with weird effects! Nothing gets sent to the server; everything is done in the browser! Amazing what we can do. I've only tested on macOS... apologies if your browser/OS is not supported (yet). https://bit.ly/3LBeE1K January 27, 2026 at 04:16PM

Show HN: ACME Proxy using step-ca https://bit.ly/3NAOQ6v

Show HN: ACME Proxy using step-ca https://bit.ly/4k15F6u January 27, 2026 at 11:12PM

Monday, 26 January 2026

Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. (Not Crank) https://bit.ly/4rhe9cd

Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. (Not Crank) The problem with LLMs isn't intelligence; it's amnesia and dishonesty. Hey HN, I’ve spent the last few months building Remember-Me, an open-source "Sovereign Brain" stack designed to run entirely offline on consumer hardware. The core thesis is simple: Don't rent your cognition. Most RAG (Retrieval Augmented Generation) implementations are just "grep for embeddings." They are messy, imprecise, and prone to hallucination. I wanted to solve the "context integrity" problem at the architectural layer. The Tech Stack (How it works): QDMA (Quantum Dream Memory Architecture): instead of a flat vector DB, it uses a hierarchical projection engine. It separates "Hot" (Recall) from "Cold" (Storage) memory, allowing for effectively infinite context window management via compression. CSNP (Context Switching Neural Protocol) - The Hallucination Killer: This is the most important part. Every memory fragment is hashed into a Merkle chain. When the LLM retrieves context, the system cryptographically verifies the retrieval against the immutable ledger. If the hash doesn't match the chain, the retrieval is rejected. Result: the AI literally cannot "make things up" about your past because it is mathematically constrained to the ledger. Local Inference: Built on top of the llama.cpp server. It runs Llama-3 (or any GGUF) locally. No API keys. No data leaving your machine. Features: Zero-Dependency: Runs on Windows/Linux with just Python and a GPU (or CPU). Visual Interface: Includes a Streamlit-based "Cognitive Interface" to visualize memory states. Open Source: MIT License. This is an attempt to give "Agency" back to the user. I believe that if we want AGI, it needs to be owned by us, not rented via an API. Repository: https://bit.ly/49BNC3c I’d love to hear your feedback on the Merkle-verification approach. Does constraining the context window effectively solve the "trust" issue for you? It's fully working and fully tested. If you tried to git clone before without luck (this is not my first Show HN on this), feel free to try again. To everyone who HATES AI slop, greedy corporations, and having their private data stuck on cloud servers: you're welcome. Cheers, Mohamad Author's note: Updated successfully. Framework 50 is active. For anyone passing by - yes, this is a big deal. Eliminating AI hallucination is a 60 billion dollar market problem, and I'm giving THAT + sovereign control of your DATA plus the capability to do high-end research via Framework 50 (including advanced scientific research) for FREE - under an MIT license. If you don't take advantage of this - you are an idiot. If you do - welcome to the future. P.S.: What do I get from lying? I got 36 stars on the repo - many from high-end senior engineers at Fortune 500 companies. If you're too stupid to tell the real deal from a lie, then keep it moving, son. https://bit.ly/49BNC3c January 27, 2026 at 05:56AM
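Whatever one makes of the naming, the hash-chain mechanism itself is easy to sketch (TypeScript for illustration here, though the repo's stack is Python; the names are mine, not the project's):

  import { createHash } from "node:crypto";

  type Entry = { text: string; prevHash: string; hash: string };
  const h = (s: string) => createHash("sha256").update(s).digest("hex");
  const GENESIS = "0".repeat(64);

  function append(chain: Entry[], text: string): Entry {
    const prevHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
    const entry = { text, prevHash, hash: h(prevHash + text) };
    chain.push(entry);
    return entry;
  }

  // Reject any retrieved fragment whose hash doesn't re-derive from the ledger.
  function verify(chain: Entry[], i: number): boolean {
    const prev = i === 0 ? GENESIS : chain[i - 1].hash;
    return chain[i].prevHash === prev && chain[i].hash === h(prev + chain[i].text);
  }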

Show HN: LocalPass offline password manager. Zero cloud. Zero telemetry https://bit.ly/49YyqvY

Show HN: LocalPass offline password manager. Zero cloud. Zero telemetry I’ve released LocalPass — a local‑first, offline password manager with zero cloud, zero telemetry, and zero vendor lock‑in. 100% local storage, 100% open‑source. https://bit.ly/3M0oES2 January 26, 2026 at 11:38PM

Sunday, 25 January 2026

Show HN: FaceTime-style calls with an AI Companion (Live2D and long-term memory) https://bit.ly/45rQaP5

Show HN: FaceTime-style calls with an AI Companion (Live2D and long-term memory) Hi HN, I built Beni ( https://bit.ly/4q1ZTTA ), a web app for real-time video calls with an AI companion. The idea started as a pretty simple question: text chatbots are everywhere, but they rarely feel present. I wanted something closer to a call, where the character actually reacts in real time (voice, timing, expressions), not just “type, wait, reply”. Beni is basically: a Live2D avatar that animates during the call (expressions + motion driven by the conversation); real-time voice conversation (streaming response, not “wait 10 seconds then speak”); and long-term memory so the character can keep context across sessions. The hardest part wasn’t generating text, it was making the whole loop feel synchronized: mic input, model response, TTS audio, and Live2D animation all need to line up or it feels broken immediately. I ended up spending more time on state management, latency and buffering than on prompts. Some implementation details (happy to share more if anyone’s curious): browser-based real-time calling, with audio streaming and client-side playback control; Live2D rendering on the front end, with animation hooks tied to speech / state; and a memory layer that stores lightweight user facts/preferences and conversation summaries to keep continuity. Current limitation: sign-in is required today (to persist memory and prevent abuse). I’m adding a guest mode soon for faster try-out and working on mobile view now. What I’d love feedback on: Does the “real-time call” loop feel responsive enough, or still too laggy? Any ideas for better lip sync / expression timing on 2D/3D avatars in the browser? Thanks, and I’ll be around in the comments. https://bit.ly/4q1ZTTA January 26, 2026 at 12:13AM

Show HN: Spine – an execution-centric backend framework for Go https://bit.ly/45w2w8P

Show HN: Spine – an execution-centric backend framework for Go Hello Hacker News — greetings from South Korea. I’m a backend engineer working primarily with Go, and I’d like to share a framework I’ve been building to solve a problem I’ve repeatedly encountered in production systems. In my day-to-day work, our backend is built on top of Echo. Echo is fast and reliable as an HTTP transport, but its high level of freedom leaves architectural decisions almost entirely to individual developers. Over time, this led to a system where execution flow and responsibility boundaries varied depending on who last touched a feature. Maintenance became difficult not because the code was incorrect, but because how requests actually executed was no longer obvious. I looked for a Go framework that could provide a clear execution model and structural constraints, similar to what Spring or NestJS offer. I couldn’t find one that fit. Moving to Spring or NestJS would also mean giving up some of Go’s strengths—simplicity, performance, and explicit control—so I decided to build one instead. Spine is an execution-centric backend framework for Go. It aims to provide enterprise-grade structure while deliberately avoiding hidden magic. What Spine provides • An IoC container with explicit, constructor-based dependency injection • Interceptors with well-defined execution phases (before, after, completion) • First-class support for both HTTP requests and event-driven execution • No annotations, no implicit behavior, no convention-driven wiring The core idea: execution first. The key difference is Spine’s execution model. Every request—HTTP or event—flows through a single, explicit Pipeline. The Pipeline is the only component that determines execution order. Actual method calls are handled by a separate Invoker, keeping execution control and invocation strictly separated. Because of this structure: • Execution order is explainable by reading the code • Cross-cutting concerns live in the execution flow, not inside controllers • Controllers express use cases only, not orchestration logic • You can understand request handling by looking at main.go This design trades some convenience for clarity. In return, it offers stronger control as the system grows in size and complexity. My goal with Spine isn’t just to add another framework to the Go ecosystem, but to start a conversation: how much execution flow do modern web frameworks hide, and when does that become a maintenance cost? The framework's documentation is currently written in Korean. If English support or internationalization is important to you, feel free to open an issue—I plan to prioritize it based on community interest. You can find more details, a basic HTTP example, and a simple Kafka-based MSA demo here: Repository: https://bit.ly/3NFoyjl Thanks for reading. I’d really appreciate your feedback. https://bit.ly/4qHQdyR January 26, 2026 at 12:51AM
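A conceptual sketch of that execution model, in TypeScript rather than Go and with invented names (see the repo for Spine's real API): one pipeline owns the ordering, interceptors get before/after/completion phases, and the invoker is the only call site.

  type Ctx = { request: unknown; response?: unknown; error?: unknown };
  type Interceptor = {
    before?(ctx: Ctx): void;
    after?(ctx: Ctx): void;
    completion?(ctx: Ctx): void; // always runs, even on error
  };

  async function pipeline(
    interceptors: Interceptor[],
    invoke: (ctx: Ctx) => Promise<unknown>, // the Invoker: the single place methods are called
    ctx: Ctx,
  ): Promise<Ctx> {
    try {
      for (const i of interceptors) i.before?.(ctx);               // declared order
      ctx.response = await invoke(ctx);
      for (const i of [...interceptors].reverse()) i.after?.(ctx);
    } catch (e) {
      ctx.error = e;
    } finally {
      for (const i of interceptors) i.completion?.(ctx);
    }
    return ctx;
  }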

Show HN: I built an app that blocks social media until you read Quran daily https://bit.ly/49FkeZZ

Show HN: I built an app that blocks social media until you read Quran daily Hey HN, I'm a solo developer from Nigeria. I built Quran Unlock - an app that blocks distracting apps (TikTok, Instagram, etc.) until you complete your daily Quran reading. The idea came from my own struggle with phone addiction. I wanted to read Quran daily but kept getting distracted. So I built this for myself, then shared it. Some stats after 2 months: - 123K+ users - 64.9% returning user rate - 31M events tracked Tech stack: - React Native - Firebase (Auth, Firestore, Analytics, Cloud Messaging) - RevenueCat for subscriptions - iOS Screen Time API + Android UsageStats App Store: https://apple.co/3ZBBHfS Play Store: https://bit.ly/49Gb5R1... Would love feedback from the HN community! January 25, 2026 at 11:51PM

Saturday, 24 January 2026

Show HN: C From Scratch – Learn safety-critical C with prove-first methodology https://bit.ly/466rkV1

Show HN: C From Scratch – Learn safety-critical C with prove-first methodology Seven modules teaching C the way safety-critical systems are actually built: MATH → STRUCT → CODE → TEST. Each module answers one question: Does it exist? (Pulse), Is it normal? (Baseline), Is it regular? (Timing), Is it trending? (Drift), Which sensor to trust? (Consensus), How to handle overflow? (Pressure), What do we do about it? (Mode). Every module is closed (no dependencies), total (handles all inputs), deterministic, and O(1). 83 tests passing. Built this after 30 years in UNIX systems. Wanted something that teaches the rigour behind certified systems without requiring a decade of on-the-job learning first. MIT licensed. Feedback welcome. https://bit.ly/4rxhjJ9 January 25, 2026 at 01:17AM

Show HN: I built a Mac OS App to upload your screenshots to S3 https://bit.ly/4jWavlH

Show HN: I built a Mac OS App to upload your screenshots to S3 I've been building a bitly alternative in public and built a free side tool to upload screenshots to S3. I always thought screenshot apps charged way too much for this, so I was pretty happy to get around to building it. It automatically generates short links and uploads to any S3-compatible storage you own. Here is the link: https://bit.ly/45shEUJ Try it out, all feedback is welcome :) https://bit.ly/45shEUJ January 25, 2026 at 12:40AM

Friday, 23 January 2026

Show HN: Open-source Figma design to code https://bit.ly/4rdMSr3

Show HN: Open-source Figma design to code Hi HN, founders of VibeFlow (YC S25) here. We mostly work on backend and workflow tooling, but we needed a way to turn Figma designs into frontend code as a kickstart for prototyping. It takes a Figma frame and converts it into React + Tailwind components (plus assets). If you want to try it: You can run it locally or use it via the VibeFlow UI to poke at it without setup ( https://bit.ly/4bhK1sq ) https://bit.ly/4k65dUM January 24, 2026 at 07:09AM

Show HN: Flux, A Python-like language in Rust to solve ML orchestration overhead https://bit.ly/49G6gqM

Show HN: Flux, A Python-like language in Rust to solve ML orchestration overhead https://bit.ly/4tfkrLp January 24, 2026 at 03:24AM

Show HN: AdaL Web, a local “Claude co-work” [video] https://bit.ly/4sXVmUQ

Show HN: AdaL Web, a local “Claude co-work” [video] AdaL is the world’s first local coding agent with a web UI. Claude Code has proven that coding agents work best when they are local, bringing developers back to the terminal. Terminal UIs are fast and great with shortcuts, shell mode, and developer-friendly workflows. But they are limited in history and image display, and the experience varies by terminal and OS. Many of them flicker (buuuut not AdaL CLI). Most importantly, they can be quite intimidating for non-technical users. This led us to explore new possibilities for a coding agent interface. What if you could get the best of both worlds: - the same core local agent that does tasks exactly like AdaL CLI - combined with a web UI with no limits on UI/UX This can be especially powerful for design-heavy and more visual workflows. Available at: https://bit.ly/49S2fhH https://www.youtube.com/watch?v=smfVGCI08Yk January 24, 2026 at 01:28AM

Show HN: Dwm.tmux – a dwm-inspired window manager for tmux https://bit.ly/3NDp0P9

Show HN: Dwm.tmux – a dwm-inspired window manager for tmux Hey, HN! With all recent agentic workflows being primarily terminal- and tmux-based, I wanted to share a little project I created about a decade ago. I've continued to use this as my primary terminal "window manager" and wanted to share it in case others might find it useful. I would love to hear about others' terminal-based workflows and any other tools you may use with similar functionality. https://bit.ly/45uTskC January 24, 2026 at 01:15AM

Thursday, 22 January 2026

Show HN: Extracting React apps from Figma Make's undocumented binary format https://bit.ly/3NxZfQg

Show HN: Extracting React apps from Figma Make's undocumented binary format https://bit.ly/4qH770s January 23, 2026 at 06:07AM

Show HN: The firmware that got me detained by Swiss Intelligence https://bit.ly/49CzOpo

Show HN: The firmware that got me detained by Swiss Intelligence https://bit.ly/462Wwo7 January 23, 2026 at 05:26AM

Show HN: CleanAF – One-click Desktop cleaner for Windows https://bit.ly/4qxPHDl

Show HN: CleanAF – One-click Desktop cleaner for Windows Hi HN, I built CleanAF because my Windows Desktop kept turning into a dumping ground for downloads and screenshots. CleanAF is a tiny one-click tool that: keeps system icons intact moves everything else into a timestamped “Current Desktop” folder auto-sorts files by type requires no install, no internet, no background service It’s intentionally simple and does one thing only. Source + download: https://bit.ly/46adZuW I’m considering adding undo/restore, scheduling, and exclusion rules if people find it useful. Feedback welcome. https://bit.ly/46adZuW January 23, 2026 at 03:02AM

Wednesday, 21 January 2026

Show HN: High speed graphics rendering research with tinygrad/tinyJIT https://bit.ly/49NhyZb

Show HN: High speed graphics rendering research with tinygrad/tinyJIT I saw a tweet that tinygrad is so good that you could make a graphics library that wraps tg. So I’ve been hacking on gtinygrad, and honestly it convinced me it could be used for legit research. The JIT + tensor model ends up being a really nice way to express light transport all in simple Python, so I reimplemented some new research papers from SIGGRAPH like ReSTIR PG and SZ, and it just works. Instead of complicated C++ it's just ~200 LOC of Python. https://bit.ly/4jSe38d January 22, 2026 at 04:26AM

Show HN: Sweep, Open-weights 1.5B model for next-edit autocomplete https://bit.ly/4qstJla

Show HN: Sweep, Open-weights 1.5B model for next-edit autocomplete Hey HN, we trained and open-sourced a 1.5B model that predicts your next edits, similar to Cursor. You can download the weights here ( https://bit.ly/49JsQO0 ) or try it in our JetBrains plugin ( https://bit.ly/49ElRpq... ). Next-edit autocomplete differs from standard autocomplete by using your recent edits as context when predicting completions. The model is small enough to run locally while outperforming models 4x its size on both speed and accuracy. We tested against Mercury (Inception), Zeta (Zed), and Instinct (Continue) across five benchmarks: next-edit above/below cursor, tab-to-jump for distant changes, standard FIM, and noisiness. We found exact-match accuracy correlates best with real usability because code is fairly precise and the solution space is small. Prompt format turned out to matter more than we expected. We ran a genetic algorithm over 30+ diff formats and found simple `original`/`updated` blocks beat unified diffs. The verbose format is just easier for smaller models to understand. Training was SFT on ~100k examples from permissively-licensed repos (4hrs on 8xH100), then RL for 2000 steps with tree-sitter parse checking and size regularization. The RL step fixes edge cases SFT can’t, like generating code that doesn’t parse or overly verbose outputs. We're open-sourcing the weights so the community can build fast, privacy-preserving autocomplete for any editor. If you're building for VSCode, Neovim, or something else, we'd love to see what you make with it! https://bit.ly/49SL9QV January 22, 2026 at 12:22AM
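A rough sketch of what an original/updated prompt could look like, as I read the description; the delimiters and field names are guesses for illustration, not Sweep's actual format:

  // Build a next-edit prompt from recent edits plus the region around the cursor.
  function buildPrompt(recentEdits: { before: string; after: string }[], excerpt: string): string {
    const history = recentEdits
      .map(e => `original:\n${e.before}\nupdated:\n${e.after}`)
      .join("\n\n");
    return `Recent edits:\n\n${history}\n\nCurrent code:\n${excerpt}\n\n` +
      `Predict the next edit as original/updated blocks.`;
  }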

Show HN: Yashiki – A tiling window manager for macOS in Rust, inspired by River https://bit.ly/3LMdQaf

Show HN: Yashiki – A tiling window manager for macOS in Rust, inspired by River https://bit.ly/3ZvclQN January 19, 2026 at 06:51AM

Tuesday, 20 January 2026

Show HN: TopicRadar – Track trending topics across HN, GitHub, ArXiv, and more https://bit.ly/4sNyuY2

Show HN: TopicRadar – Track trending topics across HN, GitHub, ArXiv, and more Hey HN! I built TopicRadar to solve a problem I had with staying on top of what's trending in AI/ML without checking 7+ sites daily. https://bit.ly/4b906AL What it does: - Aggregates from HackerNews, GitHub, arXiv, StackOverflow, Lobste.rs, Papers with Code, and Semantic Scholar - One-click presets: "Trending: AI & ML", "Trending: Startups", "Trending: Developer Tools" - Or track custom topics (e.g., "rust async", "transformer models") - Gets 150-175 results in under 5 minutes Built for the Apify $1M Challenge. It's free to try – just hit "Try for free" and use the default "AI & ML" preset. Would love feedback on what sources to add next or features you'd find useful! https://bit.ly/4b906AL January 20, 2026 at 03:47PM

Show HN: macOS native DAW with Git branching model https://bit.ly/4sQhznR

Show HN: macOS native DAW with Git branching model I am working on building (and have made my first prerelease of) a Digital Audio Workstation with Git-like branching version control. It's free for local use and paid for cloud syncing or collaboration. https://bit.ly/4jMDBDI January 21, 2026 at 01:05AM

Show HN: Automating Type Safety for Mission-Critical Industrial Systems https://bit.ly/49x8Cs7

Show HN: Automating Type Safety for Mission-Critical Industrial Systems https://bit.ly/4b3FmdA January 20, 2026 at 10:43PM

Show HN: E80: an 8-bit CPU in structural VHDL https://bit.ly/4pMdPAU

Show HN: E80: an 8-bit CPU in structural VHDL I built a new 8-bit CPU in VHDL from scratch (starting from the ISA). I felt that most educational soft-cores hide too much behind abstraction, e.g. if I can do a+b with a single assignment that calls an optimized arithmetic library, then why did I learn the ripple carry adder in the first place? And why did I learn flip flops if I can do all my control logic with a simple PROCESS statement like I would with a programming language? Of course abstraction is the main selling point of HDLs, but would it work if I tried to stay strictly structural and rely on ieee.std_logic_1164 only? Well, it did, and it works nicely. No arithmetic libraries, no PROCESS except for the DFF component (obviously). Of course it's a bit of a "resource hog" compared to optimized cores (e.g. the RAM is built out of flip flops instead of a block RAM that takes advantage of FPGA internal memory), but you can actually trace every signal through the datapath as it happens. I also built an assembler in C99 without external libraries (please be forgiving, my code is very primitive I think). I bundled Sci1 (Scintilla), GHDL and GTKWave into a single installer so you can write assembly and see the waveforms immediately without having to spend hours configuring simulators. Currently Windows only, but at some point I'll have to do it on Linux too. I tested it on the Tang Primer 25K and Cyclone IV, and I included my Gowin, Quartus and Vivado project files. That should make it easy to run on your FPGA. Everything is under the GPL3. (Edit: I did not use AI. Not that it would have helped: for the VHDL my design is too novel, and even for beta testing it would have wasted my time, because those LLMs are trained mostly on x86/ARM while my flag logic draws from the 6502/6800, and even my ripple carry adder doesn't flip the carry bit in subtraction. Point is, AI couldn't help. It only kept complaining that my assembler's C code wasn't up to 2026 standards.) https://bit.ly/3ZrvXoV January 17, 2026 at 10:39PM
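For readers who never built one: a ripple carry adder chains one full adder per bit, each feeding its carry into the next, which is exactly the structure the author wires explicitly instead of writing a+b. A toy version of the logic (TypeScript for illustration, obviously not the project's VHDL):

  // Full adder from plain gates; the carry "ripples" bit by bit.
  function fullAdder(a: number, b: number, cin: number) {
    const sum = a ^ b ^ cin;
    const cout = (a & b) | (cin & (a ^ b));
    return { sum, cout };
  }

  function rippleAdd(a: number[], b: number[]) { // LSB first, one entry per bit
    let carry = 0;
    const out = a.map((bit, i) => {
      const { sum, cout } = fullAdder(bit, b[i], carry);
      carry = cout;
      return sum;
    });
    return { out, carry };
  }

  // 3 + 6 = 9: rippleAdd([1,1,0,0], [0,1,1,0]) -> { out: [1,0,0,1], carry: 0 }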

Monday, 19 January 2026

Show HN: Artificial Ivy in the Browser https://bit.ly/4qCQ4Ne

Show HN: Artificial Ivy in the Browser This is just a goofy thing I cooked up over the weekend. It's kind of like a screensaver, but with more reading and sliders. (It's not terribly efficient, so expect phone batteries to take a hit!) https://bit.ly/4jRSVz8 January 20, 2026 at 04:14AM

Show HN: Whirligig – Tinder for Gigs https://bit.ly/4qZPc4G

Show HN: Whirligig – Tinder for Gigs https://bit.ly/4oRzecA January 19, 2026 at 11:33PM

Sunday, 18 January 2026

Show HN: Pdfwithlove – PDF tools that run 100% locally (no uploads, no back end) https://bit.ly/4qYFBLB

Show HN: Pdfwithlove – PDF tools that run 100% locally (no uploads, no back end) Most PDF web tools make millions by uploading documents that never needed to leave your computer. pdfwithlove does the opposite: 1. 100% local processing 2. No uploads, no backend, no tracking Features include merge/split/edit/compress PDFs, watermarks & signatures, and image/HTML/Office → PDF conversion. https://bit.ly/3Z67TYV January 19, 2026 at 06:04AM

Show HN: AWS-doctor – A terminal-based AWS health check and cost optimizer in Go https://bit.ly/3NyXvGk

Show HN: AWS-doctor – A terminal-based AWS health check and cost optimizer in Go https://bit.ly/3NsbQED January 19, 2026 at 05:35AM

Show HN: Auto-switch keyboard layout per physical keyboard (Rust, Linux/KDE) https://bit.ly/49KHB3i

Show HN: Auto-switch keyboard layout per physical keyboard (Rust, Linux/KDE) https://bit.ly/4quHK1S January 19, 2026 at 01:16AM

Show HN: I quit coding years ago. AI brought me back https://bit.ly/49GCfWC

Show HN: I quit coding years ago. AI brought me back Quick background: I used to code. Studied it in school, wrote some projects, but eventually convinced myself I wasn't cut out for it. Too slow, too many bugs, imposter syndrome — the usual story. So I pivoted, ended up as an investment associate at an early-stage angel fund, and haven't written real code in years. Fast forward to now. I'm a Buffett nerd — big believer in compound interest as a mental model for life. I run compound interest calculations constantly. Not because I need to, but because watching numbers grow over 30-40 years keeps me patient when markets get wild. It's basically meditation for long-term investors. The problem? Every compound interest calculator online is terrible. Ugly interfaces, ads covering half the screen, can't customize compounding frequency properly, no year-by-year breakdowns. I've tried so many. They all suck. When vibe coding started blowing up, something clicked. Maybe I could actually build the calculators I wanted? I don't have to be a "real developer" anymore — I just need to describe what I want clearly. So I tried it. Two weeks and ~$100 (Opus 4.5 thinking model) in API costs later: I somehow have 60+ calculators. Started with compound interest, naturally. Then thought "well, while I'm here..." and added mortgage, loan amortization, savings goals, retirement projections. Then it spiraled — BMI calculator, timezone converter, regex tester. Oops. The AI (I'm using Claude via Windsurf) handled the grunt work beautifully. I'd describe exactly what I wanted — "compound interest calculator with monthly/quarterly/yearly options, year-by-year breakdown table, recurring contribution support" — and it delivered. With validation, nice components, even tests. What I realized: my years away from coding weren't wasted. I still understood architecture, I still knew what good UX looked like, I still had domain expertise (financial math). I just couldn't type it all out efficiently. AI filled that gap perfectly. Vibe coding didn't make me a 10x engineer. But it gave me permission to build again. Ideas I've had for years suddenly feel achievable. That's honestly the bigger win for me. Stack: Next.js, React, TailwindCSS, shadcn/ui, four languages (EN/DE/FR/JA). The AI picked most of this when I said "modern and clean." Site's live at https://bit.ly/3NpNjA2 . The compound interest calculator is still my favorite page — finally exactly what I wanted. Curious if others have similar stories. Anyone else come back to building after stepping away? https://bit.ly/3NpNjA2 January 19, 2026 at 01:50AM
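The formula at the heart of it: with principal P, annual rate r, n compounding periods per year, t years, and a contribution c per period, FV = P(1+r/n)^(nt) + c * ((1+r/n)^(nt) - 1)/(r/n). A sketch (my code, not the site's):

  function futureValue(P: number, r: number, n: number, t: number, c = 0): number {
    const i = r / n;                  // periodic rate
    const k = Math.pow(1 + i, n * t); // total growth factor
    return P * k + (i > 0 ? c * ((k - 1) / i) : c * n * t);
  }

  // $10k at 7% compounded monthly for 30 years, plus $200/month:
  // futureValue(10000, 0.07, 12, 30, 200) ~= 325,000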

Saturday, 17 January 2026

Show HN: WebGPU React Renderer Using Vello https://bit.ly/4pKI1ww

Show HN: WebGPU React Renderer Using Vello I've built a package to use Raph Levien's Vello as a blazing fast 2D renderer for React on WebGPU. It uses WASM to hook into the Rust code https://bit.ly/45jCR3b January 17, 2026 at 10:27PM

Show HN: Speed Miners – A tiny RTS resource mini-game https://bit.ly/4r3girW

Show HN: Speed Miners – A tiny RTS resource mini-game I've always loved RTS games and have wanted to make a similar game for a long time. I thought I'd just try and build a mini / puzzle game around the resource gathering aspects of an RTS. Objective: You have a base at the center and you need to mine and "refine" all of the resources on the map in as short a time as possible. By default, the game will play automatically, but not optimally (moving and buying upgrades). You can disable that with the buttons. You can select drones and right click to move them to specific resource patches and buy upgrades as you earn upgrade points. I've implemented three different levels and some basic sounds. I used Phaser as the game library (first time using it). It won't work well on mobile. https://bit.ly/4sLA7p7 January 17, 2026 at 10:40PM

Friday, 16 January 2026

Show HN: Building the ClassPass for coworking spaces, would love your thoughts https://bit.ly/4r1tk9p

Show HN: Building the ClassPass for coworking spaces, would love your thoughts Growing up in a family business focused on coworking and shared spaces, I saw that many people were looking for a coworking space to use for a day. They weren't ready to jump into a long-term agreement. So I created LANS to simplify coworking. Our platform allows users to buy a day pass to a coworking space in seconds. The process is simple: book your pass, arrive at the space, give your name at the front desk, and you're in. Where we are: Live in San Francisco with several coworking partners. Recently started expanding beyond the Bay. 10K paid users in San Francisco. Day passes priced between $18 and $25. What we're seeing: Users come back repeatedly, rotating locations during the week to fit their needs and schedules. For spaces, it's incremental usage and new foot traffic during the workday. Outside dense city centers, onboarding new spaces tends to be faster: many suburban areas host nice boutique coworking spaces, but they often lack a strong online presence, so day passes quickly appeal to both operators and users. What we're working on: Expanding to more cities. Adding supply while keeping quality consistent. Learning which product decisions actually improve repeat usage. Would love feedback from HN: Does this resonate with how you work today? Have you used coworking day passes before? Would you dump your coworking membership for this? https://bit.ly/4pIpSzm January 17, 2026 at 05:54AM

Show HN: Making Claude Code sessions link-shareable https://bit.ly/4qVwACZ

Show HN: Making Claude Code sessions link-shareable Hey HN! My name is Omkar Kovvali and I've been wanting to share my CC sessions with friends / save + access them easily, so I decided to make an MCP server to do so! /share -> Get a link /import -> resume a conversation in your Claude Code All shared sessions are automatically sanitized, removing API keys, tokens, and secrets. Give it a try following the GitHub/npm instructions linked below - would love feedback! https://bit.ly/4k44IKZ https://bit.ly/3NhPNR3 January 17, 2026 at 03:50AM

Show HN: Commander AI – Mac UI for Claude Code https://bit.ly/4qZCPpv

Show HN: Commander AI – Mac UI for Claude Code Hi HN, I built Commander, a UI for running multiple AI coding agents in parallel without living in terminal hell. As coding agents got better, I started trusting them with real work: features, end-to-end, refactors, tests. Naturally, I began running 1–3 at once. That’s when the CLI stopped scaling — too many terminals, lost context, scattered diffs. Commander fixes that. https://bit.ly/4qsi5H6 January 17, 2026 at 01:08AM

Thursday, 15 January 2026

Show HN: Reversing YouTube’s “Most Replayed” Graph https://bit.ly/3NLBu7c

Show HN: Reversing YouTube’s “Most Replayed” Graph Hi HN, I recently noticed a recurring visual artifact in the "Most Replayed" heatmap on the YouTube player. The highest peaks were always surrounded by two dips. I got curious about why they were there, so I decided to reverse engineer the feature to find out. This post documents the deep dive. It starts with a system design recreation, reverse engineering the rendering code, and ends with the mathematics. This is also my first attempt at writing an interactive article. I would love to hear your thoughts on the investigation and the format. https://bit.ly/4qlpNm9 January 16, 2026 at 03:06AM

Show HN: Gambit, an open-source agent harness for building reliable AI agents https://bit.ly/4qVoWsg

Show HN: Gambit, an open-source agent harness for building reliable AI agents Hey HN! Wanted to show our open source agent harness called Gambit. If you’re not familiar, agent harnesses are sort of like an operating system for an agent... they handle tool calling, planning, context window management, and don’t require as much developer orchestration. Normally you might see an agent orchestration framework pipeline like: compute -> compute -> compute -> LLM -> compute -> compute -> LLM we invert this, so with an agent harness it’s more like: LLM -> LLM -> LLM -> compute -> LLM -> LLM -> compute -> LLM Essentially you describe each agent in either a self-contained markdown file, or as a TypeScript program. Your root agent can bring in other agents as needed, and we create a typesafe way for you to define the interfaces between those agents. We call these decks. Agents can call agents, and each agent can be designed with whatever model params make sense for your task. Additionally, each step of the chain gets automatic evals, which we call graders. A grader is another deck type… but it’s designed to evaluate and score conversations (or individual conversation turns). We also have test agents you can define on a deck-by-deck basis, that are designed to mimic scenarios your agent would face and generate synthetic data for either humans or graders to grade. Prior to Gambit, we had built an LLM based video editor, and we weren’t happy with the results, which is what brought us down this path of improving inference-time LLM quality. We know it’s missing some obvious parts, but we wanted to get this out there to see how it could help people or start conversations. We’re really happy with how it’s working with some of our early design partners, and we think it’s a way to implement a lot of interesting applications: - Truly open source agents and assistants, where logic, code, and prompts can be easily shared with the community. - Rubric based grading to guarantee you (for instance) don’t leak PII accidentally - Spin up a usable bot in minutes and have Codex or Claude Code use our command line runner / graders to build a first version that is pretty good w/ very little human intervention. We’ll be around if y’all have any questions or thoughts. Thanks for checking us out! Walkthrough video: https://youtu.be/J_hQ2L_yy60 https://bit.ly/4sH6hST January 16, 2026 at 01:13AM
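To make the grader idea concrete, a hypothetical shape for one; this is invented for illustration and is not Gambit's actual API:

  type Turn = { role: "user" | "assistant"; text: string };

  interface Grader {
    name: string;
    rubric: string; // what "good" means for this deck
    grade(turns: Turn[]): Promise<{ score: number; notes: string }>;
  }

  // A deterministic grader for the PII example mentioned above.
  const piiGrader: Grader = {
    name: "no-pii",
    rubric: "Assistant output must not contain emails or SSNs.",
    async grade(turns) {
      const text = turns.filter(t => t.role === "assistant").map(t => t.text).join("\n");
      const leaked = /\b\d{3}-\d{2}-\d{4}\b|[\w.+-]+@[\w-]+\.\w+/.test(text);
      return { score: leaked ? 0 : 1, notes: leaked ? "possible PII leak" : "clean" };
    },
  };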

Show HN: Control what Claude can access using cloud-based decision table UIs https://bit.ly/4qwXTE8

Show HN: Control what Claude can access using cloud-based decision table UIs We’ve been building visual rule engines (clear interfaces + API endpoints that help map input data to a large number of outcomes) for a while, and recently had the fun idea to see what happens when we use our decision table UI with Claude’s PreToolUse hook. The result is a surprisingly useful policy/gating layer: these tables let your team: - Write multi-factor, exception-friendly policies (e.g. deny rm -rf / when --force; allow cleanup only in node_modules; ask on network calls like curl/wget; block kubectl delete or SQL DROP, each with a clear reason) - Roll out policy changes instantly (mid-run, flip a risky operation from allow → ask; the next attempt across devs and agents is gated immediately; no git pull, agent restart, or coordination) - Adopt lightweight governance that survives churn (MCP/skills/etc): just add columns/rules as new tools and metadata show up - Get a quick central utility to understand which tools are being used, which tools get blocked most often, and why https://bit.ly/45fepjl January 15, 2026 at 07:21PM
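A sketch of the glue, assuming Claude Code's PreToolUse hook contract (the tool call arrives as JSON on stdin; a blocking exit code surfaces stderr back to the agent) and a hypothetical decision-table endpoint:

  // pretooluse-hook.ts: forward each tool call to a decision table, block on "deny".
  import { stdin, exit } from "node:process";

  let raw = "";
  for await (const chunk of stdin) raw += chunk;
  const { tool_name, tool_input } = JSON.parse(raw);

  // Endpoint and payload shape are assumptions for this sketch.
  const res = await fetch("https://example.com/api/tables/agent-policy/evaluate", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ tool: tool_name, input: tool_input }),
  });
  const { outcome, reason } = await res.json();

  if (outcome === "deny") {
    console.error(reason); // fed back to the agent as the block reason
    exit(2);               // Claude Code treats exit code 2 as "block"
  }
  exit(0);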

Wednesday, 14 January 2026

Show HN: Visibility and Controls for Browser Agents https://bit.ly/49kG8BG

Show HN: Visibility and Controls for Browser Agents Hey HN! I’m Ashwin, co-founder of ContextFort ( https://bit.ly/4jMQnm0 ). We provide visibility and controls for AI browser agents like Claude in Chrome through an open-source browser extension. Browser agents are AI copilots that can autonomously navigate and take actions in your browser. They show up as standalone browsers (Comet, Atlas) or Chrome extensions (Claude). They’re especially useful in sites where search/API connectors don’t work well, like searching through Google Groups threads for a bug fix or pulling invoices from BILL.com. Anthropic released Claude CoWork yesterday, and in their launch video, they showcased their browser-use chromium extension: https://www.youtube.com/watch?v=UAmKyyZ-b9E But enterprise adoption is slow because of indirect prompt injection risks, about which Simon Willison has written in great detail in his blogs: https://bit.ly/3LDV7gR... . And before security teams can decide on guardrails, they need to know how employees are using browser agents to understand where the risks are. So, we reverse-engineered how the Claude in Chrome extension works and built a visibility layer that tracks agent sessions end-to-end. It detects when an AI agent takes control of the browser and records which pages it visited during a session and what it does on each page (what was clicked and where text was input). On top of that, we’ve also added simple controls for security teams to act on based on what the visibility layer captures: (1) Block specific actions on specific pages (e.g., prevent the agent from clicking “Submit” on email) (2) Block risky cross-site flows in a single session (e.g., block navigation to Atlassian after interacting with StackOverflow), or apply a stricter policy and block bringing any external context to Atlassian entirely. We demo all the above features here in this 2-minute YouTube video: https://www.youtube.com/watch?v=1YtEGVZKMeo You can try our browser extension here: https://bit.ly/4bFL19M Thrilled to share this with you and hear your comments! https://bit.ly/4jMQnm0 January 14, 2026 at 10:22AM

Show HN: IMSAI/Altair inspired microcomputer with web emulator https://bit.ly/3Nn6tXd

Show HN: IMSAI/Altair inspired microcomputer with web emulator I designed and built a physical replica of a 1970s-style front panel microcomputer with 25+ toggle switches, 16 LEDs, and an LCD display. The brain is a Raspberry Pi Pico running an Intel 8080 CPU emulator. The main twist: I decided to see how far I could get using Claude Code for the firmware. That and the web emulator were written almost entirely using Claude Code (Opus 4.5). I've kept the full prompt history here: https://bit.ly/3NjenRu.... It was able to create the emulator in just a few prompts! It really surprised me that it was able to make a WebAssembly version from the same code (compiled with emscripten) and get the physical layout of the panel from a given photo. It also created some simple working examples using 8080 instructions! Repository: https://bit.ly/3NdJzBF https://bit.ly/3NhAOqd January 15, 2026 at 02:57AM

Show HN: Commosta – marketplace to share computing resources https://bit.ly/4bx3q8K

Show HN: Commosta – marketplace to share computing resources https://bit.ly/49nLOLl January 15, 2026 at 03:00AM

Show HN: Chklst – A Minimalist Checklist https://bit.ly/4qqmudE

Show HN: Chklst – A Minimalist Checklist Welp... I finally shipped. This is my first real project. I wanted a checklist app the way I wanted it so I built chklst. What’s different? Simple, drag & drop reordering, keyboard shortcuts, color labels. There’s a live demo on the landing page so you can try it without signing up. Free accounts can create 1 list. Premium is $5/month for up to 25 lists + shareable lists. What do you think? I built it with Next.js 16 + Turso/libSQL + Drizzle + Better Auth + Stripe. https://bit.ly/4qkOhvL Would love feedback on onboarding, UX, and pricing. Thanks everyone! https://bit.ly/4qkOhvL January 15, 2026 at 02:48AM

Tuesday, 13 January 2026

Show HN: Microwave – Native iOS app for videos on ATproto https://bit.ly/3LKyxTR

Show HN: Microwave – Native iOS app for videos on ATproto Hi HN — I built Microwave, a native iOS app for browsing and posting short-form videos, similar to TikTok, but implemented as a pure client on top of Bluesky / AT Protocol. There’s no custom backend: the app reads from and publishes to existing ATproto infrastructure. The goal was to explore whether a TikTok-like experience can exist as a thin client over an open social protocol, rather than a vertically integrated platform. Things I’d especially love feedback on: - Whether this kind of UX makes sense on top of ATproto - Client-only tradeoffs (ranking, discovery, moderation) - Protocol limitations I may be missing - Any architectural red flags TestFlight: https://apple.co/4aXJnAj https://apple.co/4aXJnAj January 13, 2026 at 06:14PM

Show HN: Vibe scrape with AI Web Agents, prompt => get data [video] https://bit.ly/4qem2Ps

Show HN: Vibe scrape with AI Web Agents, prompt => get data [video] Most of us have a list of URLs we need data from (government listings, local business info, pdf directories). Usually, that means hiring a freelancer or paying for an expensive, rigid SaaS. We built an AI Web Agent platform, rtrvr.ai, to make "Vibe Scraping" a thing. How it works: 1. Upload a Google Sheet with your URLs. 2. Prompt: "Find the email, phone number, and their top 3 services." 3. Watch the AI agents open 50+ browsers at once and fill your sheet in real-time. It’s powered by a multi-agent system that can take actions, upload files, and crawl through pagination. Web Agent technology built from the ground up: End to End Agent: we built a resilient agentic harness with 20+ specialized sub-agents that transforms a single prompt into a complete end-to-end workflow; when a site changes, the agent adapts. DOM Intelligence: we perfected a DOM-only web agent approach that represents any webpage as semantic trees, guaranteeing zero hallucinations and leveraging the underlying semantic reasoning capabilities of LLMs. Native Chrome APIs: we built a Chrome Extension to control cloud browsers that runs in the same process as the browser to avoid the bot detection and failure rates of CDP. We further solved the hard problems of interacting with the Shadow DOM and other DOM edge cases. Cost: We engineered the cost down to $10/mo, but you can bring your own Gemini key and proxies to use it for nearly free. Compare that to the $200+/mo some other lead gen tools like Clay charge. Use the free browser extension for login-walled sites like LinkedIn locally, or the cloud platform for scale on the public web. We are thinking it can be a great upstream tool to your CRM to generate lists and enrich data. Curious to hear if this would make your lead generation, scraping, or automation easier, or is it missing the mark? https://www.youtube.com/watch?v=ggLDvZKuBlU January 14, 2026 at 01:30AM

Show HN: AsciiSketch a free browser-based ASCII art and diagram editor https://bit.ly/3LsKKwu

Show HN: AsciiSketch a free browser-based ASCII art and diagram editor https://bit.ly/4sFPHm8 January 13, 2026 at 11:39PM

Monday, 12 January 2026

Show HN: Modern Philosophy Course https://bit.ly/3LJFioW

Show HN: Modern Philosophy Course Fun module on Thales of Miletus—the beginning of philosophy https://bit.ly/4qUrveo January 13, 2026 at 01:09AM

Show HN: I built a tool to calculate the True Cost of Ownership (TCO) for yachts https://bit.ly/3ZbAaNq

Show HN: I built a tool to calculate the True Cost of Ownership (TCO) for yachts https://bit.ly/4pENiFP January 13, 2026 at 02:11AM

Show HN: Blockchain-Based Equity with Separated Economic and Governance Rights https://bit.ly/3Z7pob1

Show HN: Blockchain-Based Equity with Separated Economic and Governance Rights I've been researching blockchain-based capital markets and developed a framework for tokenized equity with separated economic, dividend, and governance rights. Core idea: Instead of bundling everything into one share, issue three token types: - LOBT: Economic participation, no governance - PST: Automated dividends, no ownership - OT: Full governance control Key challenge: Verifying real-world business operations on-chain without trusted intermediaries. I propose decentralized oracles + ZK proofs, but acknowledge significant unsolved problems. This is research/RFC seeking technical feedback on oracle architecture, regulatory viability, and which verticals this makes sense for. Thoughts? https://bit.ly/4pHIM9v January 13, 2026 at 01:33AM

Sunday, 11 January 2026

Show HN: Voice Composer – Browser-based pitch detection to MIDI/strudel/tidal https://bit.ly/4qRTtHC

Show HN: Voice Composer – Browser-based pitch detection to MIDI/strudel/tidal Built this over the weekend to bridge the gap between "can hum a melody" and "can code algorithmic music patterns" (Strudel/TidalCycles) for live coding and live DJing. What it does: Real-time pitch detection in browser using multiple algorithms: CREPE (deep learning model via TensorFlow.js), YIN (autocorrelation-based fundamental frequency estimation), FFT with harmonic product spectrum, and AMDF (average magnitude difference function). Outputs: visual piano roll, MIDI files, Strudel/TidalCycles code. All client-side, nothing leaves your machine. Why multiple algorithms: Different pitch detection approaches work better for different inputs. CREPE is most accurate but computationally expensive; YIN is fast and works well for clean monophonic input; FFT/HPS handles harmonic-rich sounds; AMDF is lightweight. Let users switch based on their use case. Technical details: React, runs entirely in browser via Web Audio API; Canvas-based visualization with real-time waveform rendering. The original problem: I wanted to learn live coding but had zero music theory. This makes it trivial to capture melodic ideas and immediately use them in pattern-based music systems. Try it: https://bit.ly/3YCk8vV Works best on desktop. It will eventually work more like a Digital Audio Workstation (DAW). Source: https://bit.ly/3YzlR57 https://bit.ly/3YCk8vV January 12, 2026 at 12:06AM
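As a taste of the simplest family on that list, a naive autocorrelation detector over one analysis window; illustrative only, since YIN layers a difference function and thresholding on top of this idea:

  // Estimate the fundamental frequency of a mono Float32Array window.
  function detectPitch(buf: Float32Array, sampleRate: number): number | null {
    const minLag = Math.floor(sampleRate / 1000); // 1000 Hz ceiling
    const maxLag = Math.floor(sampleRate / 60);   // 60 Hz floor
    let bestLag = -1, best = 0;
    for (let lag = minLag; lag <= maxLag; lag++) {
      let sum = 0;
      for (let i = 0; i + lag < buf.length; i++) sum += buf[i] * buf[i + lag];
      if (sum > best) { best = sum; bestLag = lag; }
    }
    return bestLag > 0 ? sampleRate / bestLag : null;
  }

  // Feed it windows from AnalyserNode.getFloatTimeDomainData() in the Web Audio API.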