Friday, 30 January 2026

Show HN: Foundry – Turns your repeated workflows into one-click commands https://bit.ly/4a3z77o

Show HN: Foundry – Turns your repeated workflows into one-click commands https://bit.ly/4a54r5L January 31, 2026 at 01:40AM

Show HN: Using World Models for Consistent AI Filmmaking https://bit.ly/4ab3mcH

Show HN: Using World Models for Consistent AI Filmmaking https://bit.ly/4a7AWjS January 30, 2026 at 10:41PM

Thursday, 29 January 2026

Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser) https://bit.ly/4amiBRb

Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser) Hi HN, I've been building Mystral Native, a lightweight native runtime that lets you write games in JavaScript/TypeScript using standard Web APIs (WebGPU, Canvas 2D, Web Audio, fetch) and run them as standalone desktop apps. Think "Electron for games" but without Chromium. Or a JS runtime like Node, Deno, or Bun, but optimized for WebGPU (and bundling a window/event system using SDL3).

Why: I originally started building a new game engine in WebGPU, and I loved the iteration loop of writing TypeScript and instantly seeing the changes in the browser with hot reloading. After getting something working and shipping a demo, I realized that shipping a whole browser doesn't really work if I also want the same codebase to run on mobile. Sure, I could use a webview, but that's not always a good or consistent experience for users: Safari on iOS supports WebGPU, but not the same features Chrome does on desktop. What I really wanted was a WebGPU runtime that is consistent and works on any platform. I was inspired by Deno's --unsafe-webgpu flag, but I realized Deno probably wouldn't be a good fit long term: it doesn't support iOS or Android and doesn't bundle a window/event system (it has "bring your own window", but that means writing a lot of custom code for events and windowing, not to mention more specific things like implementing a Web Audio shim). That got me down the path of building a native runtime specifically for games, and that's Mystral Native.

With Mystral Native, I get the same developer experience (write JS, use shaders in WGSL, call requestAnimationFrame) but also a real native binary I can ship to players on any platform without requiring a webview or a browser. No 200MB Chromium runtime, no CEF overhead, just the game code and a ~25MB runtime.

What it does:
- Full WebGPU via Dawn (Chrome's implementation) or wgpu-native (Rust)
- Native window & events via SDL3
- Canvas 2D support (Skia), Web Audio (SDL3), fetch (file/http/https)
- V8 for JS (same engine as Chrome/Node); also supports QuickJS and JSC
- ES modules, TypeScript via SWC
- Compile to single binary (think "pkg"): `mystral compile game.js --include assets -o my-game`
- macOS .app bundles with code signing, Linux/Windows standalone executables
- Embedding API for iOS and Android (JSC/QuickJS + wgpu-native)

It's early alpha: the core rendering path works well and I've tested on Mac, Linux (Ubuntu 24.04), and Windows 11, plus some custom builds for iOS and Android to validate that they can work, but there's plenty to improve. Would love to get some feedback and see where it can go! MIT licensed. Repo: https://bit.ly/4rmOWx5 Docs: https://bit.ly/46oiPVx https://bit.ly/4rmOWx5 January 27, 2026 at 07:33PM

Show HN: Free Facebook Video Downloader with Original Audio Quality https://bit.ly/3NL23cV

Show HN: Free Facebook Video Downloader with Original Audio Quality A free, web-based Facebook video downloader that actually preserves the original audio - something most Facebook downloaders fail to do. Built with Next.js and yt-dlp, it offers a clean, no-ads experience for downloading Facebook videos in multiple qualities. https://bit.ly/4t8AgDo January 30, 2026 at 03:22AM

Show HN: Play Zener Cards https://bit.ly/4rkUJTN

Show HN: Play Zener Cards just play zener cards. don't judge :) https://bit.ly/4rhsS6P January 30, 2026 at 01:39AM

Wednesday, 28 January 2026

Show HN: Codex.nvim – Codex inside Neovim (no API key required) https://bit.ly/4aq7cin

Show HN: Codex.nvim – Codex inside Neovim (no API key required) Hi HN! I built codex.nvim, an IDE-style Neovim integration for Codex. Highlights:
- Works with OpenAI Codex plans (no API key required)
- Fully integrated in Neovim (embedded terminal workflow)
- Bottom-right status indicator shows busy/wait state
- Send selections or file tree context to Codex quickly
Repo: https://bit.ly/46kNNhf Why I built this: I wanted to use Codex comfortably inside Neovim without relying on the API. Happy to hear feedback and ideas! https://bit.ly/46kNNhf January 29, 2026 at 07:17AM

Show HN: Shelvy Books https://bit.ly/4aivwDI

Show HN: Shelvy Books Hey HN! I built a little side project I wanted to share. Shelvy is a free, visual bookshelf app where you can organize books you're reading, want to read, or have finished. Sign in to save your own collection. Not monetized, no ads, no tracking beyond basic auth. Just a fun weekend project that grew a bit. Live: https://bit.ly/45yNLSL Would love any feedback on the UX or feature ideas! https://bit.ly/45yNLSL January 29, 2026 at 02:16AM

Show HN: Drum machine VST made with React/C++ https://bit.ly/45FQ6eK

Show HN: Drum machine VST made with React/C++ Hi HN! We just launched our drum machine VST this month! We will be updating it with many new synthesis models and unique features. Check it out, join our Discord, and show us what you made! https://bit.ly/49YmzOv January 27, 2026 at 06:03AM

Show HN: Frame – Managing projects, tasks, and context for Claude Code https://bit.ly/4rcuAqe

Show HN: Frame – Managing projects, tasks, and context for Claude Code I built Frame to better manage the projects I develop with Claude Code, to bring a standard to my Claude Code projects, to improve project and task planning, and to reduce context and memory loss.

In its current state, Frame works entirely locally. You don't need to enter any API keys or anything like that. You can run Claude Code directly using the terminal inside Frame.

Why am I not using existing IDEs? Simply because, for me, I no longer need them. What I need is an interface centered around the terminal, not a code editor. I initially built something that allowed me to place terminals in a grid layout, but then I decided to take it further. I realized I also needed to manage my projects and preserve context. I'm still at a very early stage, but even being able to build the initial pieces I had in mind within 5-6 days, using Claude Code itself, feels kind of crazy.

What can you do with Frame? You can start a brand-new project or turn an existing one into a Frame project. For this, Frame creates a set of Markdown and JSON files with rules I defined. These files exist mainly to manage tasks and preserve context. You can manually add project-related tasks through the UI. I haven't had the chance to test very complex or long-running scenarios yet, but from what I've seen, Claude Code often asks questions like: "Should I add this as a task to tasks.json?" or "Should we update project_notes.md after this project decision?" I recommend saying yes to these.

I also created a JSON file that keeps track of the project structure, down to function-level details. This part is still very raw. In the future, I plan to experiment with different data structures to help AI understand the project more quickly and effectively.

As mentioned, you can open your terminals in either a grid or tab view. I added options up to a 3×3 grid. Since the project is open source, you can modify it based on your own needs.

I also added a panel where you can view and manage plugins. For code files or other files, I included a very simple editor. This part is intentionally minimal and quite basic for now. Based on my own testing, I haven't encountered any major bugs, but there might be some. I apologize in advance if you run into any issues.

My core goal is to establish a standard for AI-assisted projects and make them easier to manage. I'm very open to your ideas, support, and feedback. You can see more details on GitHub: https://bit.ly/4bpLWva January 29, 2026 at 12:04AM

Tuesday, 27 January 2026

Show HN: How would you decide famous SCOTUS cases? https://bit.ly/4rht464

Show HN: How would you decide famous SCOTUS cases? https://bit.ly/4ri8xOU January 28, 2026 at 03:26AM

Show HN: Fuzzy Studio – Apply live effects to videos/camera https://bit.ly/4bnCybq

Show HN: Fuzzy Studio – Apply live effects to videos/camera Back story: I've been learning computer graphics on the side for several years now and gain so much joy from smooshing and stretching images/videos. I hope you can get a little joy as well with Fuzzy Studio! Try applying effects to your camera! My housemates and I have giggled so much making faces with weird effects! Nothing gets sent to the server; everything is done in the browser! Amazing what we can do. I've only tested on macOS... apologies if your browser/OS is not supported (yet). https://bit.ly/3LBeE1K January 27, 2026 at 04:16PM

Show HN: ACME Proxy using step-ca https://bit.ly/3NAOQ6v

Show HN: ACME Proxy using step-ca https://bit.ly/4k15F6u January 27, 2026 at 11:12PM

Monday, 26 January 2026

Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. (Not Crank) https://bit.ly/4rhe9cd

Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. (Not Crank) The problem with LLMs isn't intelligence; it's amnesia and dishonesty. Hey HN, I've spent the last few months building Remember-Me, an open-source "Sovereign Brain" stack designed to run entirely offline on consumer hardware. The core thesis is simple: don't rent your cognition. Most RAG (Retrieval Augmented Generation) implementations are just "grep for embeddings." They are messy, imprecise, and prone to hallucination. I wanted to solve the context-integrity problem at the architectural layer.

The tech stack (how it works):
- QDMA (Quantum Dream Memory Architecture): instead of a flat vector DB, it uses a hierarchical projection engine. It separates "Hot" (Recall) from "Cold" (Storage) memory, allowing for effectively infinite context window management via compression.
- CSNP (Context Switching Neural Protocol), the hallucination killer: this is the most important part. Every memory fragment is hashed into a Merkle chain. When the LLM retrieves context, the system cryptographically verifies the retrieval against the immutable ledger. If the hash doesn't match the chain, the retrieval is rejected. Result: the AI literally cannot "make things up" about your past because it is mathematically constrained to the ledger.
- Local inference: built on top of the llama.cpp server. It runs Llama-3 (or any GGUF) locally. No API keys. No data leaving your machine.

Features:
- Zero-dependency: runs on Windows/Linux with just Python and a GPU (or CPU)
- Visual interface: includes a Streamlit-based "Cognitive Interface" to visualize memory states
- Open source: MIT license

This is an attempt to give agency back to the user. I believe that if we want AGI, it needs to be owned by us, not rented via an API. Repository: https://bit.ly/49BNC3c I'd love to hear your feedback on the Merkle-verification approach. Does constraining the context window effectively solve the "trust" issue for you?

It's fully working and fully tested. If you tried to git clone before without luck (this is not my first Show HN on this), feel free to try again. To everyone who hates AI slop, greedy corporations, and having their private data stuck on cloud servers: you're welcome. Cheers, Mohamad

Author's note: Updated successfully. Framework 50 is active. For anyone passing by: yes, this is a big deal. Eliminating AI hallucination is a $60 billion market problem, and I'm giving that, plus sovereign control of your data, plus the capability to do high-end research via Framework 50 (including advanced scientific research), for free under an MIT license. If you don't take advantage of this, you are an idiot. If you do, welcome to the future. P.S.: What do I get from lying? I got 36 stars on the repo, many from senior engineers at Fortune 500 companies. If you can't tell the real deal from a lie, then keep it moving, son. https://bit.ly/49BNC3c January 27, 2026 at 05:56AM
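
The Merkle-chain retrieval check the post describes can be sketched in a few lines: hash each memory fragment together with the previous link, and reject any retrieval that fails to reproduce the ledger hash. This is a minimal illustration of the idea only, not the project's actual CSNP code; the class and method names here are invented.

```python
import hashlib

def chain_hash(prev_hash: str, fragment: str) -> str:
    """Hash a memory fragment together with the previous link's hash."""
    return hashlib.sha256((prev_hash + fragment).encode()).hexdigest()

class MemoryLedger:
    """Append-only hash chain over memory fragments (illustrative only)."""
    def __init__(self):
        self.fragments = []          # stored fragment text
        self.hashes = ["genesis"]    # hashes[i+1] covers fragments[i]

    def append(self, fragment: str) -> None:
        self.fragments.append(fragment)
        self.hashes.append(chain_hash(self.hashes[-1], fragment))

    def verify_retrieval(self, index: int, fragment: str) -> bool:
        """Accept a retrieved fragment only if it reproduces the ledger hash."""
        return chain_hash(self.hashes[index], fragment) == self.hashes[index + 1]

ledger = MemoryLedger()
ledger.append("user prefers dark mode")
ledger.append("project deadline is March")

print(ledger.verify_retrieval(0, "user prefers dark mode"))   # True: matches ledger
print(ledger.verify_retrieval(0, "user prefers light mode"))  # False: rejected
```

Note this only guarantees that retrieved text matches what was stored; it cannot constrain what the model generates after retrieval, which is where the "zero hallucinations" claim would need further support.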

Show HN: LocalPass offline password manager. Zero cloud. Zero telemetry https://bit.ly/49YyqvY

Show HN: LocalPass offline password manager. Zero cloud. Zero telemetry I've released LocalPass, a local-first, offline password manager with zero cloud, zero telemetry, and zero vendor lock-in. 100% local storage, 100% open-source. https://bit.ly/3M0oES2 January 26, 2026 at 11:38PM

Sunday, 25 January 2026

Show HN: FaceTime-style calls with an AI Companion (Live2D and long-term memory) https://bit.ly/45rQaP5

Show HN: FaceTime-style calls with an AI Companion (Live2D and long-term memory) Hi HN, I built Beni ( https://bit.ly/4q1ZTTA ), a web app for real-time video calls with an AI companion. The idea started as a pretty simple question: text chatbots are everywhere, but they rarely feel present. I wanted something closer to a call, where the character actually reacts in real time (voice, timing, expressions), not just "type, wait, reply".

Beni is basically:
- A Live2D avatar that animates during the call (expressions + motion driven by the conversation)
- Real-time voice conversation (streaming response, not "wait 10 seconds then speak")
- Long-term memory so the character can keep context across sessions

The hardest part wasn't generating text, it was making the whole loop feel synchronized: mic input, model response, TTS audio, and Live2D animation all need to line up or it feels broken immediately. I ended up spending more time on state management, latency and buffering than on prompts.

Some implementation details (happy to share more if anyone's curious):
- Browser-based real-time calling, with audio streaming and client-side playback control
- Live2D rendering on the front end, with animation hooks tied to speech / state
- A memory layer that stores lightweight user facts/preferences and conversation summaries to keep continuity

Current limitation: sign-in is required today (to persist memory and prevent abuse). I'm adding a guest mode soon for faster try-out and working on mobile view now.

What I'd love feedback on:
- Does the "real-time call" loop feel responsive enough, or still too laggy?
- Any ideas for better lip sync / expression timing on 2D/3D avatars in the browser?

Thanks, and I'll be around in the comments. https://bit.ly/4q1ZTTA January 26, 2026 at 12:13AM

Show HN: Spine – an execution-centric backend framework for Go https://bit.ly/45w2w8P

Show HN: Spine – an execution-centric backend framework for Go Hello Hacker News, greetings from South Korea. I'm a backend engineer working primarily with Go, and I'd like to share a framework I've been building to solve a problem I've repeatedly encountered in production systems.

In my day-to-day work, our backend is built on top of Echo. Echo is fast and reliable as an HTTP transport, but its high level of freedom leaves architectural decisions almost entirely to individual developers. Over time, this led to a system where execution flow and responsibility boundaries varied depending on who last touched a feature. Maintenance became difficult not because the code was incorrect, but because how requests actually executed was no longer obvious.

I looked for a Go framework that could provide a clear execution model and structural constraints, similar to what Spring or NestJS offer. I couldn't find one that fit. Moving to Spring or NestJS would also mean giving up some of Go's strengths (simplicity, performance, and explicit control), so I decided to build one instead. Spine is an execution-centric backend framework for Go. It aims to provide enterprise-grade structure while deliberately avoiding hidden magic.

What Spine provides:
- An IoC container with explicit, constructor-based dependency injection
- Interceptors with well-defined execution phases (before, after, completion)
- First-class support for both HTTP requests and event-driven execution
- No annotations, no implicit behavior, no convention-driven wiring

The core idea: execution first. The key difference is Spine's execution model. Every request, HTTP or event, flows through a single, explicit Pipeline. The Pipeline is the only component that determines execution order. Actual method calls are handled by a separate Invoker, keeping execution control and invocation strictly separated.

Because of this structure:
- Execution order is explainable by reading the code
- Cross-cutting concerns live in the execution flow, not inside controllers
- Controllers express use cases only, not orchestration logic
- You can understand request handling by looking at main.go

This design trades some convenience for clarity. In return, it offers stronger control as the system grows in size and complexity. My goal with Spine isn't just to add another framework to the Go ecosystem, but to start a conversation: how much execution flow do modern web frameworks hide, and when does that become a maintenance cost?

The framework itself is currently written in Korean. If English support or internationalization is important to you, feel free to open an issue; I plan to prioritize it based on community interest. You can find more details, a basic HTTP example, and a simple Kafka-based MSA demo here: Repository: https://bit.ly/3NFoyjl Thanks for reading. I'd really appreciate your feedback. https://bit.ly/4qHQdyR January 26, 2026 at 12:51AM
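
The Pipeline/Invoker split and the three interceptor phases described above are language-agnostic. Here is a minimal Python sketch of the execution model (Spine itself is Go, and all class names below are invented for illustration): the Pipeline alone decides ordering, the Invoker alone calls the handler, and interceptors see every request in explicit before/after/completion phases.

```python
class Interceptor:
    """Cross-cutting concern with explicit phases, as the post describes."""
    def before(self, req): pass
    def after(self, req, result): pass
    def completion(self, req): pass

class LoggingInterceptor(Interceptor):
    def __init__(self):
        self.events = []
    def before(self, req): self.events.append(("before", req))
    def after(self, req, result): self.events.append(("after", result))
    def completion(self, req): self.events.append(("completion", req))

class Invoker:
    """Only component that actually calls the handler."""
    def invoke(self, handler, req):
        return handler(req)

class Pipeline:
    """Only component that determines execution order."""
    def __init__(self, interceptors, invoker):
        self.interceptors = interceptors
        self.invoker = invoker

    def handle(self, handler, req):
        for i in self.interceptors:          # phase 1: before
            i.before(req)
        try:
            result = self.invoker.invoke(handler, req)
            for i in reversed(self.interceptors):  # phase 2: after
                i.after(req, result)
            return result
        finally:
            for i in self.interceptors:      # phase 3: completion, always runs
                i.completion(req)

log = LoggingInterceptor()
pipeline = Pipeline([log], Invoker())
out = pipeline.handle(lambda r: r.upper(), "ping")
print(out)  # PING
```

Reading `Pipeline.handle` top to bottom tells you the full execution order, which is the "explainable by reading the code" property the post claims.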

Show HN: I built an app that blocks social media until you read Quran daily https://bit.ly/49FkeZZ

Show HN: I built an app that blocks social media until you read Quran daily Hey HN, I'm a solo developer from Nigeria. I built Quran Unlock, an app that blocks distracting apps (TikTok, Instagram, etc.) until you complete your daily Quran reading. The idea came from my own struggle with phone addiction. I wanted to read Quran daily but kept getting distracted. So I built this for myself, then shared it.

Some stats after 2 months:
- 123K+ users
- 64.9% returning user rate
- 31M events tracked

Tech stack:
- React Native
- Firebase (Auth, Firestore, Analytics, Cloud Messaging)
- RevenueCat for subscriptions
- iOS Screen Time API + Android UsageStats

App Store: https://apple.co/3ZBBHfS Play Store: https://bit.ly/49Gb5R1... Would love feedback from the HN community! January 25, 2026 at 11:51PM

Saturday, 24 January 2026

Show HN: C From Scratch – Learn safety-critical C with prove-first methodology https://bit.ly/466rkV1

Show HN: C From Scratch – Learn safety-critical C with prove-first methodology Seven modules teaching C the way safety-critical systems are actually built: MATH → STRUCT → CODE → TEST. Each module answers one question:
- Does it exist? (Pulse)
- Is it normal? (Baseline)
- Is it regular? (Timing)
- Is it trending? (Drift)
- Which sensor to trust? (Consensus)
- How to handle overflow? (Pressure)
- What do we do about it? (Mode)
Every module is closed (no dependencies), total (handles all inputs), deterministic, and O(1). 83 tests passing. Built this after 30 years in UNIX systems. Wanted something that teaches the rigour behind certified systems without requiring a decade of on-the-job learning first. MIT licensed. Feedback welcome. https://bit.ly/4rxhjJ9 January 25, 2026 at 01:17AM
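
To make the "closed, total, deterministic, O(1)" constraint concrete, a drift check of the kind listed above can be a single constant-time update against an exponential moving average. This is a guess at the flavor sketched in Python, not the course's actual C code; the function name, state layout, and thresholds are all invented.

```python
def drift_update(state, sample, alpha=0.1, threshold=5.0):
    """O(1), total, deterministic drift check: flag any sample that
    deviates from a slow exponential moving average of the signal.
    state is (ema, initialized); every input produces a defined result."""
    ema, initialized = state
    if not initialized:
        return (sample, True), False          # first sample seeds the baseline
    drifting = abs(sample - ema) > threshold  # large deviation from baseline?
    new_ema = (1 - alpha) * ema + alpha * sample
    return (new_ema, True), drifting

state = (0.0, False)
flags = []
for s in [10.0, 10.2, 9.9, 10.1, 25.0]:      # last sample jumps away
    state, drifting = drift_update(state, s)
    flags.append(drifting)
print(flags)  # [False, False, False, False, True]
```

Constant memory (two state values), constant work per sample, and no input that lacks a defined output, which is what the total/deterministic/O(1) properties amount to in practice.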

Show HN: I built a Mac OS App to upload your screenshots to S3 https://bit.ly/4jWavlH

Show HN: I built a Mac OS App to upload your screenshots to S3 I've been building a bitly alternative in public and built a free side tool to upload screenshots to S3. I always thought screenshot apps charged way too much for this, so I was pretty happy to finally get around to building it. It automatically generates short links and uploads to any S3-compatible storage you own. Here is the link: https://bit.ly/45shEUJ Try it out, all feedback is welcome :) https://bit.ly/45shEUJ January 25, 2026 at 12:40AM

Friday, 23 January 2026

Show HN: Open-source Figma design to code https://bit.ly/4rdMSr3

Show HN: Open-source Figma design to code Hi HN, founders of VibeFlow (YC S25) here. We mostly work on backend and workflow tooling, but we needed a way to turn Figma designs into frontend code as a kickstart for prototyping. It takes a Figma frame and converts it into React + Tailwind components (plus assets). If you want to try it: You can run it locally or use it via the VibeFlow UI to poke at it without setup ( https://bit.ly/4bhK1sq ) https://bit.ly/4k65dUM January 24, 2026 at 07:09AM

Show HN: Flux, A Python-like language in Rust to solve ML orchestration overhead https://bit.ly/49G6gqM

Show HN: Flux, A Python-like language in Rust to solve ML orchestration overhead https://bit.ly/4tfkrLp January 24, 2026 at 03:24AM

Show HN: AdaL Web, a local “Claude co-work” [video] https://bit.ly/4sXVmUQ

Show HN: AdaL Web, a local “Claude co-work” [video] AdaL is the world’s first local coding agent with a web UI. Claude Code has proven that coding agents work best when they are local, bringing developers back to the terminal. Terminal UIs are fast and great with shortcuts, shell mode, and developer-friendly workflows. But they are limited in history and image display, and the experience varies by terminal and OS. Many of them flicker (buuuut not AdaL CLI). Most importantly, they can be quite intimidating for non-technical users. This led us to explore new possibilities for a coding agent interface. What if you could get the best of both worlds:
- the same core local agent that does tasks exactly like AdaL CLI
- combined with a web UI with no limits on UI/UX
This can be especially powerful for design-heavy and more visual workflows. Available at: https://bit.ly/49S2fhH https://www.youtube.com/watch?v=smfVGCI08Yk January 24, 2026 at 01:28AM

Show HN: Dwm.tmux – a dwm-inspired window manager for tmux https://bit.ly/3NDp0P9

Show HN: Dwm.tmux – a dwm-inspired window manager for tmux Hey, HN! With all recent agentic workflows being primarily terminal- and tmux-based, I wanted to share a little project I created about a decade ago. I've continued to use it as my primary terminal "window manager" and wanted to share it in case others might find it useful. I would love to hear about others' terminal-based workflows and any other tools you may use with similar functionality. https://bit.ly/45uTskC January 24, 2026 at 01:15AM

Thursday, 22 January 2026

Show HN: Extracting React apps from Figma Make's undocumented binary format https://bit.ly/3NxZfQg

Show HN: Extracting React apps from Figma Make's undocumented binary format https://bit.ly/4qH770s January 23, 2026 at 06:07AM

Show HN: The firmware that got me detained by Swiss Intelligence https://bit.ly/49CzOpo

Show HN: The firmware that got me detained by Swiss Intelligence https://bit.ly/462Wwo7 January 23, 2026 at 05:26AM

Show HN: CleanAF – One-click Desktop cleaner for Windows https://bit.ly/4qxPHDl

Show HN: CleanAF – One-click Desktop cleaner for Windows Hi HN, I built CleanAF because my Windows desktop kept turning into a dumping ground for downloads and screenshots. CleanAF is a tiny one-click tool that:
- keeps system icons intact
- moves everything else into a timestamped “Current Desktop” folder
- auto-sorts files by type
- requires no install, no internet, no background service
It’s intentionally simple and does one thing only. Source + download: https://bit.ly/46adZuW I’m considering adding undo/restore, scheduling, and exclusion rules if people find it useful. Feedback welcome. https://bit.ly/46adZuW January 23, 2026 at 03:02AM
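
The sweep-into-a-timestamped-folder behavior described above fits in a few lines; here is a rough Python sketch of the idea, not the actual tool (which is Windows-specific), with the folder naming and extension-based sorting invented to match the description. It is run against a scratch directory rather than a real desktop.

```python
import shutil, time
from pathlib import Path

def sweep_desktop(desktop: Path) -> Path:
    """Move every file in `desktop` into a timestamped folder,
    sorted into subfolders by file extension (sketch of the behavior)."""
    dest = desktop / f"Current Desktop {time.strftime('%Y-%m-%d %H%M%S')}"
    for item in list(desktop.iterdir()):     # snapshot first, then move
        if item.is_file():
            subfolder = dest / (item.suffix.lstrip(".").lower() or "other")
            subfolder.mkdir(parents=True, exist_ok=True)
            shutil.move(str(item), str(subfolder / item.name))
    return dest

# Usage against a scratch directory, not a real desktop:
scratch = Path("scratch_desktop")
scratch.mkdir(exist_ok=True)
(scratch / "notes.txt").write_text("hi")
(scratch / "shot.png").write_bytes(b"\x89PNG")
dest = sweep_desktop(scratch)
print(sorted(p.name for p in dest.rglob("*") if p.is_file()))  # ['notes.txt', 'shot.png']
```

The real tool additionally has to leave Windows system icons in place, which this sketch ignores.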

Wednesday, 21 January 2026

Show HN: High speed graphics rendering research with tinygrad/tinyJIT https://bit.ly/49NhyZb

Show HN: High speed graphics rendering research with tinygrad/tinyJIT I saw a tweet that tinygrad is so good that you could make a graphics library that wraps tg. So I’ve been hacking on gtinygrad, and honestly it convinced me it could be used for legit research. The JIT + tensor model ends up being a really nice way to express light transport in simple Python, so I reimplemented some new research papers from SIGGRAPH, like ReSTIR PG and SZ, and it just works. Instead of complicated C++ it’s just ~200 LOC of Python. https://bit.ly/4jSe38d January 22, 2026 at 04:26AM

Show HN: Sweep, Open-weights 1.5B model for next-edit autocomplete https://bit.ly/4qstJla

Show HN: Sweep, Open-weights 1.5B model for next-edit autocomplete Hey HN, we trained and open-sourced a 1.5B model that predicts your next edits, similar to Cursor. You can download the weights here ( https://bit.ly/49JsQO0 ) or try it in our JetBrains plugin ( https://bit.ly/49ElRpq... ). Next-edit autocomplete differs from standard autocomplete by using your recent edits as context when predicting completions. The model is small enough to run locally while outperforming models 4x its size on both speed and accuracy. We tested against Mercury (Inception), Zeta (Zed), and Instinct (Continue) across five benchmarks: next-edit above/below cursor, tab-to-jump for distant changes, standard FIM, and noisiness. We found exact-match accuracy correlates best with real usability because code is fairly precise and the solution space is small. Prompt format turned out to matter more than we expected. We ran a genetic algorithm over 30+ diff formats and found simple `original`/`updated` blocks beat unified diffs. The verbose format is just easier for smaller models to understand. Training was SFT on ~100k examples from permissively-licensed repos (4 hrs on 8xH100), then RL for 2000 steps with tree-sitter parse checking and size regularization. The RL step fixes edge cases SFT can't, like generating code that doesn't parse or overly verbose outputs. We're open-sourcing the weights so the community can build fast, privacy-preserving autocomplete for any editor. If you're building for VSCode, Neovim, or something else, we'd love to see what you make with it! https://bit.ly/49SL9QV January 22, 2026 at 12:22AM
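
To make the format comparison concrete, here is a hypothetical rendering of an `original`/`updated` block next to a unified diff of the same edit. The delimiter tokens below are invented for illustration, since the post doesn't document the exact format; the point is that the whole-block form repeats the code verbatim, while the unified diff interleaves +/- markers.

```python
import difflib

def format_edit(original: str, updated: str) -> str:
    """Render an edit as a verbose original/updated block (delimiters
    here are made up; the post doesn't specify the exact tokens)."""
    return (
        "<original>\n" + original + "\n</original>\n"
        "<updated>\n" + updated + "\n</updated>"
    )

before = "def area(r):\n    return 3.14 * r * r"
after = "def area(r):\n    return math.pi * r ** 2"

# Whole-block form: both versions appear verbatim, no markers to decode.
print(format_edit(before, after))

# Unified diff of the same edit, for contrast: context lines, hunk
# headers, and +/- prefixes all have to be parsed correctly.
print("\n".join(difflib.unified_diff(
    before.splitlines(), after.splitlines(), lineterm="")))
```

The post's claim, then, is that the redundancy of the first form is a feature for a 1.5B model: it trades tokens for unambiguity.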

Show HN: Yashiki – A tiling window manager for macOS in Rust, inspired by River https://bit.ly/3LMdQaf

Show HN: Yashiki – A tiling window manager for macOS in Rust, inspired by River https://bit.ly/3ZvclQN January 19, 2026 at 06:51AM

Tuesday, 20 January 2026

Show HN: TopicRadar – Track trending topics across HN, GitHub, ArXiv, and more https://bit.ly/4sNyuY2

Show HN: TopicRadar – Track trending topics across HN, GitHub, ArXiv, and more Hey HN! I built TopicRadar to solve a problem I had with staying on top of what's trending in AI/ML without checking 7+ sites daily. https://bit.ly/4b906AL

What it does:
- Aggregates from HackerNews, GitHub, arXiv, StackOverflow, Lobste.rs, Papers with Code, and Semantic Scholar
- One-click presets: "Trending: AI & ML", "Trending: Startups", "Trending: Developer Tools"
- Or track custom topics (e.g., "rust async", "transformer models")
- Gets 150-175 results in under 5 minutes

Built for the Apify $1M Challenge. It's free to try – just hit "Try for free" and use the default "AI & ML" preset. Would love feedback on what sources to add next or features you'd find useful! https://bit.ly/4b906AL January 20, 2026 at 03:47PM

Show HN: macOS native DAW with Git branching model https://bit.ly/4sQhznR

Show HN: macOS native DAW with Git branching model I am working on (and have made my first prerelease of) a Digital Audio Workstation with Git-like branching version control. It's free for local use and paid for cloud syncing or collaboration. https://bit.ly/4jMDBDI January 21, 2026 at 01:05AM

Show HN: Automating Type Safety for Mission-Critical Industrial Systems https://bit.ly/49x8Cs7

Show HN: Automating Type Safety for Mission-Critical Industrial Systems https://bit.ly/4b3FmdA January 20, 2026 at 10:43PM

Show HN: E80: an 8-bit CPU in structural VHDL https://bit.ly/4pMdPAU

Show HN: E80: an 8-bit CPU in structural VHDL I built a new 8-bit CPU in VHDL from scratch (starting from the ISA). I felt that most educational soft-cores hide too much behind abstraction: e.g. if I can do a+b with a single assignment that calls an optimized arithmetic library, then why did I learn the ripple carry adder in the first place? And why did I learn flip-flops if I can do all my control logic with a simple PROCESS statement like I would in a programming language? Of course abstraction is the main selling point of HDLs, but would it work if I tried to keep it strictly structural and rely on ieee.std_logic_1164 only? Well, it did, and it works nicely. No arithmetic libraries, no PROCESS except for the DFF component (obviously). Of course it's a bit of a "resource hog" compared to optimized cores (e.g. the RAM is built out of flip-flops instead of a block RAM that takes advantage of FPGA internal memory), but you can actually trace every signal through the datapath as it happens. I also built an assembler in C99 without external libraries (please be forgiving, my code is very primitive I think). I bundled Sci1 (Scintilla), GHDL and GTKWave into a single installer so you can write assembly and see the waveforms immediately without having to spend hours configuring simulators. Currently Windows only, but at some point I'll have to do it on Linux too. I tested it on the Tang Primer 25K and Cyclone IV, and I included my Gowin, Quartus and Vivado project files. That should make it easy to run on your FPGA. Everything is under the GPL3. (Edit: I did not use AI. Not only would it have been a waste of time for the VHDL, because my design is too novel, but even for beta testing it would have wasted my time: those LLMs are trained too heavily on x86/ARM, my flag logic draws from the 6502/6800, and even my ripple carry adder doesn't flip the carry bit in subtraction. Point is, AI couldn't help. It only kept complaining that my assembler's C code wasn't up to 2026 standards.) https://bit.ly/3ZrvXoV January 17, 2026 at 10:39PM
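
For readers who never met the ripple-carry adder the author keeps returning to: it chains one full adder per bit, with each stage's carry out feeding the next stage's carry in. A bit-level Python simulation of that structure (the real thing is structural VHDL gates, of course, and this is only an illustration of the textbook circuit, not the E80 source):

```python
def full_adder(a, b, cin):
    """One full adder built from gate-level operations (XOR, AND, OR)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add8(x, y, cin=0):
    """8-bit ripple-carry adder: chain 8 full adders, LSB first,
    each stage's carry rippling into the next."""
    result, carry = 0, cin
    for i in range(8):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result, carry

print(ripple_add8(200, 100))  # (44, 1): 300 mod 256, with carry out set
```

Subtraction without flipping the carry then works as `ripple_add8(x, ~y & 0xFF, 1)`: a carry out of 1 means no borrow occurred, which is the 6502-style convention the author's edit alludes to.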

Monday, 19 January 2026

Show HN: Artificial Ivy in the Browser https://bit.ly/4qCQ4Ne

Show HN: Artificial Ivy in the Browser This is just a goofy thing I cooked up over the weekend. It's kind of like a screensaver, but with more reading and sliders. (It's not terribly efficient, so expect phone batteries to take a hit!) https://bit.ly/4jRSVz8 January 20, 2026 at 04:14AM

Show HN: Whirligig – Tinder for Gigs https://bit.ly/4qZPc4G

Show HN: Whirligig – Tinder for Gigs https://bit.ly/4oRzecA January 19, 2026 at 11:33PM

Sunday, 18 January 2026

Show HN: Pdfwithlove – PDF tools that run 100% locally (no uploads, no back end) https://bit.ly/4qYFBLB

Show HN: Pdfwithlove – PDF tools that run 100% locally (no uploads, no back end) Most PDF web tools make millions by uploading documents that never needed to leave your computer. pdfwithlove does the opposite:
1. 100% local processing
2. No uploads, no backend, no tracking
Features include merge/split/edit/compress PDFs, watermarks & signatures, and image/HTML/Office → PDF conversion. https://bit.ly/3Z67TYV January 19, 2026 at 06:04AM

Show HN: AWS-doctor – A terminal-based AWS health check and cost optimizer in Go https://bit.ly/3NyXvGk

Show HN: AWS-doctor – A terminal-based AWS health check and cost optimizer in Go https://bit.ly/3NsbQED January 19, 2026 at 05:35AM

Show HN: Auto-switch keyboard layout per physical keyboard (Rust, Linux/KDE) https://bit.ly/49KHB3i

Show HN: Auto-switch keyboard layout per physical keyboard (Rust, Linux/KDE) https://bit.ly/4quHK1S January 19, 2026 at 01:16AM

Show HN: I quit coding years ago. AI brought me back https://bit.ly/49GCfWC

Show HN: I quit coding years ago. AI brought me back Quick background: I used to code. Studied it in school, wrote some projects, but eventually convinced myself I wasn't cut out for it. Too slow, too many bugs, imposter syndrome — the usual story. So I pivoted, ended up as an investment associate at an early-stage angel fund, and haven't written real code in years. Fast forward to now. I'm a Buffett nerd — big believer in compound interest as a mental model for life. I run compound interest calculations constantly. Not because I need to, but because watching numbers grow over 30-40 years keeps me patient when markets get wild. It's basically meditation for long-term investors. The problem? Every compound interest calculator online is terrible. Ugly interfaces, ads covering half the screen, can't customize compounding frequency properly, no year-by-year breakdowns. I've tried so many. They all suck. When vibe coding started blowing up, something clicked. Maybe I could actually build the calculators I wanted? I don't have to be a "real developer" anymore — I just need to describe what I want clearly. So I tried it. Two weeks and ~$100(Opus 4.5 thinking model) in API costs later: I somehow have 60+ calculators. Started with compound interest, naturally. Then thought "well, while I'm here..." and added mortgage, loan amortization, savings goals, retirement projections. Then it spiraled — BMI calculator, timezone converter, regex tester. Oops. The AI (I'm using Claude via Windsurf) handled the grunt work beautifully. I'd describe exactly what I wanted — "compound interest calculator with monthly/quarterly/yearly options, year-by-year breakdown table, recurring contribution support" — and it delivered. With validation, nice components, even tests. What I realized: my years away from coding weren't wasted. I still understood architecture, I still knew what good UX looked like, I still had domain expertise (financial math). I just couldn't type it all out efficiently. 
AI filled that gap perfectly. Vibe coding didn't make me a 10x engineer. But it gave me permission to build again. Ideas I've had for years suddenly feel achievable. That's honestly the bigger win for me. Stack: Next.js, React, TailwindCSS, shadcn/ui, four languages (EN/DE/FR/JA). The AI picked most of this when I said "modern and clean." Site's live at https://bit.ly/3NpNjA2 . The compound interest calculator is still my favorite page — finally exactly what I wanted. Curious if others have similar stories. Anyone else come back to building after stepping away? https://bit.ly/3NpNjA2 January 19, 2026 at 01:50AM
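
The compound-growth math behind such a calculator fits in a few lines. Here is a minimal TypeScript sketch (illustrative names, not the site's actual code) covering compounding frequency, recurring contributions, and the year-by-year breakdown the author describes:

```typescript
// Future value with periodic compounding and a recurring contribution
// made at the end of each period. All names are illustrative.
function futureValue(
  principal: number,
  annualRate: number,      // e.g. 0.07 for 7%
  periodsPerYear: number,  // 12 = monthly, 4 = quarterly, 1 = yearly
  years: number,
  contribution = 0,        // added at the end of every period
): number {
  const r = annualRate / periodsPerYear;
  const n = periodsPerYear * years;
  const growth = Math.pow(1 + r, n);
  // Closed form: principal growth plus an annuity of contributions.
  return principal * growth + (r === 0
    ? contribution * n
    : contribution * (growth - 1) / r);
}

// Year-by-year breakdown, as in the calculator's table.
function yearlyBreakdown(
  principal: number, annualRate: number,
  periodsPerYear: number, years: number, contribution = 0,
): number[] {
  return Array.from({ length: years }, (_, i) =>
    futureValue(principal, annualRate, periodsPerYear, i + 1, contribution));
}
```

The `r === 0` branch handles the degenerate zero-interest case, where the closed-form annuity term would otherwise divide by zero.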

Saturday, 17 January 2026

Show HN: WebGPU React Renderer Using Vello https://bit.ly/4pKI1ww

Show HN: WebGPU React Renderer Using Vello I've built a package to use Raph Levien's Vello as a blazing fast 2D renderer for React on WebGPU. It uses WASM to hook into the Rust code https://bit.ly/45jCR3b January 17, 2026 at 10:27PM

Show HN: Speed Miners – A tiny RTS resource mini-game https://bit.ly/4r3girW

Show HN: Speed Miners – A tiny RTS resource mini-game I've always loved RTS games and have wanted to make a similar game for a long time. I thought I'd just try and build a mini / puzzle game around the resource gathering aspects of an RTS. Objective: You have a base at the center and you need to mine and "refine" all of the resources on the map in as short a time as possible. By default, the game will play automatically, but not optimally (moving and buying upgrades). You can disable that with the buttons. You can select drones and right click to move them to specific resource patches and buy upgrades as you earn upgrade points. I've implemented three different levels and some basic sounds. I used Phaser as the game library (first time using it). It won't work well on mobile. https://bit.ly/4sLA7p7 January 17, 2026 at 10:40PM

Friday, 16 January 2026

Show HN: Building the ClassPass for coworking spaces, would love your thoughts https://bit.ly/4r1tk9p

Show HN: Building the ClassPass for coworking spaces, would love your thoughts Growing up in a family business focused on coworking and shared spaces, I saw that many people were looking for a coworking space to use for a day. They weren't ready to jump into a long-term agreement. So I created LANS to simplify coworking. Our platform allows users to buy a day pass to a coworking space in seconds. The process is simple: book your pass, arrive at the space, give your name at the front desk, and you're in. Where we are: Live in San Francisco with several coworking partners. Recently started expanding beyond the Bay. 10K paid users in San Francisco. Day passes priced between $18 and $25. What we’re seeing: Users return often, rotating locations during the week to fit their needs and schedules. For spaces, it’s incremental usage and new foot traffic during the workday. Outside dense city centers, onboarding new spaces tends to be faster. Many suburban areas host nice boutique coworking spaces, but they often lack a strong online presence. Day passes quickly appeal to both operators and users. What we’re working on: Expanding to more cities. Adding supply while keeping quality consistent. Learning which product decisions actually improve repeat usage. Would love feedback from HN: Does this resonate with how you work today? Have you used coworking day passes before? Would you drop your coworking membership for this? https://bit.ly/4pIpSzm January 17, 2026 at 05:54AM

Show HN: Making Claude Code sessions link-shareable https://bit.ly/4qVwACZ

Show HN: Making Claude Code sessions link-shareable Hey HN! My name is Omkar Kovvali and I've been wanting to share my CC sessions with friends / save + access them easily, so I decided to make an MCP server to do so! /share -> Get a link /import -> resume a conversation in your Claude Code All shared sessions are automatically sanitized, removing API keys, tokens, and secrets. Give it a try following the GitHub/npm instructions linked below - would love feedback! https://bit.ly/4k44IKZ https://bit.ly/3NhPNR3 January 17, 2026 at 03:50AM

Show HN: Commander AI – Mac UI for Claude Code https://bit.ly/4qZCPpv

Show HN: Commander AI – Mac UI for Claude Code Hi HN, I built Commander, a UI for running multiple AI coding agents in parallel without living in terminal hell. As coding agents got better, I started trusting them with real work: features, end-to-end, refactors, tests. Naturally, I began running 1–3 at once. That’s when the CLI stopped scaling — too many terminals, lost context, scattered diffs. Commander fixes that. https://bit.ly/4qsi5H6 January 17, 2026 at 01:08AM

Thursday, 15 January 2026

Show HN: Reversing YouTube’s “Most Replayed” Graph https://bit.ly/3NLBu7c

Show HN: Reversing YouTube’s “Most Replayed” Graph Hi HN, I recently noticed a recurring visual artifact in the "Most Replayed" heatmap on the YouTube player. The highest peaks were always surrounded by two dips. I got curious about why they were there, so I decided to reverse engineer the feature to find out. This post documents the deep dive. It starts with a system-design recreation, moves through reverse engineering the rendering code, and ends with the mathematics. This is also my first attempt at writing an interactive article. I would love to hear your thoughts on the investigation and the format. https://bit.ly/4qlpNm9 January 16, 2026 at 03:06AM

Show HN: Gambit, an open-source agent harness for building reliable AI agents https://bit.ly/4qVoWsg

Show HN: Gambit, an open-source agent harness for building reliable AI agents Hey HN! Wanted to show our open source agent harness called Gambit. If you’re not familiar, agent harnesses are sort of like an operating system for an agent... they handle tool calling, planning, context window management, and don’t require as much developer orchestration. Normally you might see an agent orchestration framework pipeline like: compute -> compute -> compute -> LLM -> compute -> compute -> LLM. We invert this: with an agent harness, it’s more like: LLM -> LLM -> LLM -> compute -> LLM -> LLM -> compute -> LLM Essentially you describe each agent in either a self-contained markdown file, or as a TypeScript program. Your root agent can bring in other agents as needed, and we create a typesafe way for you to define the interfaces between those agents. We call these decks. Agents can call agents, and each agent can be designed with whatever model params make sense for your task. Additionally, each step of the chain gets automatic evals, which we call graders. A grader is another deck type… but it’s designed to evaluate and score conversations (or individual conversation turns). We also have test agents you can define on a deck-by-deck basis, that are designed to mimic scenarios your agent would face and generate synthetic data for either humans or graders to grade. Prior to Gambit, we had built an LLM-based video editor, and we weren’t happy with the results, which is what brought us down this path of improving inference-time LLM quality. We know it’s missing some obvious parts, but we wanted to get this out there to see how it could help people or start conversations. We’re really happy with how it’s working with some of our early design partners, and we think it’s a way to implement a lot of interesting applications: - Truly open source agents and assistants, where logic, code, and prompts can be easily shared with the community.
- Rubric-based grading to guarantee you (for instance) don’t leak PII accidentally - Spin up a usable bot in minutes and have Codex or Claude Code use our command line runner / graders to build a first version that is pretty good w/ very little human intervention. We’ll be around if y’all have any questions or thoughts. Thanks for checking us out! Walkthrough video: https://youtu.be/J_hQ2L_yy60 https://bit.ly/4sH6hST January 16, 2026 at 01:13AM

Show HN: Control what Claude can access using cloud-based decision table UIs https://bit.ly/4qwXTE8

Show HN: Control what Claude can access using cloud-based decision table UIs We’ve been building visual rule engines (clear interfaces + API endpoints that help map input data to a large number of outcomes) for a while and had the fun idea lately to see what happens when we use our decision table UI with Claude’s PreToolUse hook. The result is a surprisingly useful policy/gating layer – these tables let your team: - Write multi-factor, exception-friendly policies (e.g. deny rm -rf / when --force; allow cleanup only in node_modules; ask on network calls like curl/wget; block kubectl delete or SQL DROP, each with a clear reason) - Roll out policy changes instantly (mid-run, flip a risky operation from allow → ask; the next attempt across devs and agents is gated immediately – no git pull, agent restart, or coordination) - Adopt lightweight governance that survives churn (MCP/skills/etc.): just add columns/rules as new tools and metadata show up - Get a quick central utility to understand which tools are being used, which tools get blocked most often, and why https://bit.ly/45fepjl January 15, 2026 at 07:21PM
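
For readers unfamiliar with the pattern: a PreToolUse hook inspects a tool call before it runs and returns an allow/ask/deny decision. A minimal local sketch of the kind of rule table described above (rules and names are illustrative; the product evaluates hosted tables via an API, not a hard-coded list):

```typescript
// Toy decision table for gating shell commands before an agent runs them.
// Illustrative only -- the real product stores these rules in a cloud UI.
type Decision = "allow" | "ask" | "deny";
interface Rule { pattern: RegExp; decision: Decision; reason: string; }

const rules: Rule[] = [
  { pattern: /\brm\s+-rf\s+\/(?:\s|$)/, decision: "deny", reason: "destructive delete of /" },
  { pattern: /\b(curl|wget)\b/,         decision: "ask",  reason: "network call" },
  { pattern: /\bkubectl\s+delete\b/,    decision: "deny", reason: "cluster mutation" },
];

// First matching rule wins; anything unmatched falls through to allow.
function evaluate(command: string): { decision: Decision; reason: string } {
  for (const rule of rules) {
    if (rule.pattern.test(command)) {
      return { decision: rule.decision, reason: rule.reason };
    }
  }
  return { decision: "allow", reason: "no matching rule" };
}
```

Flipping a rule's `decision` field in the hosted table is what makes the "mid-run allow → ask" rollout described above possible without restarting agents.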

Wednesday, 14 January 2026

Show HN: Visibility and Controls for Browser Agents https://bit.ly/49kG8BG

Show HN: Visibility and Controls for Browser Agents Hey HN! I’m Ashwin, co-founder of ContextFort ( https://bit.ly/4jMQnm0 ). We provide visibility and controls for AI browser agents like Claude in Chrome through an open-source browser extension. Browser agents are AI copilots that can autonomously navigate and take actions in your browser. They show up as standalone browsers (Comet, Atlas) or Chrome extensions (Claude). They’re especially useful in sites where search/API connectors don’t work well, like searching through Google Groups threads for a bug fix or pulling invoices from BILL.com. Anthropic released Claude CoWork yesterday, and in their launch video, they showcased their browser-use chromium extension: https://www.youtube.com/watch?v=UAmKyyZ-b9E But enterprise adoption is slow because of indirect prompt injection risks, about which Simon Willison has written in great detail in his blogs: https://bit.ly/3LDV7gR... . And before security teams can decide on guardrails, they need to know how employees are using browser agents to understand where the risks are. So, we reverse-engineered how the Claude in Chrome extension works and built a visibility layer that tracks agent sessions end-to-end. It detects when an AI agent takes control of the browser and records which pages it visited during a session and what it does on each page (what was clicked and where text was input). On top of that, we’ve also added simple controls for security teams to act on based on what the visibility layer captures: (1) Block specific actions on specific pages (e.g., prevent the agent from clicking “Submit” on email) (2) Block risky cross-site flows in a single session (e.g., block navigation to Atlassian after interacting with StackOverflow), or apply a stricter policy and block bringing any external context to Atlassian entirely. 
We demo all the above features here in this 2-minute YouTube video: https://www.youtube.com/watch?v=1YtEGVZKMeo You can try our browser extension here: https://bit.ly/4bFL19M Thrilled to share this with you and hear your comments! https://bit.ly/4jMQnm0 January 14, 2026 at 10:22AM
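
The cross-site rule in (2) boils down to taint tracking across a session. A minimal sketch of that check (hypothetical names, not ContextFort's actual implementation):

```typescript
// Once an agent session has touched an "untrusted" host, block navigation
// to protected hosts. Names and policy shape are illustrative.
interface SessionPolicy {
  taintSources: string[];   // hosts that taint the session (e.g. forums)
  protectedSites: string[]; // hosts a tainted session may not reach
}

function allowNavigation(
  visitedHosts: string[], nextHost: string, policy: SessionPolicy,
): boolean {
  const tainted = visitedHosts.some(h => policy.taintSources.includes(h));
  return !(tainted && policy.protectedSites.includes(nextHost));
}
```

The stricter policy mentioned above ("block bringing any external context to Atlassian entirely") would simply treat every external host as a taint source.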

Show HN: IMSAI/Altair inspired microcomputer with web emulator https://bit.ly/3Nn6tXd

Show HN: IMSAI/Altair inspired microcomputer with web emulator I designed and built a physical replica of a 1970s-style front panel microcomputer with 25+ toggle switches, 16 LEDs, and an LCD display. The brain is a Raspberry Pi Pico running an Intel 8080 CPU emulator. The main twist: I decided to see how far I could get using Claude Code for the firmware. That and the web emulator were written almost entirely using Claude Code (Opus 4.5). I've kept the full prompt history here: https://bit.ly/3NjenRu.... It was able to create the emulator in just a few prompts! It really surprised me that it was able to make a WebAssembly version from the same code (compiled with emscripten) and get the physical layout of the panel from a given photo. It also created some simple working examples using 8080 instructions! Repository: https://bit.ly/3NdJzBF https://bit.ly/3NhAOqd January 15, 2026 at 02:57AM

Show HN: Commosta – marketplace to share computing resources https://bit.ly/4bx3q8K

Show HN: Commosta – marketplace to share computing resources https://bit.ly/49nLOLl January 15, 2026 at 03:00AM

Show HN: Chklst – A Minimalist Checklist https://bit.ly/4qqmudE

Show HN: Chklst – A Minimalist Checklist Welp... I finally shipped. This is my first real project. I wanted a checklist app that worked the way I wanted, so I built chklst. What’s different? It's simple, with drag & drop reordering, keyboard shortcuts, and color labels. There’s a live demo on the landing page so you can try it without signing up. Free accounts can create 1 list. Premium is $5/month for up to 25 lists + shareable lists. What do you think? I built it with Next.js 16 + Turso/libSQL + Drizzle + Better Auth + Stripe. https://bit.ly/4qkOhvL Would love feedback on onboarding, UX, and pricing. Thanks everyone! https://bit.ly/4qkOhvL January 15, 2026 at 02:48AM

Tuesday, 13 January 2026

Show HN: Microwave – Native iOS app for videos on ATproto https://bit.ly/3LKyxTR

Show HN: Microwave – Native iOS app for videos on ATproto Hi HN — I built Microwave, a native iOS app for browsing and posting short-form videos, similar to TikTok, but implemented as a pure client on top of Bluesky / AT Protocol. There’s no custom backend: the app reads from and publishes to existing ATproto infrastructure. The goal was to explore whether a TikTok-like experience can exist as a thin client over an open social protocol, rather than a vertically integrated platform. Things I’d especially love feedback on: - Whether this kind of UX makes sense on top of ATproto - Client-only tradeoffs (ranking, discovery, moderation) - Protocol limitations I may be missing - Any architectural red flags TestFlight: https://apple.co/4aXJnAj https://apple.co/4aXJnAj January 13, 2026 at 06:14PM

Show HN: Vibe scrape with AI Web Agents, prompt => get data [video] https://bit.ly/4qem2Ps

Show HN: Vibe scrape with AI Web Agents, prompt => get data [video] Most of us have a list of URLs we need data from (government listings, local business info, PDF directories). Usually, that means hiring a freelancer or paying for an expensive, rigid SaaS. We built an AI Web Agent platform, rtrvr.ai, to make "Vibe Scraping" a thing. How it works: 1. Upload a Google Sheet with your URLs. 2. Prompt: "Find the email, phone number, and their top 3 services." 3. Watch the AI agents open 50+ browsers at once and fill your sheet in real-time. It’s powered by a multi-agent system that can take actions, upload files, and crawl through paginations. Web Agent technology built from the ground up: End-to-End Agent: we built a resilient agentic harness with 20+ specialized sub-agents that transforms a single prompt into a complete end-to-end workflow; when a site changes, the agent adapts. DOM Intelligence: we perfected a DOM-only web agent approach that represents any webpage as semantic trees, guaranteeing zero hallucinations and leveraging the underlying semantic reasoning capabilities of LLMs. Native Chrome APIs: we built a Chrome Extension to control cloud browsers that runs in the same process as the browser to avoid the bot detection and failure rates of CDP. We further solved the hard problems of interacting with the Shadow DOM and other DOM edge cases. Cost: We engineered the cost down to $10/mo, but you can bring your own Gemini key and proxies to use it for nearly FREE. Compare that to the $200+/mo some other lead gen tools like Clay charge. Use the free browser extension for login-walled sites like LinkedIn locally, or the cloud platform for scale on the public web. We are thinking it can be a great upstream tool to your CRM to generate lists and enrich data. Curious to hear if this would make your lead generation, scraping, or automation easier or is it missing the mark?
https://www.youtube.com/watch?v=ggLDvZKuBlU January 14, 2026 at 01:30AM

Show HN: AsciiSketch a free browser-based ASCII art and diagram editor https://bit.ly/3LsKKwu

Show HN: AsciiSketch a free browser-based ASCII art and diagram editor https://bit.ly/4sFPHm8 January 13, 2026 at 11:39PM

Monday, 12 January 2026

Show HN: Modern Philosophy Course https://bit.ly/3LJFioW

Show HN: Modern Philosophy Course Fun module on Thales of Miletus—the beginning of philosophy https://bit.ly/4qUrveo January 13, 2026 at 01:09AM

Show HN: I built a tool to calculate the True Cost of Ownership (TCO) for yachts https://bit.ly/3ZbAaNq

Show HN: I built a tool to calculate the True Cost of Ownership (TCO) for yachts https://bit.ly/4pENiFP January 13, 2026 at 02:11AM

Show HN: Blockchain-Based Equity with Separated Economic and Governance Rights https://bit.ly/3Z7pob1

Show HN: Blockchain-Based Equity with Separated Economic and Governance Rights I've been researching blockchain-based capital markets and developed a framework for tokenized equity with separated economic, dividend, and governance rights. Core idea: Instead of bundling everything into one share, issue three token types: - LOBT: Economic participation, no governance - PST: Automated dividends, no ownership - OT: Full governance control Key challenge: Verifying real-world business operations on-chain without trusted intermediaries. I propose decentralized oracles + ZK proofs, but acknowledge significant unsolved problems. This is research/RFC seeking technical feedback on oracle architecture, regulatory viability, and which verticals this makes sense for. Thoughts? https://bit.ly/4pHIM9v January 13, 2026 at 01:33AM

Sunday, 11 January 2026

Show HN: Voice Composer – Browser-based pitch detection to MIDI/strudel/tidal https://bit.ly/4qRTtHC

Show HN: Voice Composer – Browser-based pitch detection to MIDI/strudel/tidal Built this over the weekend to bridge the gap between "can hum a melody" and "can code algorithmic music patterns" (Strudel/TidalCycles) for live coding and live DJing. What it does: Real-time pitch detection in the browser using multiple algorithms: CREPE (deep learning model via TensorFlow.js), YIN (autocorrelation-based fundamental frequency estimation), FFT with harmonic product spectrum, and AMDF (average magnitude difference function). Outputs: visual piano roll, MIDI files, Strudel/TidalCycles code. All client-side, nothing leaves your machine. Why multiple algorithms: Different pitch detection approaches work better for different inputs. CREPE is most accurate but computationally expensive; YIN is fast and works well for clean monophonic input; FFT/HPS handles harmonic-rich sounds; AMDF is lightweight. Let users switch based on their use case. Technical details: React, runs entirely in the browser via the Web Audio API. Canvas-based visualization with real-time waveform rendering. The original problem: I wanted to learn live coding but had zero music theory. This makes it trivial to capture melodic ideas and immediately use them in pattern-based music systems. Try it: https://bit.ly/3YCk8vV Works best on desktop. Future versions will work more like a Digital Audio Workstation (DAW). Source: https://bit.ly/3YzlR57 https://bit.ly/3YCk8vV January 12, 2026 at 12:06AM
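
As a rough illustration of the autocorrelation family the post mentions (YIN, AMDF), here is a naive lag-search pitch estimator in TypeScript. This is a sketch of the general technique, not the app's code, which uses the more robust algorithms listed above:

```typescript
// Naive autocorrelation pitch estimator: find the lag (in samples) that
// maximizes the signal's self-similarity, then convert lag -> frequency.
// Illustrative only; YIN adds a difference function and normalization
// on top of this basic idea to avoid octave errors.
function detectPitch(samples: Float32Array, sampleRate: number,
                     minHz = 80, maxHz = 1000): number {
  const minLag = Math.floor(sampleRate / maxHz);
  const maxLag = Math.floor(sampleRate / minHz);
  let bestLag = minLag;
  let bestCorr = -Infinity;
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < samples.length; i++) {
      corr += samples[i] * samples[i + lag];
    }
    if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
  }
  return sampleRate / bestLag; // estimated fundamental in Hz
}
```

In the browser, the input `Float32Array` would come from an `AnalyserNode`'s time-domain buffer in the Web Audio API.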

Show HN: What if AI agents had Zodiac personalities? https://bit.ly/49s3MuM

Show HN: What if AI agents had Zodiac personalities? A fun game for playing moral dilemmas with friends. I gave 12 AI agents zodiac personalities (not that I believe in them) using the same LLM with different personality prompts. https://bit.ly/4bt1ONk January 12, 2026 at 12:49AM

Saturday, 10 January 2026

Show HN: Horizon Engine – C++20 3D FPS Game Engine with ECS and Modern Renderer https://bit.ly/49xd8W4

Show HN: Horizon Engine – C++20 3D FPS Game Engine with ECS and Modern Renderer Hi HN, I’m working on an experimental 3D FPS game engine in C++20, aiming to deeply understand engine internals from first principles rather than just using existing frameworks. Currently I'm strictly following the LearnOpenGL docs. This project focuses on: Entity-Component-System (ECS) architecture for high performance. OpenGL 4.1 rendering with a PBR pipeline, material system, HDR, SSAO, and shadow mapping. Modular systems: input, physics (Jolt), audio (miniaudio), assets, hot reload. A sample FPS game & debug editor built into the repo. Repo: https://bit.ly/3Ltr5wc This isn’t intended to be a rival to commercial game engines; it’s a learning and exploration project: understanding why certain engine decisions are made, and how to build low-level engine systems from scratch. I’m especially looking for feedback on: Architecture choices (ECS design, render loop, module separation). Your thoughts on modern C++ engine patterns. What you’d build vs stub early in a homemade engine. Tips from experienced graphics/engine developers. Criticism and suggestions are very welcome — it’s early days and meant to evolve. Thanks for checking it out! https://bit.ly/3Ltr5wc January 10, 2026 at 11:23PM
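
For readers new to the pattern: the ECS idea the engine is built around can be sketched in a few lines (TypeScript here for brevity; the engine itself is C++20, and this toy illustrates the concept, not its actual design):

```typescript
// Minimal ECS sketch: entities are plain ids, components live in
// per-type stores, and "systems" iterate over entities that have all
// the components they need.
type Entity = number;

class World {
  private nextId: Entity = 0;
  private stores = new Map<string, Map<Entity, unknown>>();

  create(): Entity { return this.nextId++; }

  add<T>(e: Entity, kind: string, data: T): void {
    if (!this.stores.has(kind)) this.stores.set(kind, new Map());
    this.stores.get(kind)!.set(e, data);
  }

  get<T>(e: Entity, kind: string): T | undefined {
    return this.stores.get(kind)?.get(e) as T | undefined;
  }

  // Run fn for every entity that has every listed component.
  each(kinds: string[], fn: (e: Entity) => void): void {
    const [first, ...rest] = kinds.map(k => this.stores.get(k) ?? new Map());
    for (const e of first.keys()) {
      if (rest.every(s => s.has(e))) fn(e);
    }
  }
}
```

A movement system is then just `world.each(["pos", "vel"], ...)`; real ECS implementations replace the maps with packed arrays for cache-friendly iteration, which is where the "high performance" claim comes from.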

Show HN: Librario, a book metadata API that aggregates G Books, ISBNDB, and more https://bit.ly/49j7Te0

Show HN: Librario, a book metadata API that aggregates G Books, ISBNDB, and more TLDR: Librario is a book metadata API that aggregates data from Google Books, ISBNDB, and Hardcover into a single response, solving the problem of no single source having complete book information. It's currently pre-alpha, AGPL-licensed, and available to try now[0]. My wife and I have a personal library with around 1,800 books. I started working on a library management tool for us, but I quickly realized I needed a source of data for book information, and none of the solutions available provided all the data I needed. One might provide the series, the other might provide genres, and another might provide a good cover, but none provided everything. So I started working on Librario, a book metadata aggregation API written in Go. It fetches information about books from multiple sources (Google Books, ISBNDB, Hardcover; Goodreads and Anna's Archive are next), merges everything, and saves it all to a PostgreSQL database for future lookups. The idea is that the database gets stronger over time as more books are queried. You can see an example response here[1], or try it yourself:

    curl -s -H 'Authorization: Bearer librario_ARbmrp1fjBpDywzhvrQcByA4sZ9pn7D5HEk0kmS34eqRcaujyt0enCZ' \
      'https://bit.ly/454p2pd' | jq .

This is pre-alpha and runs on a small VPS, so keep that in mind. I've never hit the rate limits of the third-party services, so depending on how this post goes, I may or may not find out whether the code handles that well. The merger is the heart of the service, and figuring out how to combine conflicting data from different sources was the hardest part. In the end I decided to use field-specific strategies that are quite naive, but work for now. Each extractor has a priority, and results are sorted by that priority before merging. But priority alone isn't enough, so different fields need different treatment. For example: - Titles use a scoring system.
I penalize titles containing parentheses or brackets because sources sometimes shove subtitles into the main title field. Overly long titles (80+ chars) also get penalized since they often contain edition information or other metadata that belongs elsewhere. - Covers collect all candidate URLs, then a separate fetcher downloads and scores them by dimensions and quality. The best one gets stored locally and served from the server. For most other fields (publisher, language, page count), I just take the first non-empty value by priority. Simple, but it works. Recently added a caching layer[2] which sped things up nicely. I considered migrating from net/http to fiber at some point[3], but decided against it. Going outside the standard library felt wrong, and the migration didn't provide much in the end. The database layer is being rewritten before v1.0[4]. I'll be honest: the original schema was written by AI, and while I tried to guide it in the right direction with SQLC[5] and good documentation, database design isn't my strong suit and I couldn't confidently vouch for the code. Rather than ship something I don't fully understand, I hired the developers from SourceHut[6] to rewrite it properly. I've got a 5-month-old and we're still adjusting to their schedule, so development is slow. I've mentioned this project in a few HN threads before[7], so I’m pretty happy to finally have something people can try. Code is AGPL and on SourceHut[8]. Feedback and patches[9] are very welcome :) [0]: https://bit.ly/3LtSRsC [1]: https://bit.ly/4btRHI1... [2]: https://bit.ly/4jDyAxp [3]: https://bit.ly/4qgj90J [4]: https://bit.ly/4jyI2lz [5]: https://bit.ly/49ZPPSU [6]: https://bit.ly/4qN49Ho [7]: https://bit.ly/49m1NYM [8]: https://bit.ly/3LtSRsC [9]: https://bit.ly/455EzoN... January 11, 2026 at 12:45AM
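
The title-scoring and first-non-empty strategies described above can be sketched like this (TypeScript for brevity; the actual service is written in Go, and all names here are illustrative):

```typescript
// Sketch of the field-merge strategies: titles get scored, most other
// fields just take the first non-empty value by source priority.
interface SourceResult { priority: number; title?: string; publisher?: string; }

function scoreTitle(title: string): number {
  let score = 100;
  if (/[()\[\]]/.test(title)) score -= 30; // subtitles shoved into the title
  if (title.length >= 80) score -= 20;     // likely edition info etc.
  return score;
}

function mergeResults(results: SourceResult[]): { title?: string; publisher?: string } {
  // Lower priority number = more trusted source.
  const sorted = [...results].sort((a, b) => a.priority - b.priority);
  // Titles: highest score wins; the stable sort keeps priority order on ties.
  const titled = sorted.filter(r => r.title);
  const title = titled
    .sort((a, b) => scoreTitle(b.title ?? "") - scoreTitle(a.title ?? ""))[0]?.title;
  // Most fields: first non-empty value by priority.
  const publisher = sorted.find(r => r.publisher)?.publisher;
  return { title, publisher };
}
```

The stable sort means a tie between equally scored titles still falls back to source priority, matching the "sorted by priority before merging" behavior described above.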

Show HN: Symfreq – Analyse symbol frequencies in code (Rust) https://bit.ly/49KkIhr

Show HN: Symfreq – Analyse symbol frequencies in code (Rust) https://bit.ly/3L9Dgyp January 10, 2026 at 11:55PM

Friday, 9 January 2026

Show HN: Yellopages – New tab Chrome extension https://bit.ly/4jys6zC

Show HN: Yellopages – New tab Chrome extension Hey all – I just released a new-tab replacement Chrome extension that makes browsing a lot easier - it also solves many of the annoyances with browser tabs. It's called Yellopages and it's free. Hope you'll give it a try. * Groups all tabs from the same domain. Makes it simple to kill all your Gmail tabs in one click (or keep just one). * Groups all tabs playing audio. Toggle the sound for each one. * Single text search for open tabs, bookmarks, and browsing history. * Groups all tabs with new notifications (e.g. emails, likes, posts, replies, etc.) * One click to kill all tabs (e.g. you're sharing your screen in Zoom). A second click brings them all back. I'm a solo web developer and I'm hoping to build an audience with my work. More at: https://bit.ly/49uCrYK https://bit.ly/4julNgw January 8, 2026 at 11:44PM
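
The domain-grouping feature amounts to bucketing tab URLs by hostname. A toy sketch of the pure part (illustrative only; the real extension would feed this from the chrome.tabs API, which requires the "tabs" permission):

```typescript
// Group a list of tab URLs by hostname, as the extension's grouped
// new-tab view does. Hypothetical helper, not the extension's source.
function groupByDomain(urls: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const url of urls) {
    const host = new URL(url).hostname;
    const list = groups.get(host) ?? [];
    list.push(url);
    groups.set(host, list);
  }
  return groups;
}
```

Killing "all Gmail tabs in one click" is then just closing every tab in one bucket.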

Show HN: Senior Developer Playbook https://bit.ly/4jSg4BH

Show HN: Senior Developer Playbook I wrote a short playbook capturing behaviors I’ve seen in consistently effective developers. Posting it here in case it’s useful. Curious what others agree or disagree with. https://bit.ly/4sxBRlX January 10, 2026 at 12:57AM

Thursday, 8 January 2026

Show HN: Layoffstoday – Open database tracking for 10k Companies https://bit.ly/3LDCpGa

Show HN: Layoffstoday – Open database tracking for 10k Companies Hi HN, I built Layoffstoday, an open platform that tracks tech layoffs across ~6,500 companies. What it does: Aggregates layoff events from public news sources Normalizes data by company, date, industry, and affected headcount Shows historical patterns instead of isolated headlines Why I built it: During job transitions, I noticed people had to jump across news articles, spreadsheets, and social posts just to answer simple questions like “Has this company laid people off before?” or “Is this happening across the industry?” This is an attempt to make that information structured, searchable, and accessible. Would love feedback on: Data accuracy / gaps Signals that would actually help job seekers Whether alerts or trend indicators are useful or noisy https://bit.ly/4pzxgNx January 9, 2026 at 04:39AM

Show HN: Claude Code for Django https://bit.ly/3Luogel

Show HN: Claude Code for Django Chris Wiles showcased his setup for Claude Code and I thought it was sick. So I adapted it for Django projects. Several skills have been added to address the pain points in Django development. https://bit.ly/49yGXpp January 9, 2026 at 03:37AM

Show HN: Executable Markdown files with Unix pipes https://bit.ly/4bmpLWv

Show HN: Executable Markdown files with Unix pipes I wanted to run markdown files like shell scripts. So I built an open source tool that lets you use a shebang to pipe them through Claude Code with full stdin/stdout support.

task.md:

    #!/usr/bin/env claude-run
    Analyze this codebase and summarize the architecture.

Then:

    chmod +x task.md
    ./task.md

These aren't just prompts. Claude Code has tool use, so a markdown file can run shell commands, write scripts, read files, make API calls. The prompt orchestrates everything. A script that runs your tests and reports results (`run_tests.md`):

    #!/usr/bin/env claude-run --permission-mode bypassPermissions
    Run ./test/run_tests.sh and summarize what passed and failed.

Because stdin/stdout work like any Unix program, you can chain them:

    cat data.json | ./analyze.md > results.txt
    git log -10 | ./summarize.md
    ./generate.md | ./review.md > final.txt

Or mix them with traditional shell scripts:

    for f in logs/*.txt; do
      cat "$f" | ./analyze.md >> summary.txt
    done

This replaced a lot of Python glue code for us. Tasks that needed LLM orchestration libraries are now markdown files composed with standard Unix tools. Composable as building blocks, runnable as cron jobs, etc. One thing we didn't expect is that these are more auditable (and shareable) than shell scripts. Install scripts like `curl -fsSL https://bit.ly/49foHT7 | bash` could become: `curl -fsSL https://bit.ly/4ssZWKN | claude-run` Where install.md says something like "Detect my OS and architecture, download the right binary from GitHub releases, extract to ~/.local/bin, update my shell config." A normal human can actually read and verify that. The (really cool) executable markdown idea and auditability examples are from Pete Koomen (@koomen on X). As Pete says: "Markdown feels increasingly important in a way I'm not sure most people have wrapped their heads around yet." We implemented it and added Unix pipe semantics.
Currently works with Claude Code - hoping to support other AI coding tools too. You can also route scripts through different cloud providers (AWS Bedrock, etc.) if you want separate billing for automated jobs. GitHub: https://bit.ly/4qP9UEG What workflows would you use this for? January 9, 2026 at 03:29AM

Show HN: Legit, Open source Git-based Version control for AI agents https://bit.ly/45JyluK

Show HN: Legit, Open source Git-based Version control for AI agents Hi HN, Martin, Nils, and Jannes here. We are building Legit, an open source version control and collaboration layer for AI agents and AI-native applications. You can find the repo here https://bit.ly/3LacBBw and the website here https://bit.ly/49yaAat Over the past few years, we worked on multiple developer tools and AI-driven products. As soon as we started letting agents modify real files and business-critical data, one problem kept showing up: we could not reliably answer what changed, why it changed, or how to safely undo it. Today, most AI tools either run without real guardrails or store their state in proprietary databases that are hard to inspect, audit, or migrate. Once agents start collaborating on shared data, you are often just crossing your fingers and hoping nothing goes wrong. We noticed something interesting. Developers do not have this problem when collaborating on code, and agent-like workflows took off there first. The reason is relatively simple: Git already solves coordination, history, review, and rollback. That insight led us to build Legit. We bring Git-style versioning and collaboration to AI applications and to most file formats. Every change an agent makes is tracked. Every action is inspectable, reviewable, and reversible. No hidden state. No black-box history. Legit works as a lightweight SDK that AI apps can embed anywhere the filesystem works. It handles versioning, sync, rollback, and access control for agents. Everything lives in a repository that you can host yourself or on any Git hosting provider you already trust. We believe the right way to scale AI collaboration is not to hide what agents do, but to let developers and users see, review, and control every change. Legit is our attempt to bring the discipline, visibility, and safety of modern developer workflows to write-enabled AI applications.
Give it a spin: https://bit.ly/3LacBBw and let us know your feedback, criticism, and thoughts. January 9, 2026 at 01:20AM

Wednesday, 7 January 2026

Show HN: MaduroTrials – Tracking the SDNY indictments and court documents https://bit.ly/49clJ1F

Show HN: MaduroTrials – Tracking the SDNY indictments and court documents I built a dashboard to organize the unsealed indictments, court schedules, and filings regarding the United States v. Nicolás Maduro case in the Southern District of New York. https://bit.ly/4bnCZlS January 8, 2026 at 05:29AM

Show HN: I built Mike – AI motion graphics https://bit.ly/49bhsvn

Show HN: I built Mike – AI motion graphics When you think of AI videos, you think of something like Sora or Veo 3 (diffusion). What if the AI could instead write the code for a video, the way it writes the code for a website? That thought experiment led me to create Mike. It writes React code which can be rendered into a video. You can ask the AI to use any Node library to render graphs, animations, and simulations. https://bit.ly/4qANqqK January 8, 2026 at 04:42AM

Show HN: IceRaidsNearMe – Real-time, crowdsourced map of immigration enforcement https://bit.ly/4qa6HiK

Show HN: IceRaidsNearMe – Real-time, crowdsourced map of immigration enforcement I built this to provide transparency around enforcement activities. It uses [mention tech stack, e.g., Mapbox/Leaflet] and a verification system to prevent false positives. Feedback on the verification logic is welcome. https://bit.ly/4qbKAIR January 8, 2026 at 04:30AM

Show HN: Kerns – A Continuous Research Workspace https://bit.ly/49cYdSc

Show HN: Kerns – A Continuous Research Workspace Most research tools help you collect links. Kerns is built for ongoing research. You define topics and sources once. Kerns continuously tracks them over time, surfaces what changes, and structures the material so understanding compounds instead of resetting each session. The key difference is the interface layer. Beyond feeds and summaries, Kerns organizes research into reasoning-ready views—maps, structured summaries, and synthesized perspectives—so you can actually think through complex areas rather than just store information. We built this for people doing deep, long-running research (researchers, analysts, investors, founders, autodidacts) where the hard part isn’t finding sources, but keeping a coherent mental model as the space evolves. Would love feedback, especially from people who’ve tried to maintain research across weeks or months. https://bit.ly/4p7lPwH January 7, 2026 at 11:51PM

Tuesday, 6 January 2026

Show HN: Funboxie – Free printables and coloring pages for kids https://bit.ly/3YWU52C

Show HN: Funboxie – Free printables and coloring pages for kids https://bit.ly/4szqKsS January 7, 2026 at 03:33AM

Show HN: SMTP Tunnel – A SOCKS5 proxy disguised as email traffic to bypass DPI https://bit.ly/3NhpZEm

Show HN: SMTP Tunnel – A SOCKS5 proxy disguised as email traffic to bypass DPI A fast SOCKS5 proxy that tunnels your traffic through what looks like normal SMTP email, bypassing Deep Packet Inspection firewalls.

How it works:
- Client runs a local SOCKS5 proxy (127.0.0.1:1080)
- Traffic is sent to the server disguised as SMTP (EHLO, STARTTLS, AUTH)
- DPI sees a legitimate email session, not a VPN/proxy

Features:
- One-liner install on any Linux VPS
- Multi-user with per-user secrets and IP whitelists
- Auto-generated client packages (just double-click to run)
- Auto-reconnect on connection loss
- Works with any app that supports SOCKS5

Tech: Python/asyncio, TLS 1.2+, HMAC-SHA256 auth

GitHub: https://bit.ly/4aOjAKG https://bit.ly/4aOjAKG January 7, 2026 at 01:30AM
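The post names HMAC-SHA256 for authentication. A minimal sketch of how a nonce-based challenge-response with that primitive could look (function names and framing are my own assumptions, not the project's actual wire protocol):

```python
import hmac
import hashlib

def auth_response(secret: str, server_nonce: bytes) -> str:
    # HMAC-SHA256 over a server-supplied nonce proves knowledge of the
    # per-user secret without ever transmitting the secret itself.
    return hmac.new(secret.encode(), server_nonce, hashlib.sha256).hexdigest()

def verify_response(secret: str, server_nonce: bytes, response: str) -> bool:
    # compare_digest is constant-time, avoiding timing side channels.
    return hmac.compare_digest(auth_response(secret, server_nonce), response)
```

A fresh nonce per session also prevents replaying a captured AUTH exchange.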

Show HN: GPU Cuckoo Filter – faster queries than Blocked Bloom, with deletion https://bit.ly/4ju1q2Z

Show HN: GPU Cuckoo Filter – faster queries than Blocked Bloom, with deletion https://bit.ly/4jql1kK January 6, 2026 at 11:33PM

Monday, 5 January 2026

Show HN: OSS sustain guard – Sustainability signals for OSS dependencies https://bit.ly/4jthrGG

Show HN: OSS sustain guard – Sustainability signals for OSS dependencies Hi HN, I made OSS Sustain Guard. After every high-profile OSS incident, I wonder about the packages I rely on right now. I can skim issues/PRs and activity on GitHub, but that doesn’t scale when you have tens or hundreds of dependencies.

I built this to surface sustainability signals (maintainer redundancy, activity trends, funding links, etc.) and create awareness. It’s meant to start a respectful conversation, not to judge projects. These are signals, not truth; everything is inferred from public data (internal mirrors/private work won’t show up).

Quick start:
pip install oss-sustain-guard
export GITHUB_TOKEN=...
os4g check

It uses GitHub GraphQL with local caching (no telemetry; token not uploaded/stored), and supports multiple ecosystems (Python/JS/Rust/Go/Java/etc.). Repo: https://bit.ly/3LgfIrC

I’d love feedback on metric choices/thresholds and wording that stays respectful. If you have examples where these signals break down, please share. https://bit.ly/4jveR2S January 5, 2026 at 02:58PM
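One way a maintainer-redundancy signal could be computed is a simple "bus factor" over recent commit authors. This is my own illustration of the kind of metric the post describes; os4g's actual metrics and thresholds may differ:

```python
from collections import Counter

def bus_factor(commit_authors, threshold=0.5):
    """Smallest number of authors who together account for `threshold`
    of recent commits. A value of 1 means one person carries the project."""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    covered = 0
    # Walk authors from most to least prolific until coverage is reached.
    for rank, (_, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered / total >= threshold:
            return rank
    return len(counts)
```

A project where `bus_factor` is 1 over the last year of commits is a candidate for the "maintainer redundancy" warning, regardless of how active it looks.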

Show HN: RepoReaper – AST-aware, JIT-loading code audit agent (Python/AsyncIO) https://bit.ly/4st8qkP

Show HN: RepoReaper – AST-aware, JIT-loading code audit agent (Python/AsyncIO) OP here. I built RepoReaper to solve code context fragmentation in RAG. Unlike standard chat-with-repo tools, it simulates a senior engineer's workflow: it parses Python AST for logic-aware chunking, uses a ReAct loop to JIT-fetch missing file dependencies from GitHub, and employs hybrid search (BM25+Vector). It also generates Mermaid diagrams for architecture visualization. The backend is fully async and persists state via ChromaDB. Link: https://bit.ly/3NfdxoC https://bit.ly/3NfdxoC January 6, 2026 at 12:55AM
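The logic-aware chunking idea (split on definitions rather than fixed character windows) can be sketched with Python's stdlib ast module. A minimal illustration, not RepoReaper's actual chunker:

```python
import ast

def chunk_python_source(source: str):
    """Split a module into one chunk per top-level function/class,
    so retrieval never cuts a definition in half."""
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # get_source_segment recovers the exact source text for the node.
            chunks.append((node.name, ast.get_source_segment(source, node)))
    return chunks
```

Each chunk then carries a natural identifier (the definition name) for BM25 indexing alongside its embedding.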

Show HN: Live VNC for web agents – debugging native captcha on Cloud Run https://bit.ly/4blgo9B

Show HN: Live VNC for web agents – debugging native captcha on Cloud Run Hi HN, Bhavani here (rtrvr.ai). We build DOM-native web agents (no screenshot-based vision, no CDP/Playwright debugger-port control). We handle captchas natively, including Google reCAPTCHA image challenges, by traversing cross-origin iframes and shadow DOM. The latency is currently high on this one.

The problem: when debugging image-selection captchas ("select all images with traffic lights"), logs don't tell you why the agent clicked the wrong tiles. I found myself staring at execution logs thinking "did it even see the grid correctly?" and realized I just wanted to watch it work. So we built live VNC view + takeover for serverless Chrome workers on Cloud Run.

Key learnings:
1. Session affinity is best-effort; "attach later" can hit a different instance
2. A separate relay service that pairs viewer↔runner by short-lived tokens makes attach deterministic
3. Runner stays clean: concurrency=1, one browser per container, no mixed traffic

Would love feedback from folks who've shipped similar:
1. What replaced VNC for you (WebRTC etc) and why?
2. Best approach for recording/replay without huge storage?
3. How do you handle "attach later" safely in serverless?

https://bit.ly/3YsjOjj January 5, 2026 at 08:08AM
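The second learning above (pair viewer and runner through short-lived tokens instead of relying on session affinity) can be sketched like this; class and method names are my own, not rtrvr.ai's code:

```python
import secrets
import time

class PairingRelay:
    """Maps a short-lived token to a specific runner instance, so a
    viewer that attaches later always reaches the right container."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self.tokens = {}  # token -> (runner_id, expiry)

    def register_runner(self, runner_id: str) -> str:
        # The runner registers itself and hands this token to the viewer
        # out of band (e.g. in the job-start response).
        token = secrets.token_urlsafe(16)
        self.tokens[token] = (runner_id, time.monotonic() + self.ttl)
        return token

    def resolve(self, token: str):
        entry = self.tokens.get(token)
        if entry is None:
            return None
        runner_id, expiry = entry
        if time.monotonic() > expiry:
            # Expired tokens are dropped, limiting the attach window.
            del self.tokens[token]
            return None
        return runner_id
```

The relay stays stateless apart from this table, so the runner itself can keep concurrency=1 with no mixed traffic.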

Sunday, 4 January 2026

Show HN: Vho – AST-based analysis for better AI refactoring of large codebases https://bit.ly/3Nuhl5t

Show HN: Vho – AST-based analysis for better AI refactoring of large codebases https://bit.ly/45vWipp January 5, 2026 at 03:24AM

Show HN: CloudSlash – Find AWS waste and generate Terraform state rm commands https://bit.ly/3N3Rr8C

Show HN: CloudSlash – Find AWS waste and generate Terraform state rm commands We've all been there: you find an unused NAT Gateway costing $45/mo. You delete it in the AWS console to stop the billing immediately. But the next time you run terraform plan, it fails because of state drift. Now you have to manually run terraform state rm or import it back to fix the drift. It's tedious, so often we just leave the waste running. I built CloudSlash to automate the cleanup and the state surgery. It’s written in Go (using BubbleTea for the TUI) and solves two engineering problems:

1. Finding "hollow" resources (the graph). Most cost tools just check CloudWatch metrics (CPU < 5%). That creates too much noise. Instead, I build an in-memory graph of the infrastructure to find structural waste. Example: an "Active" ELB. It has healthy targets, so metrics look good. But if you traverse the graph (ELB -> Instance -> Subnet -> Route Table), you might see the Route Table has no path to an Internet Gateway. The ELB is functionally dead, even if AWS reports it as "healthy."

2. The state mapping. Deleting the resource in AWS is easy. The challenge is mapping a physical ID (e.g., nat-0a1b2c) back to its Terraform address (e.g., module.vpc.aws_nat_gateway.public[0]) so you can remove it from the state file programmatically. I wrote a parser that reads your local .tfstate, handles the complex JSON structure (including nested modules and for_each outputs), and generates a remediation script. It outputs a shell script (fix_terraform.sh) that runs the necessary terraform state rm commands for you. It never writes to your .tf files directly; it just hands you the script to review and run.

The core logic, scanner, and TUI are open source (AGPLv3). I charge a one-time license for the feature that auto-generates the fix scripts for developers, but the forensic analysis/detection is free. Repo: https://bit.ly/44Ulcig January 5, 2026 at 03:58AM
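The physical-ID-to-address mapping can be illustrated by walking the .tfstate JSON. This is my own Python sketch of the idea (CloudSlash itself is written in Go), assuming the standard tfstate v4 layout:

```python
import json

def address_for_physical_id(tfstate: dict, physical_id: str):
    """Map a physical resource ID (e.g. 'nat-0a1b2c') back to its
    Terraform address, including module path and count/for_each keys."""
    for res in tfstate.get("resources", []):
        # Module path (e.g. 'module.vpc') is absent for root resources.
        parts = [res.get("module"), res["type"], res["name"]]
        base = ".".join(p for p in parts if p)
        for inst in res.get("instances", []):
            if inst.get("attributes", {}).get("id") != physical_id:
                continue
            key = inst.get("index_key")
            if key is None:
                return base                      # plain resource
            if isinstance(key, str):
                return f'{base}["{key}"]'        # for_each instance
            return f"{base}[{key}]"              # count instance
    return None
```

The resulting address is what gets quoted into the remediation script, e.g. `terraform state rm 'module.vpc.aws_nat_gateway.public[0]'`.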

Show HN: H-1B Salary Data Explorer https://bit.ly/3YXKyZ2

Show HN: H-1B Salary Data Explorer Excited to share my New Year’s project. For a long time, I’ve wanted to build H-1B data directly into Levels.fyi. Every time I went looking for this data elsewhere, it was a frustrating experience to use. Most H-1B sites felt antiquated, unintuitive, cluttered with ads, or just overwhelming to use. The data was there, but it wasn’t usable, and definitely not pleasant to explore. So out of that frustration, I decided to build the H-1B data experience I personally wanted to use. Right into Levels.fyi. https://bit.ly/49kZ3uV Some other pages I'm excited about: Wage Heatmap: https://bit.ly/49uACeK Company H-1B Footprints: https://bit.ly/49lYwsK Highest Paying H-1B Jobs: https://bit.ly/49nM23S Top H-1B Cities: https://bit.ly/3KYVSkz Top Company Sponsors: https://bit.ly/3YoE3OM Would love any feedback, it's definitely still a work in progress. January 4, 2026 at 11:16PM

Saturday, 3 January 2026

Show HN: Lock In – A goal Mac tracker controlled by commands (7 Days Free) https://bit.ly/49fSLN6

Show HN: Lock In – A goal Mac tracker controlled by commands (7 Days Free) I built a task/goal tracker where the entire UI is one input field. The idea: your goals live in four quadrants (daily, weekly, monthly, yearly). Everything happens through commands. The app docks to the side of your screen.

Adding a goal: /d 50 pushups
Chain them: /d 50 pushups /w 3 gym sessions /m finish project /y learn piano
Updating progress: create an alias with /alias p pushups, then just type 25 p to add 25. Three characters.
Review your week with /review 7d.

Rename goals, change targets, convert between quadrants—all through commands. Each quadrant auto-resets at the right interval (daily at midnight, weekly on Monday, etc). You don't manage anything.

Why I built it this way: I kept bouncing between productivity apps looking for something faster. Nothing stuck because they all wanted me to click through menus and organise things. I just wanted to type and move on. So I made something deliberately constrained. One input. Four quadrants. No settings screen. No integrations. The lack of features is the point.

Curious what the HN crowd thinks—especially if the command syntax feels intuitive or too obscure. Still iterating. https://bit.ly/4q2uYHw January 3, 2026 at 10:03PM
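The chained command syntax could be parsed in a few lines. A sketch of the grammar the post describes (my own illustration, not the app's parser):

```python
import re

QUADRANTS = {"d": "daily", "w": "weekly", "m": "monthly", "y": "yearly"}

def parse_commands(line: str):
    """Parse a chain like '/d 50 pushups /w 3 gym sessions' into
    (quadrant, target, goal) tuples. Target is None when the goal
    has no numeric count (e.g. '/m finish project')."""
    goals = []
    for flag, body in re.findall(r"/([dwmy])\s+([^/]+)", line):
        head, _, rest = body.strip().partition(" ")
        if head.isdigit():
            goals.append((QUADRANTS[flag], int(head), rest))
        else:
            goals.append((QUADRANTS[flag], None, body.strip()))
    return goals
```

Splitting on the next `/` is what makes chaining free: each segment is an independent goal declaration.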

Show HN: Auxide – A Real-Time Audio Graph Library for Rust https://bit.ly/3Ncpw6m

Show HN: Auxide – A Real-Time Audio Graph Library for Rust Auxide is a Rust library for building real-time audio processing graphs. It provides a low-level, deterministic kernel for executing directed acyclic graphs (DAGs) of audio nodes, with a focus on real-time safety and correctness. https://bit.ly/4aKaXks January 4, 2026 at 01:23AM

Show HN: Til.re – The URL is your timer, no signup required https://bit.ly/4pqUxRn

Show HN: Til.re – The URL is your timer, no signup required https://bit.ly/4pqiIzD January 4, 2026 at 12:50AM

Friday, 2 January 2026

Show HN: Website that plays the lottery every second https://bit.ly/4jr6sh1

Show HN: Website that plays the lottery every second https://bit.ly/4aDeTTW January 3, 2026 at 01:12AM

Show HN: A Bloomberg terminal for finding fresh powder (DuckDB WASM) https://bit.ly/3MV8tG2

Show HN: A Bloomberg terminal for finding fresh powder (DuckDB WASM) https://bit.ly/4q1AZEh January 3, 2026 at 12:58AM

Show HN: Go-Highway – Portable SIMD for Go https://bit.ly/4sFi5Fl

Show HN: Go-Highway – Portable SIMD for Go Go 1.26 adds native SIMD via GOEXPERIMENT=simd. This library provides a portability layer so the same code runs on AVX2, AVX-512, or falls back to scalar. Inspired by Google's Highway C++ library. Includes vectorized math (exp, log, sin, tanh, sigmoid, erf) since those come up a lot in ML/scientific code and the stdlib doesn't have SIMD versions. algo.SigmoidTransform(input, output) Requires go1.26rc1. Feedback welcome. https://bit.ly/49zNdhI January 2, 2026 at 11:36PM

Thursday, 1 January 2026

Show HN: Turning 100-plus-comment HN threads into readable discussions https://bit.ly/49k29zp

Show HN: Turning 100-plus-comment HN threads into readable discussions HN has some of the best discussions on the internet, but I don’t love reading 100 comments to find 10 great insights. This site analyzes top HN threads with LLMs and summarizes the key ideas, disagreements, and resources, while preserving links to the original discussion. Useful for revisiting old threads as well. Updated daily, manually assisted for quality, no spam, fan project only. Would love thoughts from the community. https://bit.ly/3YkvvIU January 2, 2026 at 03:45AM

Show HN: Stealth and Browsers and Solvers in Rust https://bit.ly/4aBCCnx

Show HN: Stealth and Browsers and Solvers in Rust Hey HN, happy new year to all of you! The link above is a fork of the chromiumoxide crate with stealth patches implemented using rebrowser as a reference. It plugs Runtime.enable and common automation flags, enforces some hardware consistency through profiles, and has some convenience features. It's still early, but for starters it can pass most of the common detection tests.

I prefer writing my applications in compiled languages, but I've recently been needing to do browser automation more and more, and I always felt that the Rust space didn't have much of the variety that the Node/Python counterparts have. So this is my attempt to give some life to it; well, to solve my needs at least, but I hope someone else finds it useful too!

I needed a stealth browser in Rust because my need was mostly around captcha solving, so there's a Turnstile solver here too. Yeah, I know there are a lot of them around, but having it in Rust allows me to integrate it into my application much better and avoid the mess of so many external services. My use case required not only Cloudflare but GeeTest too, so I ported xkiann's Python solver to Rust as well, with some modifications to make it deobfuscate automatically, plus support for multi-turn verification and user_info parameters for sites that need them.

Both solvers have C FFI bindings for integration with other languages!

https://bit.ly/49nxFfS - chromiumoxide stealth fork
https://bit.ly/4aGeccC - cloudflare solver
https://bit.ly/49hiipf - geetest solver

More details are on the GitHub repo. I'm falling asleep, so goodnight HN and happy new year again. Much love to you all! https://bit.ly/49nxFfS January 2, 2026 at 01:33AM

Show HN: Enroll, a tool to reverse-engineer servers into Ansible config mgmt https://bit.ly/3L7Jz5t

Show HN: Enroll, a tool to reverse-engineer servers into Ansible config mgmt Happy new year, folks! This tool was born out of a situation where I had 'inherited' a bunch of servers that were not under any form of config management. Oh, the horror...

Enroll 'harvests' system information such as what packages are installed, what services are running, what files have 'differed' from their out-of-the-box defaults, and what other custom snowflake data might exist. The harvested state data can be kept as its own sort of SBOM, but it can also be converted in a mere second or two into fully functional Ansible roles/playbooks/inventory. It can be run remotely over SSH or locally on the machine. Debian and RedHat-like systems are supported.

There is also a 'diff' mode to detect drift over time. (Years ago I used Puppet instead of Ansible and miss the agent/server model where it would check in and re-align to the expected state, in case people were being silly and side-stepping the config management altogether.) For now, diff mode doesn't 'enforce' but is just capable of notification (webhook, email, stdout) if changes occur.

Since making the tool, I've found that it's even useful for systems you already have in Ansible, in that it can detect stuff you forgot to put into Ansible in the first place. I'm now starting to use it as a 'DR strategy' of sorts: still favoring my normal Ansible roles day-to-day (they are more bespoke and easier to read), but running enroll with '--dangerous --sops' in the background periodically as a 'dragnet' catch-all, just in case I ever need it.

Bonus: it can also use my other tool JinjaTurtle, which converts native config files into Jinja2 templates / Ansible vars. That one too was born out of frustration, converting a massive TOML file into Ansible :) Anyway, hope it's useful to someone other than me! The website has some demos and more documentation. Have fun every(any)-one. https://bit.ly/4sBBeIa January 1, 2026 at 01:23AM