Wednesday, 28 January 2026

Show HN: Drum machine VST made with React/C++ https://bit.ly/45FQ6eK

Show HN: Drum machine VST made with React/C++ Hi HN! We just launched our drum machine VST this month! We will be updating it with many new synthesis models and unique features. Check it out, join our Discord, and show us what you made! https://bit.ly/49YmzOv January 27, 2026 at 06:03AM

Show HN: Frame – Managing projects, tasks, and context for Claude Code https://bit.ly/4rcuAqe

Show HN: Frame – Managing projects, tasks, and context for Claude Code I built Frame to better manage the projects I develop with Claude Code, to bring a standard to my Claude Code projects, to improve project and task planning, and to reduce context and memory loss.

In its current state, Frame works entirely locally. You don't need to enter any API keys or anything like that. You can run Claude Code directly using the terminal inside Frame.

Why am I not using existing IDEs? Simply because, for me, I no longer need them. What I need is an interface centered around the terminal, not a code editor. I initially built something that allowed me to place terminals in a grid layout, but then I decided to take it further. I realized I also needed to manage my projects and preserve context. I'm still at a very early stage, but even being able to build the initial pieces I had in mind within 5–6 days, using Claude Code itself, feels kind of crazy.

What can you do with Frame? You can start a brand-new project or turn an existing one into a Frame project. For this, Frame creates a set of Markdown and JSON files with rules I defined. These files exist mainly to manage tasks and preserve context. You can manually add project-related tasks through the UI. I haven't had the chance to test very complex or long-running scenarios yet, but from what I've seen, Claude Code often asks questions like: "Should I add this as a task to tasks.json?" or "Should we update project_notes.md after this project decision?" I recommend saying yes to these.

I also created a JSON file that keeps track of the project structure, down to function-level details. This part is still very raw. In the future, I plan to experiment with different data structures to help AI understand the project more quickly and effectively.

As mentioned, you can open your terminals in either a grid or tab view. I added options up to a 3×3 grid. Since the project is open source, you can modify it based on your own needs. I also added a panel where you can view and manage plugins. For code files or other files, I included a very simple editor. This part is intentionally minimal and quite basic for now.

Based on my own testing, I haven't encountered any major bugs, but there might be some. I apologize in advance if you run into any issues. My core goal is to establish a standard for AI-assisted projects and make them easier to manage. I'm very open to your ideas, support, and feedback. You can see more details on GitHub: https://bit.ly/4bpLWva January 29, 2026 at 12:04AM
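
The post names tasks.json and project_notes.md but doesn't show their contents. A minimal Python sketch of what such a task file could look like, assuming a simple id/title/status schema (the field names are my guess, not Frame's documented format):

    import json
    from pathlib import Path

    # Hypothetical schema: the post only says Frame keeps tasks in JSON,
    # so the fields below are illustrative, not Frame's actual format.
    tasks = {
        "project": "my-frame-project",
        "tasks": [
            {"id": 1, "title": "Set up grid terminal layout", "status": "done"},
            {"id": 2, "title": "Persist decisions to project_notes.md", "status": "open"},
        ],
    }

    Path("tasks.json").write_text(json.dumps(tasks, indent=2))
    print(json.loads(Path("tasks.json").read_text())["tasks"][1]["title"])

A flat, explicit file like this is easy for both the UI and Claude Code to append to, which matches the "should I add this to tasks.json?" interaction described above.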

Tuesday, 27 January 2026

Show HN: How would you decide famous SCOTUS cases? https://bit.ly/4rht464

Show HN: How would you decide famous SCOTUS cases? https://bit.ly/4ri8xOU January 28, 2026 at 03:26AM

Show HN: Fuzzy Studio – Apply live effects to videos/camera https://bit.ly/4bnCybq

Show HN: Fuzzy Studio – Apply live effects to videos/camera Back story: I've been learning computer graphics on the side for several years now and have gained so much joy from smooshing and stretching images/videos. I hope you can get a little joy as well with Fuzzy Studio! Try applying effects to your camera! My housemates and I have giggled so much making faces with weird effects! Nothing gets sent to the server; everything is done in the browser! Amazing what we can do. I've only tested on macOS... apologies if your browser/OS is not supported (yet). https://bit.ly/3LBeE1K January 27, 2026 at 04:16PM

Show HN: ACME Proxy using step-ca https://bit.ly/3NAOQ6v

Show HN: ACME Proxy using step-ca https://bit.ly/4k15F6u January 27, 2026 at 11:12PM

Monday, 26 January 2026

Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. (Not Crank) https://bit.ly/4rhe9cd

Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. (Not Crank) The problem with LLMs isn't intelligence; it's amnesia and dishonesty.

Hey HN, I've spent the last few months building Remember-Me, an open-source "Sovereign Brain" stack designed to run entirely offline on consumer hardware. The core thesis is simple: don't rent your cognition. Most RAG (Retrieval Augmented Generation) implementations are just "grep for embeddings." They are messy, imprecise, and prone to hallucination. I wanted to solve the context-integrity problem at the architectural layer.

The tech stack (how it works):

- QDMA (Quantum Dream Memory Architecture): instead of a flat vector DB, it uses a hierarchical projection engine. It separates "Hot" (Recall) from "Cold" (Storage) memory, allowing for effectively infinite context window management via compression.

- CSNP (Context Switching Neural Protocol), the hallucination killer: this is the most important part. Every memory fragment is hashed into a Merkle chain. When the LLM retrieves context, the system cryptographically verifies the retrieval against the immutable ledger. If the hash doesn't match the chain, the retrieval is rejected. Result: the AI literally cannot "make things up" about your past because it is mathematically constrained to the ledger.

- Local inference: built on top of the llama.cpp server. It runs Llama-3 (or any GGUF) locally. No API keys. No data leaving your machine.

Features:

- Zero-dependency: runs on Windows/Linux with just Python and a GPU (or CPU).
- Visual interface: includes a Streamlit-based "Cognitive Interface" to visualize memory states.
- Open source: MIT License.

This is an attempt to give agency back to the user. I believe that if we want AGI, it needs to be owned by us, not rented via an API.

Repository: https://bit.ly/49BNC3c

I'd love to hear your feedback on the Merkle-verification approach. Does constraining the context window effectively solve the "trust" issue for you? It's fully working and fully tested. If you tried to git clone before without luck (this is not my first Show HN on this), feel free to try again.

To everyone who hates AI slop, greedy corporations, and having their private data stuck on cloud servers: you're welcome. Cheers, Mohamad

Author's note: Updated successfully. Framework 50 is active. For anyone passing by: yes, this is a big deal. Eliminating AI hallucination is a 60-billion-dollar market problem, and I'm giving THAT + sovereign control of your DATA plus the capability to do high-end research via Framework 50 (including advanced scientific research) for FREE, under an MIT license. If you don't take advantage of this, you are an idiot. If you do, welcome to the future.

P.S.: What do I get from lying? I got 36 stars on the repo, many from high-end senior engineers at Fortune 500 companies. If you're too stupid to tell the real deal from a lie then keep it moving, son. https://bit.ly/49BNC3c January 27, 2026 at 05:56AM
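
The CSNP description is the one concrete mechanism in the post. A minimal sketch, assuming a simple linear hash chain (the repo may use a full Merkle tree; the class and method names here are mine, not the project's):

    import hashlib

    def fragment_hash(prev_hash: str, content: str) -> str:
        # Each entry commits to its content AND the previous entry's hash,
        # so tampering with any past fragment breaks every later hash.
        return hashlib.sha256((prev_hash + content).encode()).hexdigest()

    class MemoryLedger:
        def __init__(self):
            self.entries = []          # list of (content, hash) pairs
            self.head = "genesis"

        def append(self, content: str) -> None:
            self.head = fragment_hash(self.head, content)
            self.entries.append((content, self.head))

        def verify_retrieval(self, index: int, content: str) -> bool:
            # Recompute the chain up to `index`; reject if the stored hash
            # doesn't match what the claimed content would produce.
            h = "genesis"
            for i, (stored, stored_hash) in enumerate(self.entries[: index + 1]):
                claimed = content if i == index else stored
                h = fragment_hash(h, claimed)
                if h != stored_hash:
                    return False
            return True

    ledger = MemoryLedger()
    ledger.append("User's cat is named Miso")
    assert ledger.verify_retrieval(0, "User's cat is named Miso")
    assert not ledger.verify_retrieval(0, "User's cat is named Mochi")  # rejected

On the post's feedback question: a chain like this can guarantee that retrieved text is exactly what was stored, but it cannot by itself stop the model from misstating that text after retrieval.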

Show HN: LocalPass offline password manager. Zero cloud. Zero telemetry https://bit.ly/49YyqvY

Show HN: LocalPass offline password manager. Zero cloud. Zero telemetry I’ve released LocalPass — a local‑first, offline password manager with zero cloud, zero telemetry, and zero vendor lock‑in. 100% local storage, 100% open‑source. https://bit.ly/3M0oES2 January 26, 2026 at 11:38PM
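
The post doesn't describe the storage format, so purely as an assumption about how a local-first vault can work, here is a minimal Python sketch: derive a key from the master password and keep only the encrypted blob on disk (the cryptography library's Fernet, not necessarily what LocalPass uses):

    import base64, json, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_password(password: str, salt: bytes) -> bytes:
        # Stretch the master password into a 32-byte Fernet key.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    salt = os.urandom(16)                          # stored next to the vault file
    f = Fernet(key_from_password("correct horse battery staple", salt))

    vault = {"example.com": {"user": "alice", "password": "hunter2"}}
    blob = f.encrypt(json.dumps(vault).encode())   # what actually hits disk

    print(json.loads(f.decrypt(blob))["example.com"]["user"])  # -> alice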

Sunday, 25 January 2026

Show HN: FaceTime-style calls with an AI Companion (Live2D and long-term memory) https://bit.ly/45rQaP5

Show HN: FaceTime-style calls with an AI Companion (Live2D and long-term memory) Hi HN, I built Beni ( https://bit.ly/4q1ZTTA ), a web app for real-time video calls with an AI companion.

The idea started as a pretty simple question: text chatbots are everywhere, but they rarely feel present. I wanted something closer to a call, where the character actually reacts in real time (voice, timing, expressions), not just "type, wait, reply".

Beni is basically:

- A Live2D avatar that animates during the call (expressions + motion driven by the conversation)
- Real-time voice conversation (streaming response, not "wait 10 seconds then speak")
- Long-term memory so the character can keep context across sessions

The hardest part wasn't generating text, it was making the whole loop feel synchronized: mic input, model response, TTS audio, and Live2D animation all need to line up or it feels broken immediately. I ended up spending more time on state management, latency, and buffering than on prompts (see the sketch below).

Some implementation details (happy to share more if anyone's curious):

- Browser-based real-time calling, with audio streaming and client-side playback control
- Live2D rendering on the front end, with animation hooks tied to speech / state
- A memory layer that stores lightweight user facts/preferences and conversation summaries to keep continuity

Current limitation: sign-in is required today (to persist memory and prevent abuse). I'm adding a guest mode soon for faster try-out and working on mobile view now.

What I'd love feedback on:

- Does the "real-time call" loop feel responsive enough, or still too laggy?
- Any ideas for better lip sync / expression timing on 2D/3D avatars in the browser?

Thanks, and I'll be around in the comments. https://bit.ly/4q1ZTTA January 26, 2026 at 12:13AM
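
None of Beni's internals are public in the post, so this is only a generic asyncio sketch of the synchronization problem described: staged queues where LLM text, synthesized audio, and avatar cues are produced concurrently but consumed in order, with playback as the clock. All stage names and timings are made up.

    import asyncio

    async def llm(text_q: asyncio.Queue):
        # Pretend-stream LLM tokens; a real system wraps a model call here.
        for token in ["Hi", " there", ", how", " are", " you?"]:
            await asyncio.sleep(0.05)          # simulated generation latency
            await text_q.put(token)
        await text_q.put(None)                 # end-of-utterance sentinel

    async def tts(text_q: asyncio.Queue, audio_q: asyncio.Queue):
        # Synthesize each chunk as soon as it arrives, instead of waiting
        # for the full reply (key to feeling responsive rather than laggy).
        while (token := await text_q.get()) is not None:
            await asyncio.sleep(0.02)          # simulated synthesis latency
            await audio_q.put(f"<audio:{token.strip()}>")
        await audio_q.put(None)

    async def playback_and_avatar(audio_q: asyncio.Queue):
        # Playback is the clock: avatar mouth/expression cues fire at the
        # moment each audio chunk plays, so audio and animation can't drift.
        while (chunk := await audio_q.get()) is not None:
            print(f"play {chunk}  |  avatar: mouth-open frame")
            await asyncio.sleep(0.05)          # simulated playback duration

    async def main():
        # Bounded queues give backpressure so no stage runs far ahead.
        text_q, audio_q = asyncio.Queue(maxsize=8), asyncio.Queue(maxsize=8)
        await asyncio.gather(llm(text_q), tts(text_q, audio_q),
                             playback_and_avatar(audio_q))

    asyncio.run(main())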

Show HN: Spine – an execution-centric backend framework for Go https://bit.ly/45w2w8P

Show HN: Spine – an execution-centric backend framework for Go Hello Hacker News, greetings from South Korea! I'm a backend engineer working primarily with Go, and I'd like to share a framework I've been building to solve a problem I've repeatedly encountered in production systems.

In my day-to-day work, our backend is built on top of Echo. Echo is fast and reliable as an HTTP transport, but its high level of freedom leaves architectural decisions almost entirely to individual developers. Over time, this led to a system where execution flow and responsibility boundaries varied depending on who last touched a feature. Maintenance became difficult not because the code was incorrect, but because how requests actually executed was no longer obvious.

I looked for a Go framework that could provide a clear execution model and structural constraints, similar to what Spring or NestJS offer. I couldn't find one that fit. Moving to Spring or NestJS would also mean giving up some of Go's strengths (simplicity, performance, and explicit control), so I decided to build one instead.

Spine is an execution-centric backend framework for Go. It aims to provide enterprise-grade structure while deliberately avoiding hidden magic.

What Spine provides:

• An IoC container with explicit, constructor-based dependency injection
• Interceptors with well-defined execution phases (before, after, completion)
• First-class support for both HTTP requests and event-driven execution
• No annotations, no implicit behavior, no convention-driven wiring

The core idea: execution first. Every request, HTTP or event, flows through a single, explicit Pipeline. The Pipeline is the only component that determines execution order. Actual method calls are handled by a separate Invoker, keeping execution control and invocation strictly separated.

Because of this structure:

• Execution order is explainable by reading the code
• Cross-cutting concerns live in the execution flow, not inside controllers
• Controllers express use cases only, not orchestration logic
• You can understand request handling by looking at main.go

This design trades some convenience for clarity. In return, it offers stronger control as the system grows in size and complexity. My goal with Spine isn't just to add another framework to the Go ecosystem, but to start a conversation: how much execution flow do modern web frameworks hide, and when does that become a maintenance cost?

The framework's documentation is currently written in Korean. If English support or internationalization is important to you, feel free to open an issue; I plan to prioritize it based on community interest.

You can find more details, a basic HTTP example, and a simple Kafka-based MSA demo here: Repository: https://bit.ly/3NFoyjl

Thanks for reading. I'd really appreciate your feedback. https://bit.ly/4qHQdyR January 26, 2026 at 12:51AM
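
Spine's actual API isn't shown in the post (and Spine is Go), so here is only a language-neutral toy in Python of the separation it describes: the Pipeline owns ordering and the interceptor phases (before, after, completion), while an Invoker is the single place handlers get called.

    # Toy illustration of the Pipeline/Invoker split, not Spine's API.
    class Invoker:
        def invoke(self, handler, request):
            return handler(request)            # the only place handlers are called

    class Pipeline:
        def __init__(self, interceptors, invoker):
            self.interceptors, self.invoker = interceptors, invoker

        def execute(self, handler, request):
            for i in self.interceptors:        # explicit, readable order
                i.before(request)
            try:
                response = self.invoker.invoke(handler, request)
                for i in reversed(self.interceptors):
                    i.after(request, response)
                return response
            finally:
                for i in self.interceptors:    # runs on success and failure
                    i.completion(request)

    class LogInterceptor:
        def before(self, req):      print(f"-> {req}")
        def after(self, req, resp): print(f"<- {resp}")
        def completion(self, req):  print("done")

    pipeline = Pipeline([LogInterceptor()], Invoker())
    print(pipeline.execute(lambda req: f"hello {req}", "world"))

The point of the split is that cross-cutting behavior lives only in the pipeline's loop, so execution order is readable in one place rather than scattered through controllers.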

Show HN: I built an app that blocks social media until you read Quran daily https://bit.ly/49FkeZZ

Show HN: I built an app that blocks social media until you read Quran daily Hey HN, I'm a solo developer from Nigeria. I built Quran Unlock - an app that blocks distracting apps (TikTok, Instagram, etc.) until you complete your daily Quran reading.

The idea came from my own struggle with phone addiction. I wanted to read Quran daily but kept getting distracted. So I built this for myself, then shared it.

Some stats after 2 months:
- 123K+ users
- 64.9% returning user rate
- 31M events tracked

Tech stack:
- React Native
- Firebase (Auth, Firestore, Analytics, Cloud Messaging)
- RevenueCat for subscriptions
- iOS Screen Time API + Android UsageStats

App Store: https://apple.co/3ZBBHfS
Play Store: https://bit.ly/49Gb5R1...

Would love feedback from the HN community! January 25, 2026 at 11:51PM
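
The enforcement itself goes through the iOS Screen Time API / Android UsageStats, which are platform-specific; as a hedged sketch of just the gating rule that would drive them (goal, app list, and field names are my inventions, not the app's):

    from datetime import date

    BLOCKED_APPS = {"tiktok", "instagram"}
    DAILY_GOAL_VERSES = 10      # hypothetical daily goal

    progress = {}               # date -> verses read that day

    def log_reading(verses, today=None):
        today = today or date.today()
        progress[today] = progress.get(today, 0) + verses

    def is_blocked(app, today=None):
        # Apps stay blocked until today's reading goal is met.
        today = today or date.today()
        done = progress.get(today, 0) >= DAILY_GOAL_VERSES
        return app in BLOCKED_APPS and not done

    log_reading(4)
    print(is_blocked("tiktok"))   # True: goal not met yet
    log_reading(6)
    print(is_blocked("tiktok"))   # False: 10 verses read, apps unlock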

Saturday, 24 January 2026

Show HN: C From Scratch – Learn safety-critical C with prove-first methodology https://bit.ly/466rkV1

Show HN: C From Scratch – Learn safety-critical C with prove-first methodology Seven modules teaching C the way safety-critical systems are actually built: MATH → STRUCT → CODE → TEST. Each module answers one question:

- Does it exist? (Pulse)
- Is it normal? (Baseline)
- Is it regular? (Timing)
- Is it trending? (Drift)
- Which sensor to trust? (Consensus)
- How to handle overflow? (Pressure)
- What do we do about it? (Mode)

Every module is closed (no dependencies), total (handles all inputs), deterministic, and O(1). 83 tests passing.

Built this after 30 years in UNIX systems. Wanted something that teaches the rigour behind certified systems without requiring a decade of on-the-job learning first. MIT licensed. Feedback welcome. https://bit.ly/4rxhjJ9 January 25, 2026 at 01:17AM
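
The course's modules are in C and their real interfaces aren't shown in the post; as a language-neutral illustration of what "total, deterministic, O(1)" means for one of these checks, here is a guessed shape of a Drift-style check in Python (names and thresholds are mine):

    # Total: every input returns a defined answer, never raises.
    # Deterministic: no randomness, no hidden state.
    # O(1): fixed work per sample; the caller maintains a running mean
    # instead of storing history.
    def drift_status(running_mean: float, sample: float, limit: float) -> str:
        if limit <= 0.0:                  # totality: bad configs still get an answer
            return "INVALID_LIMIT"
        if abs(sample - running_mean) > limit:
            return "DRIFTING"
        return "NOMINAL"

    print(drift_status(20.0, 20.4, 1.0))   # NOMINAL
    print(drift_status(20.0, 27.0, 1.0))   # DRIFTING
    print(drift_status(20.0, 27.0, -1.0))  # INVALID_LIMIT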

Show HN: I built a Mac OS App to upload your screenshots to S3 https://bit.ly/4jWavlH

Show HN: I built a Mac OS App to upload your screenshots to S3 I've been building a bitly alternative in public and built a free side tool to upload screenshots to S3. I always thought screenshot apps charged way too much for this, so I was pretty happy to finally get around to building it. It automatically generates short links and uploads to any S3-compatible storage you own. Here is the link: https://bit.ly/45shEUJ Try it out, all feedback is welcome :) https://bit.ly/45shEUJ January 25, 2026 at 12:40AM
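
The app's internals aren't shown; as a sketch of the core operation (pushing a file to S3-compatible storage and getting a shareable URL back), assuming boto3, with endpoint, bucket, and key as placeholders:

    import boto3

    # Minimal sketch, not the app's code.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example.com",   # any S3-compatible provider
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    bucket, key = "screenshots", "2026/01/25/shot.png"
    s3.upload_file("shot.png", bucket, key,
                   ExtraArgs={"ContentType": "image/png"})

    # A presigned URL is one way to get a shareable link without a public bucket;
    # the app's own short links presumably redirect to something like this.
    print(s3.generate_presigned_url("get_object",
                                    Params={"Bucket": bucket, "Key": key},
                                    ExpiresIn=3600))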

Friday, 23 January 2026

Show HN: Open-source Figma design to code https://bit.ly/4rdMSr3

Show HN: Open-source Figma design to code Hi HN, founders of VibeFlow (YC S25) here. We mostly work on backend and workflow tooling, but we needed a way to turn Figma designs into frontend code as a kickstart for prototyping. It takes a Figma frame and converts it into React + Tailwind components (plus assets). If you want to try it, you can run it locally or use it via the VibeFlow UI to poke at it without setup ( https://bit.ly/4bhK1sq ) https://bit.ly/4k65dUM January 24, 2026 at 07:09AM
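
VibeFlow's pipeline isn't described beyond input and output, but the first step of any Figma-to-code tool is pulling the frame's node tree. A sketch using Figma's public REST API (FILE_KEY, NODE_ID, and the token are placeholders; this is not VibeFlow's code):

    import requests

    FILE_KEY, NODE_ID = "abc123", "1:2"
    resp = requests.get(
        f"https://api.figma.com/v1/files/{FILE_KEY}/nodes",
        params={"ids": NODE_ID},
        headers={"X-Figma-Token": "YOUR_PERSONAL_ACCESS_TOKEN"},
    )
    resp.raise_for_status()
    frame = resp.json()["nodes"][NODE_ID]["document"]

    def walk(node, depth=0):
        # A generator step would map node types (FRAME, TEXT, RECTANGLE, ...)
        # and their style data onto React components and Tailwind classes;
        # here we just print the tree.
        print("  " * depth + f"{node['type']}: {node.get('name', '')}")
        for child in node.get("children", []):
            walk(child, depth + 1)

    walk(frame)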

Show HN: Flux, A Python-like language in Rust to solve ML orchestration overhead https://bit.ly/49G6gqM

Show HN: Flux, A Python-like language in Rust to solve ML orchestration overhead https://bit.ly/4tfkrLp January 24, 2026 at 03:24AM

Show HN: AdaL Web, a local “Claude co-work” [video] https://bit.ly/4sXVmUQ

Show HN: AdaL Web, a local “Claude co-work” [video] AdaL is the world’s first local coding agent with a web UI. Claude Code has proven that coding agents work best when they are local, bringing developers back to the terminal. Terminal UIs are fast and great with shortcuts, shell mode, and developer-friendly workflows. But they are limited in history and image display, and the experience varies by terminal and OS. Many of them flicker (buuuut not AdaL CLI). Most importantly, they can be quite intimidating for non-technical users.

This led us to explore new possibilities for a coding agent interface. What if you could get the best of both worlds:

- the same core local agent that does tasks exactly like AdaL CLI
- combined with a web UI with no limits on UI/UX

This can be especially powerful for design-heavy and more visual workflows.

Available at: https://bit.ly/49S2fhH https://www.youtube.com/watch?v=smfVGCI08Yk January 24, 2026 at 01:28AM

Show HN: Dwm.tmux – a dwm-inspired window manager for tmux https://bit.ly/3NDp0P9

Show HN: Dwm.tmux – a dwm-inspired window manager for tmux Hey, HN! With all the recent agentic workflows being primarily terminal- and tmux-based, I wanted to share a little project I created about a decade ago. I've continued to use this as my primary terminal "window manager" and wanted to share it in case others might find it useful. I would love to hear about others' terminal-based workflows and any other tools you may use with similar functionality. https://bit.ly/45uTskC January 24, 2026 at 01:15AM

Thursday, 22 January 2026

Show HN: Extracting React apps from Figma Make's undocumented binary format https://bit.ly/3NxZfQg

Show HN: Extracting React apps from Figma Make's undocumented binary format https://bit.ly/4qH770s January 23, 2026 at 06:07AM

Show HN: The firmware that got me detained by Swiss Intelligence https://bit.ly/49CzOpo

Show HN: The firmware that got me detained by Swiss Intelligence https://bit.ly/462Wwo7 January 23, 2026 at 05:26AM

Show HN: CleanAF – One-click Desktop cleaner for Windows https://bit.ly/4qxPHDl

Show HN: CleanAF – One-click Desktop cleaner for Windows Hi HN, I built CleanAF because my Windows Desktop kept turning into a dumping ground for downloads and screenshots.

CleanAF is a tiny one-click tool that:

- keeps system icons intact
- moves everything else into a timestamped “Current Desktop” folder
- auto-sorts files by type
- requires no install, no internet, no background service

It’s intentionally simple and does one thing only. Source + download: https://bit.ly/46adZuW

I’m considering adding undo/restore, scheduling, and exclusion rules if people find it useful. Feedback welcome. https://bit.ly/46adZuW January 23, 2026 at 03:02AM
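
The source is linked above; independently of it, the described behavior (timestamped folder plus sort-by-extension) is compact enough to sketch. The folder names and extension map below are my guesses, not the tool's:

    import shutil, time
    from pathlib import Path

    # Sketch of the described sweep, not CleanAF's code.
    BUCKETS = {".png": "Images", ".jpg": "Images", ".pdf": "Documents",
               ".zip": "Archives"}

    desktop = Path.home() / "Desktop"
    target = desktop / f"Current Desktop {time.strftime('%Y-%m-%d %H%M%S')}"

    for item in list(desktop.iterdir()):       # snapshot before we modify the dir
        if item.is_file() and not item.name.startswith("."):
            bucket = target / BUCKETS.get(item.suffix.lower(), "Other")
            bucket.mkdir(parents=True, exist_ok=True)
            shutil.move(str(item), str(bucket / item.name))

Because everything lands in one timestamped folder, an undo/restore feature like the one mentioned above reduces to moving those files back.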

Wednesday, 21 January 2026

Show HN: High speed graphics rendering research with tinygrad/tinyJIT https://bit.ly/49NhyZb

Show HN: High speed graphics rendering research with tinygrad/tinyJIT I saw a tweet that tinygrad is so good that you could make a graphics library that wraps tg. So I’ve been hacking on gtinygrad, and honestly it convinced me it could be used for legit research. The JIT + tensor model ends up being a really nice way to express light transport all in simple Python, so I reimplemented some new research papers from SIGGRAPH like ReSTIR PG and SZ, and it just works. Instead of complicated C++, it’s just ~200 LOC of Python. https://bit.ly/4jSe38d January 22, 2026 at 04:26AM
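
gtinygrad's code isn't excerpted in the post; to show the flavor of "light transport as batched tensor ops", here is a tiny vectorized ray-sphere shader in plain NumPy (tinygrad's Tensor offers broadly similar elementwise/reduction ops, but this is not its API, and the scene is my invention):

    import numpy as np

    # Tensor-style rendering: one batched op per step, no per-pixel loop.
    H, W = 32, 64
    ys, xs = np.mgrid[-1:1:complex(0, H), -2:2:complex(0, W)]
    dirs = np.stack([xs, ys, np.ones_like(xs)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)       # (H, W, 3) rays

    center, radius = np.array([0.0, 0.0, 3.0]), 1.0
    # Ray-sphere for origin 0: t^2 - 2 t (d.c) + |c|^2 - r^2 = 0.
    b = dirs @ center                                          # (H, W) of d.c
    disc = b**2 - (center @ center - radius**2)
    t = b - np.sqrt(np.maximum(disc, 0.0))                     # near intersection
    hit = (disc > 0) & (t > 0)

    normal = t[..., None] * dirs - center
    normal /= np.linalg.norm(normal, axis=-1, keepdims=True)
    shade = np.clip(normal @ np.array([0.5, 0.7, -0.5]), 0, 1) # diffuse-ish term

    for row in np.where(hit, (shade * 3).astype(int), -1):     # ASCII "render"
        print("".join(" .:*#"[v + 1] for v in row))

Swapping the arrays for tinygrad Tensors is the sort of thing that would let a JIT fuse the whole shading pipeline into a few GPU kernels, which is presumably the appeal described above.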