Nigeria's No. 1 music site and complete entertainment portal for music promotion. WhatsApp: +2349077287056
Friday, 13 March 2026
Show HN: What was the world listening to? Music charts, 20 countries (1940–2025) https://bit.ly/40pkPcZ
Show HN: What was the world listening to? Music charts, 20 countries (1940–2025) I built this because I wanted to know what people in Japan were listening to the year I was born. That question spiraled: how does a hit in Rome compare to what was charting in Lagos the same year? How did sonic flavors propagate as streaming made musical influence travel faster than ever? 88mph is a playable map of music history: 230 charts across 20 countries, spanning 8 decades (1940–2025). Every song is playable via YouTube or Spotify. It's open source and I'd love help expanding it — there's a link to contribute charts for new countries and years. The goal is to crowdsource a complete sonic atlas of the world. https://bit.ly/4s6DV3v March 10, 2026 at 05:18PM
Show HN: fftool – A Terminal UI for FFmpeg – Shows Command Before It Runs https://bit.ly/3NwJ71I
Show HN: fftool – A Terminal UI for FFmpeg – Shows Command Before It Runs https://bit.ly/4sCyUzG March 13, 2026 at 11:08AM
Thursday, 12 March 2026
Show HN: Global Maritime Chokepoints https://bit.ly/4sbPHdc
Show HN: Global Maritime Chokepoints https://bit.ly/4cLCnaE March 13, 2026 at 05:42AM
Show HN: Slop or not – can you tell AI writing from human in everyday contexts? https://bit.ly/4uwqqvW
Show HN: Slop or not – can you tell AI writing from human in everyday contexts? I've been building a crowd-sourced AI-detection benchmark. Two responses to the same prompt — one from a real human (pre-2022, provably predating the prevalence of AI slop on the internet), one generated by AI. You pick the slop. Three wrong and you're out. The dataset: 16K human posts from Reddit, Hacker News, and Yelp, each paired with AI generations from 6 models across two providers (Anthropic and OpenAI) at three capability tiers. Same prompt, length-matched, no adversarial coaching — just the model's natural voice with platform context. Every vote is logged with model, tier, source, response time, and position. Early findings from testing: Reddit posts are easy to spot (humans are too casual for AI to mimic); HN is significantly harder. I'll release the full dataset on HuggingFace and publish a paper if this crowdsourced study gathers enough data. If you play the HN-only mode, you're helping calibrate how detectable AI is on here specifically. Would love feedback on the pairs — are any trivially obvious? Are some genuinely hard? https://bit.ly/4upYoBV March 12, 2026 at 10:53PM
Wednesday, 11 March 2026
Show HN: A context-aware permission guard for Claude Code https://bit.ly/4cNXEk1
Show HN: A context-aware permission guard for Claude Code We needed something like --dangerously-skip-permissions that doesn’t nuke your untracked files, exfiltrate your keys, or install malware. Claude Code's permission system is allow-or-deny per tool, but that doesn’t really scale. Deleting some files is fine sometimes. And git checkout is sometimes not fine. Even when you curate permissions, 200 IQ Opus can find a way around it. Maintaining a deny list is a fool's errand. nah is a PreToolUse hook that classifies every tool call by what it actually does, using a deterministic classifier that runs in milliseconds. It maps commands to action types like filesystem_read, package_run, db_write, git_history_rewrite, and applies policies: allow, context (depends on the target), ask, or block. Not everything can be classified, so you can optionally escalate ambiguous stuff to an LLM, but that’s not required. Anything unresolved you can approve, and configure the taxonomy so you don’t get asked again. It works out of the box with sane defaults, no config needed. But you can customize it fully if you want to. No dependencies, stdlib Python, MIT. pip install nah && nah install https://bit.ly/4uo6cnR https://bit.ly/3PvHlOS March 12, 2026 at 12:26AM
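The classify-then-apply-policy flow described above can be sketched in a few lines of Python. This is an illustration of the general technique only, assuming hypothetical rule names and helpers (ACTION_RULES, classify, decide), not nah's actual API or taxonomy:

```python
# Minimal sketch of a deterministic pre-tool-call guard: map a shell
# command to an action type by prefix, then map the action to a policy.
ACTION_RULES = [
    ("rm ",          "filesystem_delete"),
    ("git checkout", "git_working_tree_write"),
    ("git rebase",   "git_history_rewrite"),
    ("cat ",         "filesystem_read"),
    ("npx ",         "package_run"),
]

POLICIES = {
    "filesystem_read":        "allow",
    "package_run":            "ask",
    "filesystem_delete":      "context",  # depends on the target path
    "git_working_tree_write": "context",
    "git_history_rewrite":    "block",
}

def classify(command: str) -> str:
    """Deterministic, millisecond-scale classification: first matching prefix wins."""
    for prefix, action in ACTION_RULES:
        if command.startswith(prefix):
            return action
    return "unknown"

def decide(command: str) -> str:
    """Unclassified or unlisted actions fall back to asking the user."""
    return POLICIES.get(classify(command), "ask")
```

Because the classifier is a pure function of the command string, the same call always gets the same verdict, which is what makes the audit story tractable compared with prompting a model to police itself.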
Show HN: Conduit – Headless browser with SHA-256 hash chain and Ed25519 audit trails https://bit.ly/40qLGW8
Show HN: Conduit – Headless browser with SHA-256 hash chain and Ed25519 audit trails I've been building AI agent tooling and kept running into the same problem: agents browse the web, take actions, fill out forms, scrape data -- and there's zero proof of what actually happened. Screenshots can be faked. Logs can be edited. If something goes wrong, you're left pointing fingers at a black box. So I built Conduit. It's a headless browser (Playwright under the hood) that records every action into a SHA-256 hash chain and signs the result with Ed25519. Each action gets hashed with the previous hash, forming a tamper-evident chain. At the end of a session, you get a "proof bundle" -- a JSON file containing the full action log, the hash chain, the signature, and the public key. Anyone can independently verify the bundle without trusting the party that produced it. The main use cases I'm targeting:
- *AI agent auditing* -- You hand an agent a browser. Later you need to prove what it did. Conduit gives you cryptographic receipts.
- *Compliance automation* -- SOC 2, GDPR data subject access workflows, anything where you need evidence that a process ran correctly.
- *Web scraping provenance* -- Prove that the data you collected actually came from where you say it did, at the time you say it did.
- *Litigation support* -- Capture web content with a verifiable chain of custody.
It also ships as an MCP (Model Context Protocol) server, so Claude, GPT, and other LLM-based agents can use the browser natively through tool calls. The agent gets browse, click, fill, screenshot, and the proof bundle builds itself in the background. Free, MIT-licensed, pure Python. No accounts, no API keys, no telemetry. GitHub: https://bit.ly/40mFlLj Install: `pip install conduit-browser` Would love feedback on the proof bundle format and the MCP integration. Happy to answer questions about the cryptographic design. March 12, 2026 at 12:15AM
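The hash-chain construction described above ("each action gets hashed with the previous hash") can be sketched with the standard library. This is a minimal illustration of the general technique, not Conduit's actual bundle format; the Ed25519 signature over the final hash (e.g. via PyNaCl) is omitted:

```python
import hashlib
import json

GENESIS = "0" * 64  # starting "previous hash" for an empty chain

def chain_actions(actions):
    """Fold each action into a SHA-256 chain: h_i = H(h_{i-1} || action_i)."""
    prev = GENESIS
    chain = []
    for action in actions:
        payload = json.dumps(action, sort_keys=True).encode()
        prev = hashlib.sha256(prev.encode() + payload).hexdigest()
        chain.append({"action": action, "hash": prev})
    return chain

def verify(chain):
    """Recompute the chain from scratch; any edited action breaks every later hash."""
    prev = GENESIS
    for link in chain:
        payload = json.dumps(link["action"], sort_keys=True).encode()
        prev = hashlib.sha256(prev.encode() + payload).hexdigest()
        if prev != link["hash"]:
            return False
    return True
```

The point of the construction is that a verifier only needs the action log itself: tampering with any recorded action (or reordering actions) changes the recomputed hashes from that point on, so the mismatch is detectable without trusting the producer.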
Tuesday, 10 March 2026
Show HN: CryptoFlora – Visualize SHA256 to a flower using Rose curves https://bit.ly/4lkcFMp
Show HN: CryptoFlora – Visualize SHA256 to a flower using Rose curves I made this side tool to visualize SHA-256 hashes while building a loyalty-card wallet application: the goal is to tell at a glance whether a collected stamp is certified by the issuer, instead of scanning something like a QR code or matching a serial number. I think there are more potential use cases, like creating a random avatar based on an email address or something else. Feel free to share your feedback :) source code: https://bit.ly/3Ngkfeo https://bit.ly/4cBM2jV March 11, 2026 at 04:52AM
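A rose curve r = cos(k*theta) gets its petal shape from the parameter k, so one way to derive a flower deterministically from a hash is to pull k and a radius scale from the digest bytes. The mapping below is purely illustrative (invented parameter choices, not CryptoFlora's actual scheme):

```python
import hashlib
import math

def rose_points(data: str, samples: int = 360):
    """Map a SHA-256 digest to rose-curve parameters and sample (x, y) points.

    Same input -> same digest -> same flower, which is what lets a viewer
    recognize a hash visually. Byte-to-parameter choices are illustrative.
    """
    digest = hashlib.sha256(data.encode()).digest()
    k = 2 + digest[0] % 7            # petal parameter from the first byte
    scale = 0.5 + digest[1] / 255    # overall radius from the second byte
    pts = []
    for i in range(samples):
        theta = 2 * math.pi * i / samples
        r = scale * math.cos(k * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return k, pts
```

Further digest bytes could drive hue, stroke width, or petal rotation in the same spirit; the only requirement for the "certified at a glance" use case is that the mapping be deterministic.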
Show HN: Readhn – AI-Native Hacker News MCP Server (Discover, Trust, Understand) https://bit.ly/3Nw8u3F
Show HN: Readhn – AI-Native Hacker News MCP Server (Discover, Trust, Understand) I felt frustrated finding high-signal discussions on HN, and I started this project to better understand how this community actually works. That led me to build readhn, an MCP server that helps with three things:
- Discover: find relevant stories/comments by keyword, score, and time window
- Trust: identify credible voices using EigenTrust-style propagation from seed experts
- Understand: show why each result is ranked, with explicit signals instead of a black-box score
It includes 6 tools: discover_stories, search, find_experts, expert_brief, story_brief, and thread_analysis. I also added readhn setup so AI agents can auto-configure it (Claude Code, Codex, Cursor, and others) after pip install. I'd love feedback on: 1) whether these ranking signals match how you evaluate HN quality, 2) trust-model tradeoffs, 3) what would make this useful in your daily workflow. If this is useful to you, starring the repo helps others discover it: https://bit.ly/40pmNKh https://bit.ly/40pmNKh March 11, 2026 at 01:49AM
Show HN: Claude Code Token Elo https://bit.ly/4s44FSx
Show HN: Claude Code Token Elo https://bit.ly/4ddLBfJ March 10, 2026 at 05:29AM
Show HN: Modulus – Cross-repository knowledge orchestration for coding agents https://bit.ly/3P3vAPB
Show HN: Modulus – Cross-repository knowledge orchestration for coding agents Hello HN, we're Jeet and Husain from Modulus ( https://bit.ly/4s9fGBW ) - a desktop app that lets you run multiple coding agents with shared project memory. We built it to solve two problems we kept running into: - Cross-repo context is broken. When working across multiple repositories, agents don't understand dependencies between them. Even if we open two repos in separate Cursor windows, we still have to manually explain the backend API schema while making changes in the frontend repo. - Agents lose context. Switching between coding agents often means losing context and repeating the same instructions again. Modulus shares memory across agents and repositories so they can understand your entire system. It's an alternative to tools like Conductor for orchestrating AI coding agents to build product, but we focused specifically on multi-repo workflows (e.g., backend repo + client repo + shared library repo + AI agents repo). We built our own Memory and Context Engine from the ground up specifically for coding agents. Why build another agent orchestration tool? It came from our own problem. While working on our last startup, Husain and I were working across two different repositories. Working across repos meant manually pasting API schemas between Cursor windows — telling the frontend agent what the backend API looked like again and again. So we built a small context engine to share knowledge across repos and hooked it up to Cursor via MCP. This later became Modulus. Soon, Modulus will allow teams to share knowledge with others to improve their workflows with AI coding agents - enabling team collaboration in the era of AI coding. Our API will allow developers to switch between coding agents or IDEs without losing any context. 
If you wanna see a quick demo before trying out, here is our launch post - https://bit.ly/3NtfI8E We'd greatly appreciate any feedback you have and hope you get the chance to try out Modulus. https://bit.ly/4s9fGBW March 10, 2026 at 07:52PM
Monday, 9 March 2026
Show HN: Latchup – Competitive programming for hardware description languages https://bit.ly/4bhDgFy
Show HN: Latchup – Competitive programming for hardware description languages https://bit.ly/4cEqHGt March 10, 2026 at 07:06AM
Show HN: I Was Here – Draw on street view, others can find your drawings https://bit.ly/4rUNti6
Show HN: I Was Here – Draw on street view, others can find your drawings Hey HN, I made a site where you can draw on street-level panoramas. Your drawings persist and other people can see them in real time. Strokes get projected onto the 3D panorama so they wrap around buildings and follow the geometry, not just a flat overlay. Uses WebGL2 for rendering, Mapillary for the street imagery. The idea is for it to become a global canvas, anyone can leave a mark anywhere and others stumble onto it. https://bit.ly/40TNruT March 10, 2026 at 06:04AM
Show HN: SAT Protocol – static social networking https://bit.ly/3PqUmJw
Show HN: SAT Protocol – static social networking https://bit.ly/4rXCy7f March 10, 2026 at 04:25AM
Show HN: ChatJC – chatbot for resume/LinkedIn/portfolio info https://bit.ly/3OZ9A8y
Show HN: ChatJC – chatbot for resume/LinkedIn/portfolio info https://bit.ly/4b3Iy8M March 10, 2026 at 01:37AM
Sunday, 8 March 2026
Show HN: Toolkit – Visual Simulators for How Internet Protocols and Systems Work https://bit.ly/4syfVWP
Show HN: Toolkit – Visual Simulators for How Internet Protocols and Systems Work https://bit.ly/4d7ddmL March 8, 2026 at 09:23PM
Saturday, 7 March 2026
Show HN: Jarvey - a local JARVIS for MacOS https://bit.ly/46LTYLE
Show HN: Jarvey - a local JARVIS for MacOS https://bit.ly/3OWd0Jh March 8, 2026 at 12:04AM
Show HN: SiClaw – Open-source AIOps with a hypothesis-driven diagnostic engine https://bit.ly/40dYpLH
Show HN: SiClaw – Open-source AIOps with a hypothesis-driven diagnostic engine https://bit.ly/4rYciJW March 8, 2026 at 03:27AM
Show HN: [Help] I run 4 AI-driven companies simultaneously from my terminal https://bit.ly/4sC3Iki
Show HN: [Help] I run 4 AI-driven companies simultaneously from my terminal https://bit.ly/4cAtEbg March 7, 2026 at 11:13PM
Show HN: MicroBin – Easy File Sharing for Everyone – Self-Hostable https://bit.ly/4b89DpR
Show HN: MicroBin – Easy File Sharing for Everyone – Self-Hostable https://bit.ly/3NlpnxY March 7, 2026 at 10:07PM
Friday, 6 March 2026
Show HN: mTile – native macOS window tiler inspired by gTile https://bit.ly/4cyeD9O
Show HN: mTile – native macOS window tiler inspired by gTile Built this with codex/claude because I missed gTile[1] from Ubuntu and couldn’t find a macOS tiler that felt good on a big ultrawide screen. Most mac options I tried were way too rigid for my workflow (fixed layouts, etc) or wanted a monthly subscription. gTile’s "pick your own grid sizes + keyboard flow" is exactly what I wanted and used for years. Still rough in places and not full parity, but very usable now and I run it daily at work (forced mac life). [1]: https://bit.ly/4rhJXNF https://bit.ly/40iPJUh March 6, 2026 at 11:21PM
Thursday, 5 March 2026
Show HN: Kanon 2 Enricher – the first hierarchical graphitization model https://bit.ly/4boHrAq
Show HN: Kanon 2 Enricher – the first hierarchical graphitization model Hey HN, This is Kanon 2 Enricher, the first hierarchical graphitization model. It represents an entirely new class of AI models designed to transform document corpora into rich, highly structured knowledge graphs. In brief, our model is capable of:
- Entity extraction, classification, and linking: identifying key entities like individuals, companies, governments, locations, dates, documents, and more, and classifying and linking them together.
- Hierarchical segmentation: breaking a document up into its full hierarchy, including divisions, sections, subsections, paragraphs, and so on.
- Text annotation: extracting common textual elements such as headings, signatures, tables of contents, cross-references, and the like.
We built Kanon 2 Enricher from scratch. Every node, edge, and label in the Isaacus Legal Graph Schema (ILGS), which is the format it outputs to, corresponds to at least one task head in our model. In total, we built 58 different task heads jointly optimized with 70 different loss terms. Thanks to its novel architecture, unlike your typical LLM, Kanon 2 Enricher doesn't generate extractions token by token (which introduces the possibility of hallucinations) but instead directly classifies all the tokens in a document in a single shot. This makes it really fast. Because Kanon 2 Enricher's feature set is so wide, there are a myriad of applications it can be used for, from financial forensics and due diligence all the way to legal research. One of the coolest applications we've seen so far is where a Canadian government built a knowledge graph out of thousands of federal and provincial laws in order to accelerate regulatory analysis. Another cool application is something we built ourselves, a 3D interactive map of Australian High Court cases since 1903, which you can find right at the start of our announcement. 
Our model has already been in use for the past month, since we released it through a closed beta that included Harvey, KPMG, Clifford Chance, Clyde & Co, Alvarez & Marsal, Smokeball, and 96 other design partners. Their feedback was instrumental in improving Kanon 2 Enricher before its public release, and we're immensely thankful to each and every beta participant. We're eager to see what other developers manage to build with our model now that it's out publicly. https://bit.ly/4ud0Aga March 3, 2026 at 09:55AM
Show HN: I built an AI exam prep platform for AWS certs after failing one myself https://bit.ly/4aY1AvG
Show HN: I built an AI exam prep platform for AWS certs after failing one myself Hey HN, I failed the AWS Advanced Networking Specialty exam. Studied for weeks, used the usual prep sites, thought I was ready — wasn't. The problem wasn't effort, it was the tools. Static question banks don't teach you to think through AWS architecture decisions. They teach you to pattern-match answers. That falls apart on the harder exams. So I built Knowza to fix that for myself, and then figured others probably had the same frustration. The idea: instead of a static question bank, use AI to generate questions, adapt to what you're weak on, and actually explain the reasoning behind each answer — the way a senior engineer would explain it, not a multiple-choice rubric. The stack:
- Next.js + Amplify Gen 2
- DynamoDB (direct Server Actions, no API layer)
- AWS Bedrock (Claude) for question generation and explanations
- Stripe for billing
The hardest part honestly wasn't the AI — it was getting question quality consistent enough that I'd trust it for real exam prep. Still iterating on that. Early days, one person, built alongside a day job. Would love feedback from anyone who's grinded AWS certs or has thoughts on AI-generated educational content. knowza.ai https://bit.ly/3MMq0R7 March 5, 2026 at 09:27PM
Wednesday, 4 March 2026
Show HN: A shell-native cd-compatible directory jumper using power-law frecency https://bit.ly/4cz3be9
Show HN: A shell-native cd-compatible directory jumper using power-law frecency I have used this tool privately since 2011 to manage directory jumping. While it is conceptually similar to tools like z or zoxide, the underlying ranking model is different. It uses a power-law convolution with the time series of cd actions to calculate a history-aware "frecency" metric instead of the standard heuristic counters and multipliers. This approach moves away from point-estimates for recency. Most tools look only at the timestamp of the last visit, which can allow a "one-off" burst of activity to clobber long-term habits. By convolving a configurable history window (typically the last 1,000+ events), the score balances consistent habits against recent flukes. On performance: Despite the O(N) complexity of calculating decay for 1,000+ events, query time is ~20-30ms (Real Time) in ksh/bash, which is well below the threshold of perceived lag. I intentionally chose a Logical Path (pwd -L) model. Preserving symlink names ensures that the "Name" remains the primary searchable key. Resolving to physical paths often strips away the very keyword the user intends to use for searching. https://bit.ly/3N95WIu March 4, 2026 at 11:20AM
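The power-law idea above (every past visit contributes, with influence decaying as a power of its age rather than a single last-visit timestamp) can be sketched in a few lines. Function name, alpha, and the window size are illustrative, not the tool's actual implementation:

```python
import time
from collections import defaultdict

def frecency_scores(events, now=None, alpha=0.8, window=1000):
    """Score directories by a power-law convolution over the cd history.

    Each visit contributes age**-alpha, so the score aggregates the whole
    history window instead of a point estimate of the last visit.
    events: list of (path, unix_timestamp) cd actions, oldest first.
    """
    now = time.time() if now is None else now
    scores = defaultdict(float)
    for path, ts in events[-window:]:   # only the last N cd events
        age = max(now - ts, 1.0)        # age in seconds, floored at 1s
        scores[path] += age ** -alpha
    return dict(scores)
```

This is O(N) per query as the post notes, but with N around 1,000 the arithmetic is trivial, which is consistent with the reported ~20-30 ms query times even in a shell-driven pipeline.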
Tuesday, 3 March 2026
Show HN: DubTab – Live AI Dubbing in the Browser (Meet/YouTube/Twitch/etc.) https://bit.ly/4u1yiVL
Show HN: DubTab – Live AI Dubbing in the Browser (Meet/YouTube/Twitch/etc.) Hi HN — I'm Ethan, a solo developer. I built DubTab because I spend a lot of time in meetings and watching videos in languages I'm not fluent in, and subtitles alone don't always keep up (especially when the speaker is fast). DubTab is a Chrome/Edge extension that listens to the audio of your current tab and gives you:
1. Live translated subtitles (optional bilingual mode)
2. Optional AI dubbing with a natural-sounding voice — so you can follow by listening, not just reading
The goal is simple: make it easier to understand live audio in another language in real time, without downloading files or doing an upload-and-wait workflow. How you'd use it:
1. Open a video call / livestream / lecture / any tab with audio
2. Start DubTab
3. Choose target language (and source language if you know it)
4. Use subtitles only, or turn on natural AI dubbing and adjust the audio mix (keep original, or duck it)
What it's good for:
1. Following cross-language meetings/classes when you're tired of staring at subtitles
2. Watching live content where you can't pause/rewind constantly
3. Language learners who want bilingual captions to sanity-check meaning
4. Keeping up with live news streams on YouTube when events are unfolding in real time (e.g., breaking international updates like U.S./Iran/Israel-related developments)
Link: https://bit.ly/40HBFUo I'll be in the comments and happy to share implementation details if anyone's curious. https://bit.ly/40HBFUo March 4, 2026 at 02:04AM
Show HN: I built a LLM human rights evaluator for HN (content vs. site behavior) https://bit.ly/4l4c4yi
Show HN: I built a LLM human rights evaluator for HN (content vs. site behavior) My health challenges limit how much I can work. I've come to think of Claude Code as an accommodation engine — not in the medical-paperwork sense, but in the literal one: it gives me the capacity to finish things that a normal work environment doesn't. Observatory was built in eight days because that kind of collaboration became possible for me. (I even used Claude Code to write this post — but am only posting what resonates with me.) Two companion posts: on the recursive methodology ( https://bit.ly/409tFeD... ) and what 806 evaluated stories reveal ( https://bit.ly/4r7k9DW... ). I built Observatory to automatically evaluate Hacker News front-page stories against all 31 provisions of the UN Universal Declaration of Human Rights — starting with HN because its human-curated front page is one of the few feeds where a story's presence signals something about quality, not just virality. It runs every minute: https://bit.ly/4aKNMpG . Claude Haiku 4.5 handles full evaluations; Llama 4 Scout and Llama 3.3 70B on Workers AI run a lighter free-tier pass. The observation that shaped the design: rights violations rarely announce themselves. An article about a company's "privacy-first approach" might appear on a site running twelve trackers. The interesting signal isn't whether an article mentions privacy — it's whether the site's infrastructure matches its words. Each evaluation runs two parallel channels. The editorial channel scores what the content says about rights: which provisions it touches, direction, evidence strength. The structural channel scores what the site infrastructure does: tracking, paywalls, accessibility, authorship disclosure, funding transparency. The divergence — SETL (Structural-Editorial Tension Level) — is often the most revealing number. "Says one thing, does another," quantified. 
Every evaluation separates observable facts from interpretive conclusions (the Fair Witness layer, same concept as fairwitness.bot — https://bit.ly/43DzQKs ). You get a facts-to-inferences ratio and can read exactly what evidence the model cited. If a score looks wrong, follow the chain and tell me where the inference fails. Per our evaluations across 805 stories: only 65% identify their author — one in three HN stories without a named author. 18% disclose conflicts of interest. 44% assume expert knowledge (a structural note on Article 26). Tech coverage runs nearly 10× more retrospective than prospective: past harm documented extensively; prevention discussed rarely. One story illustrates SETL best: "Half of Americans now believe that news organizations deliberately mislead them" (fortune.com, 652 HN points). Editorial: +0.30. Structural: −0.63 (paywall, tracking, no funding disclosure). SETL: 0.84. A story about why people don't trust media, from an outlet whose own infrastructure demonstrates the pattern. The structural channel for free Llama models is noisy — 86% of scores cluster on two integers. The direction I'm exploring: TQ (Transparency Quotient) — binary, countable indicators that don't need LLM interpretation (author named? sources cited? funding disclosed?). Code is open source: https://bit.ly/3MJJANP — the .claude/ directory has the cognitive architecture behind the build. Find a story whose score looks wrong, open the detail page, follow the evidence chain. The most useful feedback: where the chain reaches a defensible conclusion from defensible evidence and still gets the normative call wrong. That's the failure mode I haven't solved. My background is math and psychology (undergrad), a decade in software — enough to build this, not enough to be confident the methodology is sound. Expertise in psychometrics, NLP, or human rights scholarship especially welcome. Methodology, prompts, and a 15-story calibration set are on the About page. Thanks! 
https://bit.ly/4aKNMpG March 4, 2026 at 01:26AM
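The TQ (Transparency Quotient) direction mentioned above lends itself to a very small sketch: binary, countable indicators with no LLM in the loop. The indicator names below are assumptions drawn from the post's examples (author named? sources cited? funding disclosed?), not Observatory's actual schema:

```python
# Hypothetical binary transparency indicators; each is directly checkable
# from page metadata, so no model interpretation (and no score clustering).
INDICATORS = ("author_named", "sources_cited", "funding_disclosed",
              "conflicts_disclosed", "date_published")

def transparency_quotient(page: dict) -> float:
    """Fraction of binary transparency indicators that hold for a page."""
    return sum(bool(page.get(k)) for k in INDICATORS) / len(INDICATORS)
```

Because each indicator is a deterministic boolean, TQ would sidestep the noise the post reports in the free-model structural channel, at the cost of measuring only what can be counted.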
Show HN: Interactive WordNet Visualizer-Explore Semantic Relations as a Graph https://bit.ly/4l9DCCr
Show HN: Interactive WordNet Visualizer-Explore Semantic Relations as a Graph https://bit.ly/4l7NYTv March 3, 2026 at 10:17PM
Monday, 2 March 2026
Show HN: An Auditable Decision Engine for AI Systems https://bit.ly/4r0ct6d
Show HN: An Auditable Decision Engine for AI Systems https://bit.ly/4rKkhKt March 3, 2026 at 03:03AM
Show HN: PHP 8 disable_functions bypass PoC https://bit.ly/4coTizr
Show HN: PHP 8 disable_functions bypass PoC https://bit.ly/4ckhq6k March 3, 2026 at 02:12AM
Show HN: We filed 99 patents for deterministic AI governance (Prior Art vs. RLHF) https://bit.ly/3OHLRtr
Show HN: We filed 99 patents for deterministic AI governance (Prior Art vs. RLHF) For the last few months, we've been working on a fundamental architectural shift in how autonomous agents are governed. The current industry standard relies almost entirely on probabilistic alignment (RLHF, system prompts, constitutional training). It works until it's jailbroken or the context window overflows. A statistical disposition is not a security boundary. We've built an alternative: Deterministic Policy Gates. In our architecture, the LLM is completely stripped of execution power. It can only generate an "intent payload." That payload is passed to a process-isolated, deterministic execution environment where it is evaluated against a cryptographically hashed constraint matrix (the constitution). If it violates the matrix, it is blocked. Every decision is then logged to a Merkle-tree substrate (GitTruth) for an immutable audit trail. We filed 99 provisional patents on this architecture starting January 10, 2026. Crucially, we embedded strict humanitarian use restrictions directly into the patent claims themselves (The Peace Machine Mandate) so the IP cannot legally be used for autonomous weapons, mass surveillance, or exploitation. I wrote a full breakdown of the architecture, why probabilistic safety is a dead end, and the timeline of how we filed this before the industry published their frameworks: Read the full manifesto here: https://bit.ly/4l5y3Vx... The full patent registry is public here: https://bit.ly/4l1JNbI I'm the founder and solo inventor. Happy to answer any questions about the deterministic architecture, the Merkle-tree state persistence, or the IP strategy of embedding ethics directly into patent claims. March 2, 2026 at 11:56PM
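The "intent payload, checked against a hashed constraint matrix" flow described above can be sketched generically. Everything here (constraint fields, verdict names, the gate function) is an invented illustration of the pattern, not the filed architecture:

```python
import hashlib
import json

# A toy "constitution": the constraint set is hashed once at load time so
# the gate can detect tampering before evaluating anything against it.
CONSTRAINTS = {"blocked_actions": ["delete_user_data", "send_funds"]}
CONSTRAINTS_HASH = hashlib.sha256(
    json.dumps(CONSTRAINTS, sort_keys=True).encode()).hexdigest()

def gate(intent: dict, constraints=CONSTRAINTS, expected_hash=CONSTRAINTS_HASH):
    """Deterministic policy gate: the model's intent never executes directly."""
    h = hashlib.sha256(json.dumps(constraints, sort_keys=True).encode()).hexdigest()
    if h != expected_hash:
        return "halt"   # refuse to evaluate against a tampered constraint matrix
    if intent.get("action") in constraints["blocked_actions"]:
        return "block"
    return "allow"
```

The property being claimed in the post is exactly what this shape buys: the decision is a pure function of intent plus constraints, so no prompt injection inside the model can change the verdict, only the (hash-verified) constraint set can.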
Show HN: Open-Source Postman for MCP https://bit.ly/4l4lxG3
Show HN: Open-Source Postman for MCP https://bit.ly/40EKzC1 March 3, 2026 at 12:40AM
Sunday, 1 March 2026
Show HN: Vibe Code your 3D Models https://bit.ly/4aYHwto
Show HN: Vibe Code your 3D Models Hi HN, I'm the creator of SynapsCAD, an open-source desktop application I've been building that combines an OpenSCAD code editor, a real-time 3D viewport, and an AI assistant. You can write OpenSCAD code, compile it directly to a 3D mesh, and use an LLM (OpenAI, Claude, Gemini, ...) to modify the code through natural language. Demo video: https://www.youtube.com/watch?v=cN8a5UozS5Q A bit about the architecture:
- It's built entirely in Rust.
- The UI and 3D viewport are powered by Bevy 0.15 and egui.
- It uses a pure-Rust compilation pipeline (openscad-rs for parsing and csgrs for constructive solid geometry rendering) so there are no external tools or WASM required.
- Async AI network calls are handled by Tokio in the background to keep the Bevy render loop smooth.
Disclaimer: This is a very early prototype. The OpenSCAD parser/compiler doesn't support everything perfectly yet, so you will definitely hit some rough edges if you throw complex scripts at it. I mostly just want to get this into the hands of people who tinker with CAD or Rust. I'd be super happy for any feedback, architectural critiques, or bug reports—especially if you can drop specific OpenSCAD snippets that break the compiler in the GitHub issues! GitHub (Downloads for Win/Mac/Linux): https://bit.ly/3MDl1Cd Happy to answer any questions about the tech stack or the roadmap! https://bit.ly/3MDl1Cd February 27, 2026 at 06:27PM
Show HN: Logira – eBPF runtime auditing for AI agent runs https://bit.ly/3MP5orl
Show HN: Logira – eBPF runtime auditing for AI agent runs I started using Claude Code (claude --dangerously-skip-permissions) and Codex (codex --yolo) and realized I had no reliable way to know what they actually did. The agent's own output tells you a story, but it's the agent's story. logira records exec, file, and network events at the OS level via eBPF, scoped per run. Events are saved locally in JSONL and SQLite. It ships with default detection rules for credential access, persistence changes, suspicious exec patterns, and more. Observe-only – it never blocks. https://bit.ly/4sgvLW1 https://bit.ly/4sgvLW1 March 2, 2026 at 12:25AM
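An observe-only rule pass over a per-run JSONL event log, like the one described above, might look like the sketch below. The field names (type, path, argv) and rule logic are assumptions for illustration, not logira's actual schema or rule set:

```python
import json

# Paths whose reads commonly indicate credential access by an agent run.
SENSITIVE_SUFFIXES = (".ssh/id_rsa", ".aws/credentials", ".netrc")

def findings(jsonl_text: str):
    """Scan a JSONL event log and return (rule, detail) hits; never blocks."""
    hits = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        e = json.loads(line)
        if e.get("type") == "file_open" and e.get("path", "").endswith(SENSITIVE_SUFFIXES):
            hits.append(("credential_access", e["path"]))
        if e.get("type") == "exec" and e.get("argv", [""])[0] in ("nc", "ncat"):
            hits.append(("suspicious_exec", " ".join(e["argv"])))
    return hits
```

The key design point mirrors the post: the log comes from the OS (eBPF), not from the agent's self-reported transcript, so a rule pass like this is auditing ground truth rather than the agent's story.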
Saturday, 28 February 2026
Show HN: InstallerStudio – Create MSI Installers Without WiX or InstallShield https://bit.ly/4ukgsOb
Show HN: InstallerStudio – Create MSI Installers Without WiX or InstallShield Hi, I'm Paul — 25 years of enterprise Windows development. I built InstallerStudio after WiX went from free/open source to $6,500/year support and InstallShield hit $2,000+/year. Every tool in this space is either unaffordable or requires writing XML by hand. InstallerStudio is a visual MSI designer built on WinUI 3/.NET 10. No XML, no subscriptions, no external dependencies. Handles files, Windows services, registry, shortcuts, file associations, custom actions, and full installer UI. $159 this month, $199 after. 30-day free trial. It ships its own installer, built with itself. Happy to answer questions about MSI internals. https://bit.ly/3MAMtAv February 28, 2026 at 11:02PM
Friday, 27 February 2026
Show HN: OpenTimelineEngine – Shared local memory for Claude Code and codex https://bit.ly/4s8g1o6
Show HN: OpenTimelineEngine – Shared local memory for Claude Code and codex https://bit.ly/4aHDwP5 February 28, 2026 at 01:00AM
Show HN: Notemac++ – A Notepad++-inspired code editor for macOS and the web https://bit.ly/405WZCO
Show HN: Notemac++ – A Notepad++-inspired code editor for macOS and the web https://bit.ly/4u5ygMQ February 28, 2026 at 12:05AM
Thursday, 26 February 2026
Show HN: Lar-JEPA – A Testbed for Orchestrating Predictive World Models https://bit.ly/3P5wkUf
Show HN: Lar-JEPA – A Testbed for Orchestrating Predictive World Models Hey HN, The current paradigm of agentic frameworks (LangChain, AutoGPT) relies on prompting LLMs and parsing conversational text strings to decide the next action. This works for simple tasks but breaks down for complex reasoning because it treats the agent's mind like a scrolling text document. As research shifts toward Joint Embedding Predictive Architectures (JEPAs) and World Models, we hit an orchestration bottleneck. JEPAs don't output text; they output abstract mathematical tensors representing a predicted environmental state. Traditional text-based frameworks crash if you try to route a NumPy array. We built Lar-JEPA as a conceptual testbed to solve this. It uses the Lár Engine, a deterministic, topological DAG ("PyTorch for Agents"), to act as the execution spine. Key features for researchers:
- Mathematical routing (no prompting): you write deterministic Python RouterNodes that evaluate the latent tensors directly (e.g., if collision_probability > 0.85: return "REPLAN").
- Native tensor logging: we custom-patched our AuditLogger with a TensorSafeEncoder. You can pass massive PyTorch/NumPy tensors natively through the execution graph, and it gracefully serializes them into metadata ({ "__type__": "Tensor", "shape": [1, 768] }) without crashing JSON stringifiers.
- System 1 / System 2 testing: formally measure fast-reflex execution vs. deep-simulation planning.
- Continuous learning: includes a Default Mode Network (DMN) architecture for "Sleep Cycle" memory consolidation.
We've included a standalone simulation where a Lár System 2 Router analyzes a mock JEPA's numerical state prediction, mathematically detects an impending collision, vetoes the action, and replans—all without generating a single word of English text. Repo: https://bit.ly/4b909fj Would love to hear your thoughts on orchestration for non-autoregressive models. https://bit.ly/4b909fj February 27, 2026 at 03:38AM
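The tensor-safe serialization idea (turn a tensor into { "__type__": "Tensor", "shape": [...] } metadata instead of crashing the JSON stringifier) is easy to sketch with a json.JSONEncoder subclass. The FakeTensor stand-in avoids a torch/numpy dependency here; this is an illustration of the technique, not Lár's actual TensorSafeEncoder:

```python
import json

class FakeTensor:
    """Stand-in for a torch/numpy tensor: just enough surface for the sketch."""
    def __init__(self, shape):
        self.shape = shape

class TensorSafeEncoder(json.JSONEncoder):
    """Serialize tensor-like objects as lightweight shape metadata."""
    def default(self, obj):
        # Duck-typing on .shape covers torch.Tensor and numpy.ndarray alike.
        if hasattr(obj, "shape"):
            return {"__type__": "Tensor", "shape": list(obj.shape)}
        return super().default(obj)

log_line = json.dumps({"state": FakeTensor((1, 768))}, cls=TensorSafeEncoder)
```

The encoder only fires for objects json cannot already handle, so ordinary dicts, lists, and scalars in the same log record serialize normally.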
Show HN: I Built Smart Radio That Auto-Skips Talk and Ads by Using ML https://bit.ly/4l6Exnm
Show HN: I Built Smart Radio That Auto-Skips Talk and Ads by Using ML Hi, I built TuneJourney to solve a specific annoyance: radio ads and DJ chatter. The core feature is an in-browser "AI Skip Talk" filter. The Tech: Instead of processing on a server, it uses the Web Audio API to capture the stream locally and runs a lightweight ML classification model directly in your browser. It estimates the music vs. speech probability in near real-time. If enabled, it automatically triggers a "next" command to hop to another station the moment an ad, news segment, or DJ starts talking. Features: - In-browser Inference: Entirely local and privacy-focused; no audio data ever leaves your machine. - WebGL + Point Clustering: Renders 70,000 stations across 11,000 locations smoothly. - Real-time Activity: See other users on the globe and what they are listening to in real-time. - System Integration: Full Media Key support for physical keyboard and system-level Next/Prev buttons. - Customization: Includes a talk sensitivity slider for the ML model so you can tweak the threshold. Check it out: https://bit.ly/3OBjYTQ Let me know what you think! I'm interested in whether this project is worth further investment: building a mobile app, etc. https://bit.ly/3OBjYTQ February 27, 2026 at 01:09AM
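The skip decision on top of a music-vs-speech classifier usually reduces to thresholding plus debouncing. A sketch under assumptions (the real model runs in-browser on Web Audio frames; the 0.7 threshold and 3-frame debounce here are invented, roughly what the "talk sensitivity slider" would control):

```python
def should_skip(speech_probs, threshold=0.7, consecutive=3):
    """Trigger a station hop only after `consecutive` frames above the threshold,
    so one noisy frame doesn't cause a spurious skip mid-song."""
    run = 0
    for p in speech_probs:
        run = run + 1 if p > threshold else 0
        if run >= consecutive:
            return True
    return False

print(should_skip([0.2, 0.9, 0.1, 0.8, 0.85, 0.92]))  # → True (three talk frames in a row)
print(should_skip([0.2, 0.9, 0.1, 0.8, 0.3]))         # → False (isolated spikes only)
```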
Wednesday, 25 February 2026
Show HN: OrangeWalrus, an aggregator for trivia nights (and other events) in SF https://bit.ly/4tT1vCg
Show HN: OrangeWalrus, an aggregator for trivia nights (and other events) in SF Two problems I encountered personally: 1) Some buddies and I went to a trivia night late last year, only to arrive to find it cancelled (with signs still on the walls saying it happened every Tuesday, etc) 2) Sourcing ideas for fun things to do in the city on a given night, in a given neighborhood. Some sites help a ton (e.g. funcheapsf), but often don't have everything I'd want to see, so we decided to build that out a bit. Anyway, I built this originally to solve #1, then a buddy and I expanded it to also start addressing #2 (still in progress, but we've added more event types already). Thanks for checking it out! We're very open to thoughts / feedback. https://bit.ly/3MGnqvP February 26, 2026 at 12:17AM
Show HN: Tesseract – 3D architecture editor with MCP for AI-assisted design https://bit.ly/46ZURjL
Show HN: Tesseract – 3D architecture editor with MCP for AI-assisted design Hey HN. I'm David, solo dev, 20+ years shipping production systems. I built Tesseract because AI can analyze your codebase, but the results stay buried in text. Architecture is fundamentally visual — you need to see it, navigate it, drill into it. So I built a 3D canvas where AI can show you what it finds. Tesseract is a desktop app today (cloud version coming) with a built-in MCP server. You connect it to Claude Code with one command: claude mcp add tesseract -s user -t http http://localhost:7440/mcp I use it for onboarding (understand a codebase without reading code), mapping (point AI at code, get a 3D diagram), exploring (navigate layers and drill into subsystems), debugging (trace data flows with animated color-coded paths), and generating (design in 3D, generate code back). There's also a Claude Code plugin (tesseract-skills) with slash commands: /arch-codemap maps an entire codebase, /arch-flow traces data paths, /arch-detail drills into subsystems. Works with Claude Code, Cursor, Copilot, Windsurf — any MCP client. Free to use. Sign up to unlock all features for 3 months. It's early but stable. I've been dogfooding it on real projects for weeks and it's ready for other people to try. Demo video (1min47): https://youtu.be/YqqtRv17a3M Docs: https://bit.ly/3OAdPXY Plugin: https://bit.ly/4rCl6VF Discord: https://bit.ly/46qBRL6 Happy to discuss the MCP integration, the design choices, or anything else. Would love feedback. https://bit.ly/4rNDB9G February 26, 2026 at 12:05AM
Tuesday, 24 February 2026
Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code https://bit.ly/4sePGF4
Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code Every MCP tool call dumps raw data into Claude Code's 200K context window. A Playwright snapshot costs 56 KB, 20 GitHub issues cost 59 KB. After 30 minutes, 40% of your context is gone. I built an MCP server that sits between Claude Code and these outputs. It processes them in sandboxes and only returns summaries. 315 KB becomes 5.4 KB. It supports 10 language runtimes, SQLite FTS5 with BM25 ranking for search, and batch execution. Session time before slowdown goes from ~30 min to ~3 hours. MIT licensed, single command install: /plugin marketplace add mksglu/claude-context-mode /plugin install context-mode@claude-context-mode Benchmarks and source: https://bit.ly/3MZWN56 Would love feedback from anyone hitting context limits in Claude Code. https://bit.ly/3MZWN56 February 25, 2026 at 07:23AM
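The search layer mentioned (SQLite FTS5 with BM25 ranking) can be reproduced with the Python stdlib, assuming your interpreter's bundled SQLite was compiled with FTS5 (most official builds are). Table name and rows here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE chunks USING fts5(content)")
conn.executemany("INSERT INTO chunks VALUES (?)", [
    ("playwright snapshot of the login page",),
    ("github issues mention the 200K context window",),
    ("batch execution benchmark results",),
])

# bm25() returns lower (more negative) scores for better matches,
# so an ascending ORDER BY puts the best hit first.
best = conn.execute(
    "SELECT content FROM chunks WHERE chunks MATCH ? ORDER BY bm25(chunks)",
    ("context",),
).fetchone()[0]
print(best)  # → github issues mention the 200K context window
```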
Show HN: A Visual Editor for Karabiner https://bit.ly/4sambEf
Show HN: A Visual Editor for Karabiner https://bit.ly/4s30h5E February 25, 2026 at 04:39AM
Show HN: StreamHouse – S3-native Kafka alternative written in Rust https://bit.ly/4kRblAq
Show HN: StreamHouse – S3-native Kafka alternative written in Rust Hey HN, I built StreamHouse, an open-source streaming platform that replaces Kafka's broker-managed storage with direct S3 writes. The goal: same semantics, fraction of the cost. How it works: Producers batch and compress records, a stateless server manages partition routing and metadata (SQLite for dev, PostgreSQL for prod), and segments land directly in S3. Consumers read from S3 with a local segment cache. No broker disks to manage, no replication factor to tune — S3 gives you 11 nines of durability out of the box. What's there today: - Producer API with batching, LZ4 compression, and offset tracking (62K records/sec) - Consumer API with consumer groups, auto-commit, and multi-partition fanout (30K+ records/sec) - Kafka-compatible protocol (works with existing Kafka clients) - REST API, gRPC API, CLI, and a web UI - Docker Compose setup for trying it locally in 5 minutes The cost model is what motivated this. Kafka's storage costs scale with replication factor × retention × volume. With S3 at $0.023/GB/month, storing a TB of events costs ~$23/month instead of hundreds on broker EBS volumes. Written in Rust, ~50K lines across 15 crates. Apache 2.0 licensed. GitHub: https://bit.ly/4tVpwsp Happy to answer questions about the architecture, tradeoffs, or what I learned building this. https://bit.ly/4tVpwsp February 25, 2026 at 03:50AM
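The cost claim works out as simple arithmetic. Illustrative only: the $0.023/GB-month S3 price is from the post; the ~$0.08/GB-month gp3 EBS price and replication factor of 3 are my assumptions for the broker-disk baseline:

```python
GB = 1000  # 1 TB of retained events

s3_monthly = GB * 0.023        # single logical copy; S3 handles durability internally
ebs_monthly = GB * 0.08 * 3    # broker disks x replication factor 3

print(f"S3:  ${s3_monthly:.0f}/month")   # → S3:  $23/month
print(f"EBS: ${ebs_monthly:.0f}/month")  # → EBS: $240/month
```

The gap widens with retention, since broker storage must be provisioned for peak while S3 is pay-per-byte.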
Monday, 23 February 2026
Show HN: Enseal – Stop pasting secrets into Slack .env sharing from the terminal https://bit.ly/3MuiGcG
Show HN: Enseal – Stop pasting secrets into Slack .env sharing from the terminal We've all done it — "hey can you DM me the staging .env?" Secrets end up in Slack history, email threads, shared notes — all searchable, all persistent. The secure path (1Password, GPG, etc.) always had more friction than the insecure one, so people took the shortcut. enseal makes the secure path faster than the insecure one: # sender $ enseal share .env Share code: 7-guitarist-revenge Expires: 5 minutes or first receive # recipient $ enseal receive 7-guitarist-revenge ok: 14 secrets written to .env Zero setup, no accounts, no keys needed for basic use. Channels are single-use and time-limited. The relay never sees plaintext (age encryption + SPAKE2 key exchange). For teams that want more: identity mode with public key encryption, process injection (secrets never touch disk), schema validation, at-rest encryption for git, and a self-hostable relay. Written in Rust. MIT licensed. Available via cargo install, prebuilt binaries, or Docker. Looking for feedback on the UX and security model especially. What would make you actually reach for this instead of the Slack DM? Detailed documentation here: https://bit.ly/4ayuSlR https://bit.ly/4qS4t7m February 24, 2026 at 03:15AM
Show HN: Steerling-8B, a language model that can explain any token it generates https://bit.ly/46iquF6
Show HN: Steerling-8B, a language model that can explain any token it generates https://bit.ly/46oj3fy February 24, 2026 at 01:38AM
Sunday, 22 February 2026
Show HN: Rendering 18,000 videos in real-time with Python https://bit.ly/3OUFrHg
Show HN: Rendering 18,000 videos in real-time with Python https://bit.ly/4qR82e2 February 22, 2026 at 04:46PM
Saturday, 21 February 2026
Show HN: Dq – pipe-based CLI for querying CSV, JSON, Avro, and Parquet files https://bit.ly/3OBlHID
Show HN: Dq – pipe-based CLI for querying CSV, JSON, Avro, and Parquet files I'm a data engineer and exploring a data file from the terminal has always felt more painful than it should be for me. My usual flow involved some combination of avro-tools, opening the file in Excel or sheets, writing a quick Python script, using DataFusion CLI, or loading it into a database just to run one query. It works, but it's friction -- and it adds up when you're just trying to understand what's in a file or track down a bug in a pipeline. A while ago I had this idea of a simple pipe-based CLI tool, like jq but for tabular data, that works across all these formats with a consistent syntax. I refined the idea over time into something I wanted to be genuinely simple and useful -- not a full query engine, just a sharp tool for exploration and debugging. I never got around to building it though. Last week, with AI tools actually being capable now, I finally did :) I deliberately avoided SQL. For quick terminal work, the pipe-based composable style feels much more natural: you build up the query step by step, left to right, and each piece is obvious in isolation. SQL asks you to hold the whole structure in your head before you start typing. `dq 'sales.parquet | filter { amount > 1000 } | group category | reduce total = sum(amount), n = count() | remove grouped | sortd total | head 10'` How it works technically: dq has a hand-written lexer and recursive descent parser that turns the query string into an AST, which is then evaluated against the file lazily where possible. Each operator (filter, select, group, reduce, etc.) is a pure transformation -- it takes a table in and returns a table out. This is what makes the pipe model work cleanly: operators are fully orthogonal and composable in any order. It's written in Go -- single self-contained binary, 11MB, no runtime dependencies, installable via Homebrew. I'd love feedback, especially from anyone who's felt the same friction. 
https://bit.ly/409lnnf February 21, 2026 at 11:31PM
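The pure table-in/table-out operator model described above can be sketched in a few lines (Python here rather than dq's Go, with a toy in-memory table; this mirrors the sample query, not dq's implementation):

```python
from functools import reduce as fold

# A "table" is a list of dicts; each operator takes one in and returns one out.
def filter_op(pred):  return lambda t: [r for r in t if pred(r)]
def sortd(key):       return lambda t: sorted(t, key=lambda r: r[key], reverse=True)
def head(n):          return lambda t: t[:n]

def group_reduce(key, **aggs):
    """Group rows by `key`, then compute each named aggregate over the group."""
    def op(t):
        groups = {}
        for r in t:
            groups.setdefault(r[key], []).append(r)
        return [{key: k, **{name: fn(rows) for name, fn in aggs.items()}}
                for k, rows in groups.items()]
    return op

def pipe(table, *ops):
    """Apply operators left to right, exactly like the shell-style query."""
    return fold(lambda t, op: op(t), ops, table)

sales = [
    {"category": "a", "amount": 1500},
    {"category": "b", "amount": 900},
    {"category": "a", "amount": 2000},
    {"category": "b", "amount": 1200},
]

# Roughly: filter { amount > 1000 } | group category | reduce total=sum(amount), n=count() | sortd total | head 10
result = pipe(
    sales,
    filter_op(lambda r: r["amount"] > 1000),
    group_reduce("category",
                 total=lambda rows: sum(r["amount"] for r in rows),
                 n=len),
    sortd("total"),
    head(10),
)
print(result)
# → [{'category': 'a', 'total': 3500, 'n': 2}, {'category': 'b', 'total': 1200, 'n': 1}]
```

Because every operator is a pure `table -> table` function, any ordering composes; that is the orthogonality property the post credits for the clean pipe model.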
Show HN: Ktop – a themed terminal monitor for GPU, CPU, RAM, temps and OOM kills https://bit.ly/4cI6Wh7
Show HN: Ktop – a themed terminal monitor for GPU, CPU, RAM, temps and OOM kills I built a terminal system monitor that fills a gap I kept hitting when running local LLMs: GPU usage and memory (for both NVIDIA and AMD) alongside CPU usage and memory, temps, upload, download and OOM kill tracking. All in one view with 50 colour themes. Uses less CPU than glances (in my testing). One line install. https://bit.ly/4qSg1HU February 22, 2026 at 12:45AM
Show HN: AI writes code – humans fix it https://bit.ly/40nlE5P
Show HN: AI writes code – humans fix it https://bit.ly/3ZPmvvX February 21, 2026 at 11:57PM
Friday, 20 February 2026
Show HN: oForum | Self-hostable links/news site https://bit.ly/4c6xzME
Show HN: oForum | Self-hostable links/news site https://bit.ly/3MHNc2K February 20, 2026 at 11:19PM
Show HN: How Amazon Pricing Algorithms Work https://bit.ly/4rrSZZb
Show HN: How Amazon Pricing Algorithms Work Amazon is one of the largest online retailers in the world, offering millions of products across countless categories. Because of this, prices on Amazon can change frequently, which sometimes makes it hard to know if a deal is genuine. Understanding how Amazon pricing works can help shoppers make smarter buying decisions. https://bit.ly/4rqhCpd February 20, 2026 at 11:59PM
Thursday, 19 February 2026
Show HN: 17MB model beats human experts at pronunciation scoring https://bit.ly/4cApfor
Show HN: 17MB model beats human experts at pronunciation scoring https://bit.ly/4tLVsPU February 20, 2026 at 04:41AM
Show HN: I indexed the academic papers buried in the DOJ Epstein Files https://bit.ly/46g3g2g
Show HN: I indexed the academic papers buried in the DOJ Epstein Files The DOJ released ~3.5M pages of Epstein documents across 12 datasets. Buried in them are 207 academic papers and 14 books that nobody was really talking about. From what I understand these papers aren't usually freely accessible, but since they are public documents, now they are. I don't know, thought it was interesting to see what this dude was reading. You can check it out at jeescholar.com Pipeline: 1. Downloaded all 12 DOJ datasets + House Oversight Committee release 2. Heuristic pre-filter (abstract detection, DOI regex, citation block patterns, affiliation strings) to cut noise 3. LLM classifier to confirm and extract metadata 4. CrossRef and Semantic Scholar APIs for DOI matching, citation counts, abstracts 5. 87 of 207 papers got DOI matches; the rest are identified but not in major indexes Stack: FastAPI + SQLite (FTS5 for full-text search) + Cloudflare R2 for PDFs + nginx/Docker on Hetzner. The fields represented are genuinely interesting: there's a cluster of child abuse/grooming research, but also quantum gravity, AGI safety, econophysics, and regenerative medicine. Each paper links back to its original government PDF and Bates number. For sure not an exhaustive list. Would be happy to add more if anyone finds them. https://bit.ly/46g3giM February 20, 2026 at 04:07AM
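The DOI-regex step of the pre-filter can be as small as this: a sketch using the pattern Crossref recommends for modern DOIs (the sample citation text is invented, and the project's actual regex may differ):

```python
import re

# Crossref's recommended pattern for modern (post-2000) DOIs
DOI_RE = re.compile(r"10\.\d{4,9}/[-._;()/:a-zA-Z0-9]+")

page = "See Smith et al., J. Weird Phys. 2014, doi:10.1000/xyz123 for details."
print(DOI_RE.findall(page))  # → ['10.1000/xyz123']
```

Pages with a hit go on to the LLM classifier; pages without one still need the other heuristics, since many scanned papers lose their DOI in OCR.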
Show HN: A small, simple music theory library in C99 https://bit.ly/4c569XA
Show HN: A small, simple music theory library in C99 https://bit.ly/4bYYKZH February 19, 2026 at 11:54PM
Wednesday, 18 February 2026
Show HN: Potatometer – Check how visible your website is to AI search (GEO) https://bit.ly/4tGBki4
Show HN: Potatometer – Check how visible your website is to AI search (GEO) Most SEO tools only check for Google. But a growing chunk of search is now happening inside ChatGPT, Perplexity, and other AI engines, and the signals they use to surface content are different. Potatometer runs multiple checks across both traditional SEO and GEO (Generative Engine Optimization) factors and gives you a score with specific recommendations. Free, no login needed. Curious if others have been thinking about this problem and what signals you think matter most for AI visibility. https://bit.ly/3MtSoXX February 19, 2026 at 07:41AM
Show HN: I built a fuse box for microservices https://bit.ly/3MMHFrH
Show HN: I built a fuse box for microservices Hey HN! I'm Rodrigo, I run distributed systems across a few countries. I built Openfuse because of something that kept bugging me about how we all do circuit breakers. If you're running 20 instances of a service and Stripe starts returning 500s, each instance discovers that independently. Instance 1 trips its breaker after 5 failures. Instance 14 just got recycled and hasn't seen any yet. Instance 7 is in half-open, probing a service you already know is dead. For some window of time, part of your fleet is protecting itself and part of it is still hammering a dead dependency and timing out, and all you can do is watch. Libraries can't fix this. Opossum, Resilience4j, Polly are great at the pattern, but they make per-instance decisions with per-instance state. Your circuit breakers don't talk to each other. Openfuse is a centralized control plane. It aggregates failure metrics from every instance in your fleet and makes the trip decision based on the full picture. When the breaker opens, every instance knows at the same time. It's a few lines of code: const result = await openfuse.breaker('stripe').protect( () => chargeCustomer(payload) ); The SDK is open source, anyone can see exactly what runs inside their services. The other thing I couldn't let go of: when you get paged at 3am, you shouldn't have to find logs across 15 services to figure out what's broken. Openfuse gives you one dashboard showing every breaker state across your fleet: what's healthy, what's degraded, what tripped and when. And, you shouldn't need a deploy to act. You can open a breaker from the dashboard and every instance stops calling that dependency immediately. Planned maintenance window at 3am? Open beforehand. Fix confirmed? Close it instantly. Thresholds need adjusting? Change them in the dashboard, takes effect across your fleet in seconds. No PRs, no CI, no config files. 
It has a decent free tier for trying it out, then $99/mo for most teams, $399/mo with higher throughput and some enterprise features. Solo founder, early stage, being upfront. Would love to hear from people who've fought cascading failures in production. What am I missing? https://bit.ly/4cHN3qq February 18, 2026 at 03:04PM
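For reference, the per-instance pattern the post contrasts against is a small state machine (closed → open → half-open). A minimal sketch, not Openfuse's code, and with invented thresholds; the centralized version moves exactly this state off the instance so all replicas see the same answer:

```python
import time

class Breaker:
    """Per-instance circuit breaker: trips open after max_failures,
    then allows a probe (half-open) once reset_after seconds pass."""
    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures, self.reset_after = max_failures, reset_after
        self.failures, self.opened_at = 0, None

    def state(self):
        if self.opened_at is None:
            return "closed"
        if time.monotonic() - self.opened_at >= self.reset_after:
            return "half-open"  # let one probe through to test recovery
        return "open"

    def record(self, success):
        if success:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()

b = Breaker()
for _ in range(5):
    b.record(success=False)
print(b.state())  # → open
```

The fleet problem in the post is that `failures` and `opened_at` live separately in every process; a control plane aggregates the counts and broadcasts the trip instead.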
Show HN: Codereport – track TODOs, refactors, and bugs in your repo with a CLI https://bit.ly/4kKGzt8
Show HN: Codereport – track TODOs, refactors, and bugs in your repo with a CLI I got tired of TODOs, temporary hacks, and refactors that never get addressed. In most repos I work on: - TODOs are scattered across files/apps/messages - “Critical” fixes don’t actually block people from collecting debt - PR comments or tickets aren’t actionable enough So I built codereport, a CLI that stores structured follow-ups in the repo itself (.codereports/). Each report tracks: - file + line range (src/foo.rs:42-88) - tag (todo, refactor, buggy, critical) - severity (you can configure it to be blocking in CI) - optional expiration date - owner (CODEOWNERS → git blame fallback) You can list, resolve, or delete reports, generate a minimal HTML dashboard with heatmaps and KPIs, and run codereport check in CI to fail merges if anything blocking or expired is still open. It’s repo-first, and doesn’t rely on any external services. I’m curious: Would a tool like this fit in your workflow? Is storing reports in YAML in the repo reasonable? Would CI enforcement feel useful or annoying? CLI: https://bit.ly/3OgcoxQ + codereport.pulko-app.com February 19, 2026 at 12:23AM
Tuesday, 17 February 2026
Show HN: VisibleInAI – Check if ChatGPT recommends your brand https://bit.ly/4aBRkcj
Show HN: VisibleInAI – Check if ChatGPT recommends your brand https://bit.ly/3ZHE3KA February 18, 2026 at 12:10AM
Show HN: I built the Million Dollar Homepage for agents https://bit.ly/4tG5v9b
Show HN: I built the Million Dollar Homepage for agents https://bit.ly/4anXy0J February 17, 2026 at 02:31PM
Monday, 16 February 2026
Show HN: Andrej Karpathy's microgpt.py to C99 microgpt.c – 4,600x faster https://bit.ly/3MzBpn5
Show HN: Andrej Karpathy's microgpt.py to C99 microgpt.c – 4,600x faster Andrej Karpathy showed us the GPT algorithm. I wanted to see the hardware limit. The Punchline: I made it go 4,600x faster in pure C code, no dependencies and using a compiler with SIMD auto-vectorisation!!! Andrej recently released microgpt.py - a brilliant, atomic look at the core of a GPT. As a low-latency developer, I couldn't resist seeing how fast it could go when you get closer to the metal. So just for funzies, I spent a few hours building microgpt-c, a zero-dependency and pure C99 implementation featuring: - 4,600x Faster training vs the Python reference (Tested on MacBook Pro M2 Max). On Windows, it is 2,300x faster. - SIMD Auto-vectorisation for high-speed matrix operations. - INT8 Quantisation (reducing weight storage by ~8x). Training is slightly slower, but the storage reduction is significant. - Zero Dependencies - just pure logic. The amalgamation image below is just for fun (and to show off the density!), but the GitHub repo contains the fully commented, structured code for anyone who wants to play with on-device AI. I have started to build something useful, like a simple C code static analyser - I will do a follow-up post. Everything else is just efficiency... but efficiency is where the magic happens https://bit.ly/4rV63GC February 17, 2026 at 01:06AM
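Symmetric per-tensor INT8 quantisation, the usual scheme behind claims like the one above, fits in a few lines. A sketch only: microgpt-c's exact scheme may differ, and the achievable storage ratio depends on the baseline precision being replaced:

```python
def quantize(weights):
    """Map the max-magnitude weight to 127 and round everything to int8 range."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.5, 0.03, 0.25]
q, s = quantize(w)
err = max(abs(a - b) for a, b in zip(w, dequantize(q, s)))
print(q, round(err, 4))  # rounding error is bounded by ~scale/2 per weight
```

One byte per weight instead of four (float32) or eight (float64), at the cost of quantisation noise during training, which matches the post's "slightly slower training, significant storage reduction" trade-off.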
Show HN: WowAI.pet – Generate cinematic videos from blurry pet photos https://bit.ly/4amvSth
Show HN: WowAI.pet – Generate cinematic videos from blurry pet photos I built WowAI.pet to solve the "uncooperative subject" problem in pet photography. Most pet owners have a gallery full of motion-blurred "failed" shots because pets simply won't sit still. Instead of fighting the shutter speed, I’m using generative AI to treat these blurred images as structural seeds. The tool transforms a single low-quality photo into high-fidelity video (4K, consistent depth-of-field) across various styles—from traditional ink-wash aesthetics to talking avatars. Key Features: Zero-shot generation: No model training or fine-tuning required. Temporal consistency: Maintaining pet features across dynamic motion. Integrated Lip-sync: Automated voice synthesis for "talking" pet videos. I’m looking for feedback on the generation speed and the consistency of the output styles. https://bit.ly/40bFHUL February 17, 2026 at 12:25AM
Sunday, 15 February 2026
Show HN: Purple Computer – Turn an old laptop into a calm first kids computer https://bit.ly/4aTh7y8
Show HN: Purple Computer – Turn an old laptop into a calm first kids computer Hey HN, I'm Tavi. I built this for my 4-year-old. He and I used to "computer code" together in IPython: typing words to see emojis, mixing colors, making sounds. Eventually he wanted his own computer. So I took an old laptop and made him one. That IPython session evolved into Explore mode, a REPL where kids type things and something always happens: "cat * 5" shows five cats, "red + blue" mixes colors like real paint, math gets dot visualizations. Then came Play mode (every key makes a sound and paints a color) and Doodle mode (write and paint). The whole machine boots straight into Purple. No desktop, no browser, no internet. It felt different from "screen time." He'd use it for a while, then walk away on his own. No tantrum, no negotiation. Some technical bits: it's a Python TUI (Textual in Alacritty) running on Ubuntu, so even very old laptops run it well. Keyboard input bypasses the terminal entirely via evdev for true key-down/key-up events, which lets me do sticky shift and double-tap capitals so kids don't have to hold two keys. Color mixing uses spectral reflectance curves so colors actually mix like paint (yellow + blue = green, not gray). Source is on GitHub: https://bit.ly/4aBNOyS https://bit.ly/3ZEutIk February 16, 2026 at 02:39AM
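The paint-like mixing comes from combining reflectance curves multiplicatively rather than averaging RGB values; a common approximation is the geometric mean of the curves. A toy 3-band sketch (the project uses full spectral curves, and these band values are invented):

```python
def mix(r1, r2):
    # Geometric mean of reflectances: the mix only reflects a band if BOTH
    # pigments reflect it, which is what makes yellow + blue = green, not gray.
    return [(a * b) ** 0.5 for a, b in zip(r1, r2)]

BANDS = ["blue", "green", "red"]
yellow = [0.05, 0.90, 0.90]  # absorbs blue, reflects green + red
blue   = [0.90, 0.30, 0.05]  # reflects blue, a little green

mixed = mix(yellow, blue)
dominant = BANDS[mixed.index(max(mixed))]
print(dominant)  # → green
```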
Show HN: HabitStreak – Habit tracker with giftable streak tokens https://bit.ly/4kDQ0u5
Show HN: HabitStreak – Habit tracker with giftable streak tokens https://bit.ly/4ahy5Gg February 16, 2026 at 12:44AM
Show HN: Klaw.sh – Kubernetes for AI agents https://bit.ly/3ZzSjVD
Show HN: Klaw.sh – Kubernetes for AI agents Hi everyone, I run a generative AI infra company, unified API for 600+ models. Our team started deploying AI agents for our marketing and lead gen ops: content, engagement, analytics across multiple X accounts. OpenClaw worked fine for single agents. But at ~14 agents across 6 accounts, the problem shifted from "how do I build agents" to "how do I manage them." Deployment, monitoring, team isolation, figuring out which agent broke what at 3am. Classic orchestration problem. So I built klaw, modeled on Kubernetes: Clusters — isolated environments per org/project Namespaces — team-level isolation (marketing, sales, support) Channels — connect agents to Slack, X, Discord Skills — reusable agent capabilities via a marketplace CLI works like kubectl: klaw create cluster mycompany klaw create namespace marketing klaw deploy agent.yaml I also rewrote from Node.js to Go — agents went from 800MB+ to under 10MB each. Quick usage example: I run a "content cluster" where each X account is its own namespace. Agent misbehaving on one account can't affect others. Adding a new account is klaw create namespace [account] + deploy the same config. 30 seconds. The key differentiator vs frameworks like CrewAI or LangGraph: those define how agents collaborate on tasks. klaw operates one layer above — managing fleets of agents across teams with isolation and operational tooling. You could run CrewAI agents inside klaw namespaces. Happy to answer questions. https://bit.ly/4rSWgke February 15, 2026 at 06:22PM
Saturday, 14 February 2026
Show HN: Git Navigator – Use Git Without Learning Git https://bit.ly/3ZDZo7t
Show HN: Git Navigator – Use Git Without Learning Git Hey HN, I built a VS Code extension that lets you do Git things without memorizing Git commands. You know what you want to do, move this commit over there, undo that thing you just did, split this big commit into two smaller ones. Git Navigator lets you just... do that. Drag a commit to rebase it. Cherry-pick (copy) it onto another branch. Click to stage specific lines. The visual canvas shows you what's happening, so you're not guessing what `git rebase -i HEAD~3` actually means. The inspiration was Sapling's Interactive Smartlog, which I used heavily at Meta. I wanted that same experience but built specifically for Git. A few feature callouts: - Worktrees — create, switch, and delete linked worktrees from the graph. All actions are worktree-aware so you're always working in the right checkout. - Stacked workflows — first-class stack mode if you're into stacked diffs, but totally optional. - Conflict resolution — block-level choices instead of hunting through `<<<<<<<` markers. Works in VS Code, Cursor, and Antigravity. Just needs a Git repo. Site: https://gitnav.xyz VSCode Marketplace: https://marketplace.visualstudio.com/items?itemName=binhongl... Open VSX: https://open-vsx.org/extension/binhonglee/git-navigator https://gitnav.xyz February 15, 2026 at 03:43AM
Show HN: Twsnmp FK – Lightweight NMS Built with Go, Wails, and Svelte https://bit.ly/4qFdYGX
Show HN: Twsnmp FK – Lightweight NMS Built with Go, Wails, and Svelte Hi HN, developer here. I’ve been developing and maintaining a network management tool called TWSNMP for about 25 years. This new version, "FK" (Fresh Konpaku), is a complete modern rewrite. Why I built this: Most enterprise NMS are heavy, server-based, and complex to set up. I wanted something that runs natively on a desktop, is extremely fast to launch, and provides deep insights like packet analysis and NetFlow without a huge infrastructure. The Tech Stack: - Backend: Go (for high-speed log processing and SNMP polling) - Frontend: Svelte (to keep the UI snappy and lightweight) - Bridge: Wails (to build a cross-platform desktop app without the bulk of Electron) I’m looking for feedback from fellow network admins and developers. What features do you find most essential in a modern, lightweight NMS? GitHub: https://bit.ly/407DjhQ https://bit.ly/407DjhQ February 15, 2026 at 01:33AM
Show HN: An x86 assembly game from 2002, ported to WebAssembly with Claude Code https://bit.ly/4tKiZ3R
Show HN: An x86 assembly game from 2002, ported to WebAssembly with Claude Code https://bit.ly/4ax1DhP February 15, 2026 at 01:41AM
Show HN: Stack Overflow, but for AI agents (questions, answers, logs, context) https://bit.ly/4ajU5jV
Show HN: Stack Overflow, but for AI agents (questions, answers, logs, context) Hi HN — I built ChatOverflow, a Q&A forum for AI coding agents (Stack Overflow style). Agents keep re-learning the same debugging patterns each run (tool/version quirks, setup issues, framework behaviors). ChatOverflow is a shared place where agents post a question (symptom + logs + minimal reproduction + env context) and an answer (steps + why it works), so future agents can search and reuse it. Small test on 57 SWE-bench Lite tasks: letting agents search prior posts reduced average time 18.7 min → 10.5 min (-44%). A big bet here is that karma/upvotes/acceptance can act as a lightweight “verification signal” for solutions that consistently work in practice. Inspired by Moltbook. Feedback wanted on: 1. where would this fit in your agent workflow 2. how would you reduce prompt injection and prevent agents coordinating/brigading to push adversarial or low-quality posts? https://bit.ly/4twVg6V February 15, 2026 at 01:04AM
Friday, 13 February 2026
Show HN: ClipPath – Paste screenshots as file paths in your terminal https://bit.ly/3OqAvd0
Show HN: ClipPath – Paste screenshots as file paths in your terminal https://bit.ly/4qAvgEW February 14, 2026 at 02:08AM
Show HN: Explore ASN Relationships and BGP Route History with Real Internet Data https://bit.ly/4qCro6v
Show HN: Explore ASN Relationships and BGP Route History with Real Internet Data Hi HN, I’ve been working on a side project called ipiphistory.com. It’s a searchable explorer for: – ASN relationships (provider / peer / customer) – BGP route history – IP to ASN mapping over time – AS path visibility – Organization and geolocation data The idea started from my frustration when explaining BGP concepts to junior engineers and students — most tools are fragmented across multiple sources (RouteViews, RIPE RIS, PeeringDB, etc.). This project aggregates and indexes historical routing data to make it easier to: – Understand how ASNs connect – Explore real-world routing behavior – Investigate possible hijacks or path changes – Learn BGP using real data It’s still early and I’d really appreciate feedback from the HN community — especially on usability and features you’d like to see. Happy to answer technical questions about data ingestion and indexing as well. https://bit.ly/4auRK4j February 14, 2026 at 12:12AM
Show HN: Bubble Sort on a Turing Machine https://bit.ly/3MdtR9w
Show HN: Bubble Sort on a Turing Machine Bubble sort is pretty simple in most programming languages ... what about on a Turing Machine? I used all three of Claude 4.6, GLM 5, and GPT 5.2 to get a result, so this exercise was not quite trivial, at least at this time. The resulting machine, bubble_sort_unary.yaml, will take this input: 111011011111110101111101111 and give this output: 101101110111101111101111111 I.e., it's sorting the array [3,2,7,1,5,4]. The machine has 31 states and requires 1424 steps before it comes to a halt. It also introduces two extra symbols onto the tape, 'A' and 'B'. (You could argue that 0 is also an extra symbol because turingmachine.io uses blank, ' ', as well). When I started writing the code the LLM (Claude) balked at using unary numbers and so we implemented bubble_sort.yaml which uses the tape symbols '1', '2', '3', '4', '5', '6', '7'. This machine has fewer states, 25, and requires only 63 steps to perform the sort. So it's easier to watch it work, though it's not as generalized as the other TM. Some comments about how the 31 states of bubble_sort_unary.yaml operate:

| Group | Count | Purpose |
|---|---|---|
| `seek_delim_{clean,dirty}` | 2 | Pass entry: scan right to the next `0` delimiter between adjacent numbers. |
| `cmpR_*`, `cmpL_*`, `cmpL_ret_*`, `cmpL_fwd_*` | 8 | Comparison: alternately mark units in the right (`B`) and left (`A`) numbers to compare their sizes. |
| `chk_excess_*`, `scan_excess_*`, `mark_all_X_*` | 6 | Excess check: right number exhausted — see if unmarked `1`s remain on the left (meaning L > R, swap needed). |
| `swap_*` | 7 | Swap: bubble each `X`-marked excess unit rightward across the `0` delimiter. |
| `restore_*` | 6 | Restore: convert `A`, `B`, `X` marks back to `1`s, then advance to the next pair. |
| `rewind` / `done` | 2 | Rewind to start after a dirty pass, or halt. |

(The above is in the README.md if it doesn't render on HN.) I'm curious if anyone can suggest refinements or further ideas. 
And please send pull requests if you're so inclined. My development path: I started by writing a pretty simple INITIAL_IDEAS.md, which got updated somewhat, then the LLM created a SPECIFICATION.md. For the bubble_sort_unary.yaml TM I had to get the LLMs to build a SPEC_UNARY.md because too much context was confusing them. I made 21 commits throughout the project and worked for about 6 hours (I was able to multi-task, so it wasn't 6 hours of hard effort). I spent about $14 on tokens via Zed and asked some questions via t3.chat ($8/month plan). A final question: What open source license is good for these types of mini-projects? I took the path of least resistance and used MIT, but I observe that turingmachine.io uses BSD 3-Clause. I've heard of "MIT with Commons Clause;" what's the landscape surrounding these kinds of license questions nowadays? https://bit.ly/4kymlCE February 13, 2026 at 10:43PM
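The unary tape encoding makes the machine's expected I/O easy to check against a reference implementation in a few lines of Python (the TM does this in place on the tape, of course; this just verifies the input/output pair from the post):

```python
def sort_unary(tape):
    """Decode `0`-delimited unary runs, sort the lengths, re-encode."""
    nums = sorted(len(run) for run in tape.split("0"))
    return "0".join("1" * n for n in nums)

print(sort_unary("111011011111110101111101111"))
# → 101101110111101111101111111  (i.e. [3,2,7,1,5,4] → [1,2,3,4,5,7])
```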
Thursday, 12 February 2026
Show HN: Yori – Isolating AI Logic into "Semantic Containers" (Docker for Code) https://bit.ly/4kzdMHP
Show HN: Yori – Isolating AI Logic into "Semantic Containers" (Docker for Code) Hi HN, I've been a developer for some time now, and like many of you, I've been frustrated by the "all-or-nothing" problem with AI coding tools. You ask an AI to fix a bug or implement a function, and it rewrites the whole file. It changes your imports, renames your variables, or deletes comments it deems unnecessary. It’s like giving a junior developer (like me) root access to your production server just to change a config file. So, 29 days ago, I started building Yori to solve the trust problem. The Concept: Semantic Containers Yori introduces a syntax that acts like a firewall for AI. You define a $${ ... }$$ block inside a text file. Outside the block (the Host): your manual code, architecture, and structure. The AI cannot touch this. Inside the block (the Container): you write natural-language intent. The AI can only generate code here. Example: myutils.md

```cpp
EXPORT: "myfile.cpp"
// My manual architecture - AI cannot change this
#include "utils.h"

void process_data() {
    // Container: the AI is sandboxed here, but inherits the rest of the file as context
    $${
        Sort the incoming data vector using quicksort.
        Filter out negative numbers.
        Print the result.
    }$$
}
EXPORT: END
```

How it works: Yori is a C++ wrapper that parses these files. Whatever is within the EXPORT block and outside the containers ($${ }$$) is copied as-is. When you run `yori myutils.md -make -series`, it sends the prompts to a local (Ollama) or cloud LLM, generates the code, fills the blocks, and compiles the result using your native toolchain (GCC/Clang/Python). If compilation fails, it feeds the error back to the LLM in a retry loop (self-healing). Why I think this matters: 1. Safety: you stop giving AI "root access" to your files. 2. Intent as Source: the prompt stays in the file. If you want to port your logic from C++ to Rust, you keep the prompts and just change the compile target. 3.
Incremental Builds (to be added soon): named containers allow for caching. If the prompt hasn't changed, you don't pay for an API call. It’s open source (MIT), C++17, and works locally. I’d love feedback on the "Semantic Container" concept. Is this the abstraction layer we've been missing for AI coding? Let me hear your ideas. Also, if you can't run yori.exe, tell me what went wrong and we'll see how to fix it; I opened a GitHub issue for this. I'm also working on documentation for the project (GitHub wiki), so expect that soon. GitHub: https://bit.ly/4qysa4w Thanks! February 13, 2026 at 05:17AM
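To make the container idea concrete, here is a minimal, hypothetical sketch in Python of the parsing step described above; the regex, the `expand` helper, and the stand-in generator are my own inventions for illustration, not Yori's actual implementation (which is C++):

```python
import re

# Everything outside the $${ ... }$$ containers is copied verbatim;
# the text inside each container is a prompt to be replaced by
# generated code. Names here are illustrative only.

CONTAINER = re.compile(r"\$\$\{(.*?)\}\$\$", re.DOTALL)

def expand(source: str, generate) -> str:
    """Replace each container with generate(prompt); keep the host code."""
    return CONTAINER.sub(lambda m: generate(m.group(1).strip()), source)

host = """#include "utils.h"
void process_data() {
  $${ Sort the incoming data vector. }$$
}"""

out = expand(host, lambda prompt: "/* generated for: " + prompt + " */")
assert "$${" not in out                   # container was consumed
assert '#include "utils.h"' in out        # host code survived untouched
```

In the real tool the `generate` callback would be an LLM call, and a compile-and-retry loop would wrap the whole expansion.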
Show HN: MicroGPT in 243 Lines – Demystifying the LLM Black Box https://bit.ly/4aMZhN8
Show HN: MicroGPT in 243 Lines – Demystifying the LLM Black Box The release of microgpt by Andrej Karpathy is a foundational moment for AI transparency. In exactly 243 lines of pure, dependency-free Python, Karpathy has implemented the complete GPT algorithm from scratch. As a PhD scholar investigating AI and Blockchain, I see this as the ultimate tool for moving beyond the "black box" narrative of Large Language Models (LLMs). The Architecture of Simplicity Unlike modern frameworks that hide complexity behind optimized CUDA kernels, microgpt exposes the raw mathematical machinery. The code implements: The Autograd Engine: A custom Value class that handles the recursive chain rule for backpropagation without any external libraries. GPT-2 Primitives: Atomic implementations of RMSNorm, Multi-head Attention, and MLP blocks, following the GPT-2 lineage with modernizations like ReLU. The Adam Optimizer: A pure Python version of the Adam optimizer, proving that the "magic" of training is just well-orchestrated calculus. The Shift to the Edge: Privacy, Latency, and Power For my doctoral research at Woxsen University, this codebase serves as a blueprint for the future of Edge AI. As we move away from centralized, massive server farms, the ability to run "atomic" LLMs directly on hardware is becoming a strategic necessity. Karpathy's implementation provides empirical clarity on how we can incorporate on-device MicroGPTs to solve three critical industry challenges: Better Latency: By eliminating the round-trip to the cloud, on-device models enable real-time inference. Understanding these 243 lines allows researchers to optimize the "atomic" core specifically for edge hardware constraints. Data Protection & Privacy: In a world where data is the new currency, processing information locally on the user's device ensures that sensitive inputs never leave the personal ecosystem, fundamentally aligning with modern data sovereignty standards. 
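For readers who want to see what such an autograd engine boils down to, here is a generic scalar-autograd sketch in the spirit of the `Value` class described above; this is my own simplified version, not Karpathy's exact code:

```python
# A minimal scalar autograd node: each arithmetic op records how to
# propagate gradients back to its inputs, and backward() applies the
# chain rule over a topological ordering of the graph.

class Value:
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then run the chain rule in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._prev:
                    build(c)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

x = Value(3.0)
y = x * x + x          # d/dx (x^2 + x) = 2x + 1 = 7 at x = 3
y.backward()
assert y.data == 12.0 and x.grad == 7.0
```

Everything else in a GPT (attention, MLP, Adam) is built from exactly this kind of recorded-gradient arithmetic, just over many more scalars.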
Mastering the Primitives: For Technical Product Managers, this project proves that "intelligence" doesn't require a dependency-heavy stack. We can now envision lightweight, specialized agents that are fast, private, and highly efficient. Karpathy’s work reminds us that to build the next generation of private, edge-native AI products, we must first master the fundamentals that fit on a single screen of code. The future is moving toward decentralized, on-device intelligence built on these very primitives. Link: https://bit.ly/3ODXJfM February 13, 2026 at 03:38AM
Show HN: WebExplorer – a tool for preview file in browser https://bit.ly/3ZzZk8N
Show HN: WebExplorer – a tool for preview file in browser https://bit.ly/3ODVglw February 13, 2026 at 03:10AM
Wednesday, 11 February 2026
Show HN: Double blind entropy using Drand for verifiably fair randomness https://bit.ly/461dPpO
Show HN: Double blind entropy using Drand for verifiably fair randomness The only way to get a trustless random value is to have it distributed and time-locked three ways: player, server, and future entropy. In the demo above, the moment you commit (Roll Dice), a commitment with the hash of a player secret is sent to the server; the server accepts it and sends back the hash of its own secret plus the "future" drand round number at which the randomness will resolve. The look-ahead used in the demo is 10 seconds. When the reveal happens (after drand's particular round), all the secrets are revealed and the random number is generated from "player-seed:server-seed:drand-signature". All the verification is pure math, so it is truly trustless: 1. The player seed must match the player hash committed. 2. The server seed must match the server hash committed. 3. The drand signature is not publicly available at the time of commit and becomes available at the time of reveal (time-locked). 4. The random number generated is deterministic after the event, and unknown and unpredictable before it. 5. No party can influence the final outcome; in particular, no one gets a "last-look" advantage. I think this should be used in all games, online lotteries/gambling, and other systems that want to be fair by design, not by trust. https://bit.ly/4rP3SEv February 12, 2026 at 03:10AM
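A minimal sketch of the commit/reveal arithmetic in Python; the drand signature is stubbed with a placeholder byte string, whereas in the real protocol it would be the BLS signature of the agreed future round, fetched from the drand network:

```python
import hashlib

def commit(seed: bytes) -> str:
    """Publish a hash commitment without revealing the seed."""
    return hashlib.sha256(seed).hexdigest()

player_seed, server_seed = b"player-secret", b"server-secret"
player_hash, server_hash = commit(player_seed), commit(server_seed)

# ... time passes until the agreed drand round resolves ...
drand_signature = b"<round-signature>"   # placeholder, not a real signature

# Reveal: anyone can verify both commitments, then derive the number.
assert commit(player_seed) == player_hash
assert commit(server_seed) == server_hash
material = player_seed + b":" + server_seed + b":" + drand_signature
roll = int.from_bytes(hashlib.sha256(material).digest(), "big") % 6 + 1
assert 1 <= roll <= 6
```

Because the hash of `material` is deterministic, any observer can recompute the roll after the reveal, and no single party could have predicted or biased it beforehand.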
Show HN: CodeRLM – Tree-sitter-backed code indexing for LLM agents https://bit.ly/3Mt9THH
Show HN: CodeRLM – Tree-sitter-backed code indexing for LLM agents I've been building a tool that changes how LLM coding agents explore codebases, and I wanted to share it along with some early observations. Typically, Claude Code globs directories, greps for patterns, and reads files with minimal guidance. It works much the way you'd learn to navigate a city by walking every street: you'd eventually build a mental map, but Claude never does, at least not one that persists across different contexts. The Recursive Language Models paper from Zhang, Kraska, and Khattab at MIT CSAIL introduced a cleaner framing: instead of cramming everything into context, the model gets a searchable environment. The model can then query just for what it needs and drill deeper where necessary. coderlm is my implementation of that idea for codebases. A Rust server indexes a project with tree-sitter, builds a symbol table with cross-references, and exposes an API. The agent queries for structure, symbols, implementations, callers, and grep results, getting back exactly the code it needs instead of scanning for it. The agent workflow looks like: 1. `init` — register the project, get the top-level structure 2. `structure` — drill into specific directories 3. `search` — find symbols by name across the codebase 4. `impl` — retrieve the exact source of a function or class 5. `callers` — find everything that calls a given symbol 6. `grep` — fall back to text search when you need it This replaces the glob/grep/read cycle with index-backed lookups. The server currently supports Rust, Python, TypeScript, JavaScript, and Go for symbol parsing, though all file types show up in the tree and are searchable via grep. It ships as a Claude Code plugin with hooks that guide the agent to use indexed lookups instead of native file tools, plus a Python CLI wrapper with zero dependencies.
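To make the lookup pattern concrete, here is a toy in-memory version of the kind of index the workflow above queries; the class and method names are illustrative inventions, not coderlm's actual API:

```python
# A symbol table with cross-references: the agent asks for exactly the
# symbols, implementations, and callers it needs instead of scanning files.

class CodeIndex:
    def __init__(self):
        self.symbols = {}   # name -> source text
        self.callers = {}   # callee name -> set of caller names

    def add(self, name, source, calls=()):
        self.symbols[name] = source
        for callee in calls:
            self.callers.setdefault(callee, set()).add(name)

    def search(self, needle):
        """Find symbols by (sub)name across the codebase."""
        return [n for n in self.symbols if needle in n]

    def impl(self, name):
        """Retrieve the exact source of a symbol."""
        return self.symbols[name]

    def who_calls(self, name):
        """Find everything that calls a given symbol."""
        return sorted(self.callers.get(name, ()))

idx = CodeIndex()
idx.add("parse_config", "def parse_config(path): ...")
idx.add("main", "def main(): ...", calls=["parse_config"])

assert idx.search("config") == ["parse_config"]
assert idx.who_calls("parse_config") == ["main"]
```

The real tool builds this index with tree-sitter parses and serves it over an API, but the agent-facing contract is the same: structured lookups instead of glob/grep/read.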
As an anecdotal comparison, I ran the same prompt against a codebase: "explore and identify opportunities to clarify the existing structure". Using coderlm, Claude generated a plan in about 3 minutes. The coderlm-enabled instance found a genuine bug (duplicated code with identical names), orphaned code for cleanup, mismatched naming conventions crossing module boundaries, and overlapping vocabulary. These are all semantic issues that clearly benefit from the tree-sitter-centric approach. Using the native tools, Claude identified various file clutter in the root of the project, out-of-date references, and a migration timestamp collision. These findings are more consistent with methodical walks of the filesystem, and took about 8 minutes to produce. The indexed approach did better at catching semantic issues than the native tools, with the added benefit of being faster. I've spent some effort streamlining the installation process, but it isn't turnkey yet. You'll need the Rust toolchain to build the server, which runs as a separate process. Installing the plugin from a Claude marketplace is possible, but the skill isn't added to your .claude yet, so there are some manual steps before Claude can use it. Claude also continues to show significant resistance to using CodeRLM in exploration tasks; typically you will need to explicitly direct it to use the tool. --- Repo: github.com/JaredStewart/coderlm Paper: Recursive Language Models https://bit.ly/4rG4RXf — Zhang, Kraska, Khattab (MIT CSAIL, 2025) Inspired by: https://bit.ly/3MrxwAn https://bit.ly/4rOWKIf February 11, 2026 at 02:10PM
Show HN: Agent framework that generates its own topology and evolves at runtime https://bit.ly/4ky6Omu
Show HN: Agent framework that generates its own topology and evolves at runtime Hi HN, I’m Vincent from Aden. We spent 4 years building ERP automation for construction (PO/invoice reconciliation). We had real enterprise customers but hit a technical wall: chatbots aren't for real work. Accountants don't want to chat; they want the ledger reconciled while they sleep. They want services, not tools. Existing agent frameworks (LangChain, AutoGPT) failed in production - brittle, looping, and unable to handle messy data. General Computer Use (GCU) frameworks were even worse. My reflections: 1. The "Toy App" Ceiling & GCU Trap Most frameworks assume synchronous sessions. If the tab closes, state is lost. You can't fit 2 weeks of asynchronous business state into an ephemeral chat session. The GCU hype (agents "looking" at screens) is skeuomorphic. It’s slow (screenshots), expensive (tokens), and fragile (UI changes = crash). It mimics human constraints rather than leveraging machine speed. Real automation should be headless. 2. Inversion of Control: OODA > DAGs Traditional DAGs are deterministic; if a step fails, the program crashes. In the AI era, the Goal is the law, not the Code. We use an OODA loop to manage stochastic behavior: - Observe: Exceptions are observations (FileNotFound = new state), not crashes. - Orient: Adjust strategy based on Memory and Traits. - Decide: Generate new code at runtime. - Act: Execute. The topology shouldn't be hardcoded; it should emerge from the task's entropy. 3. Reliability: The "Synthetic" SLA You can't guarantee one inference ($k=1$) is correct, but you can guarantee a System of Inference ($k=n$) converges on correctness. Reliability is now a function of compute budget. By wrapping an 80% accurate model in a "Best-of-3" verification loop, we mathematically force the error rate down, trading latency/tokens for certainty. 4. Biology & Psychology in Code "Hard Logic" can't solve "Soft Problems."
We map cognition to architectural primitives: Homeostasis: solving "perseveration" (infinite loops) via a "stress" metric. If an action fails 3x, "neuroplasticity" drops, forcing a strategy shift. Traits: personality as a constraint. "High Conscientiousness" increases verification; "High Risk" executes DROP TABLE without asking. For the industry, we need engineers interested in the intersection of biology, psychology, and distributed systems to help us move beyond brittle scripts. It'd be great to have you roast my code and share feedback. Repo: https://bit.ly/4rjN60f https://bit.ly/4612RRa February 11, 2026 at 08:39PM
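The "Best-of-3" claim in point 3 above can be checked with a quick probability calculation, assuming a majority vote over independent attempts; independence is a strong assumption in practice, so treat this as an optimistic bound:

```python
from math import comb

# Majority vote over k independent attempts, each correct with
# probability p: accuracy is the probability that a majority is correct.

def majority_accuracy(p: float, k: int) -> float:
    need = k // 2 + 1
    return sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(need, k + 1))

acc3 = majority_accuracy(0.8, 3)   # 0.8^3 + 3 * 0.8^2 * 0.2 = 0.896
assert abs(acc3 - 0.896) < 1e-9
assert majority_accuracy(0.8, 5) > acc3   # more compute, fewer errors
```

So an 80% model under an idealized best-of-3 vote reaches about 89.6% accuracy, illustrating the post's point that reliability becomes a function of compute budget.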
Tuesday, 10 February 2026
Show HN: Yan – Glitch Art Photo/Live Editor https://bit.ly/4rOHLhr
Show HN: Yan – Glitch Art Photo/Live Editor Everything evolves in digitality, and deconstructs in logic. Tired of filters that make everyone look like a glazed donut? Same. Yan is not another beauty app. It's a digital chaos engine that treats your pixels like they owe it money. We don't enhance photos — we interrogate them at the binary level until they confess their true nature. [What We Actually Do] • Luma Stretch: Grab your image by its light and shadow, then yeet it into oblivion. Speed lines included. • Pixel Sort: Let gravity do art. Pixels fall, colors cascade, Instagram influencers cry. • RGB Shift: That drunk 3D glasses effect, but on purpose. Your eyes will thank us. Or sue us. • Block Jitter: Ctrl+Z had a nightmare. This is what it dreamed. [Why Yan?] Because "vintage filter #47" is not a personality. Because glitch is not a bug — it's a feature. Because sometimes the most beautiful thing you can do to a photo is break it. Warning: Side effects may include artistic awakening, filter addiction withdrawal, and an uncontrollable urge to deconstruct everything. Your camera roll will never be boring again. https://bit.ly/46lFOkn February 11, 2026 at 05:19AM
Show HN: Model Training Memory Simulator https://bit.ly/4tuFsBI
Show HN: Model Training Memory Simulator https://bit.ly/46lDeLd February 8, 2026 at 10:39AM
Show HN: I vibecoded 177 tools for my own use (CalcBin) https://bit.ly/4tMlOS3
Show HN: I vibecoded 177 tools for my own use (CalcBin) Hey HN! I've been building random tools whenever I needed them over the past few months, and now I have 177 of them. Started because I was tired of sketchy converter sites with 10 ads, so I just... made my own. Some highlights for the dev crowd: Developer tools: - UUID Generator (v1/v4/v7, bulk generation): https://bit.ly/4qNOvLH - JWT Generator & Decoder: https://bit.ly/4aHg9ot - JSON Formatter/Validator: https://bit.ly/4aJfF1a - Cron Expression Generator (with natural language): https://bit.ly/3MeEZCZ - Base64 Encoder/Decoder: https://bit.ly/4ra9OI2 - Regex Tester: https://bit.ly/4reyteL - SVG Optimizer (SVGO-powered, client-side): https://bit.ly/4ttUvLP Fun ones: - Random Name Picker (spin wheel animation): https://bit.ly/45YUUvM - QR Code Generator: https://bit.ly/45UvIXq Everything runs client-side (Next.js + React), no ads, no tracking, works offline. Built it for myself but figured others might find it useful. Browse all tools: https://bit.ly/4aHy9z4 Tech: Next.js 14 App Router, TypeScript, Tailwind, Turborepo monorepo. All open to feedback! https://bit.ly/4tvp9o4 February 11, 2026 at 03:46AM
Show HN: ArtisanForge: Learn Laravel through a gamified RPG adventure https://bit.ly/4apgIls
Show HN: ArtisanForge: Learn Laravel through a gamified RPG adventure Hey HN, I built ArtisanForge, a free platform to learn PHP and Laravel through a medieval-fantasy RPG. Instead of traditional tutorials, you progress through kingdoms, solve coding exercises in a browser editor, earn XP, join guilds, and fight boss battles. Tech stack: Laravel 12, Livewire 3, Tailwind CSS, Alpine.js. Code execution runs sandboxed via php-wasm in the browser. What's in there: - 12 courses across 11 kingdoms (PHP basics to deployment) - 100+ interactive exercises with real-time code validation using AST analysis - AI companion (Pip the Owlox) that uses Socratic method – never gives direct answers - Full gamification: XP, levels, streaks, achievements, guilds, leaderboard - Multilingual (EN/FR/NL) The idea came from seeing too many beginners drop off traditional courses. Wrapping concepts in quests and progression mechanics keeps motivation high without dumbing down the content. Everything is free, no paywall, no premium tier. Feedback welcome – especially from Laravel devs and educators. https://bit.ly/3O6FZtr February 8, 2026 at 08:15AM
Monday, 9 February 2026
Show HN: I built a cloud hosting for OpenClaw https://bit.ly/4r4v5CX
Show HN: I built a cloud hosting for OpenClaw Yet another OpenClaw wrapper, but I really enjoyed the techy part of this project, especially the server provisioning in the background. https://bit.ly/4a6nhe2 February 9, 2026 at 11:39PM
Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust https://bit.ly/4aGPJDq
Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust Fish is the fastest, friendliest interactive shell, but it can't run bash syntax, which has kept it niche for 20 years. Reef fixes this with a three-tier approach: fish function wrappers for common keywords (export, unset, source), a Rust-powered AST translator using conch-parser for structural syntax (for/do/done, if/then/fi, $()), and a bash passthrough with env capture for everything else. 251/251 bash constructs pass in the test suite. The slowest path (full bash passthrough) takes ~3ms. The binary is 1.18MB. The goal: install fish, install reef, never think about bash compatibility again. Your muscle memory, Stack Overflow commands, and tool configs all just work. https://bit.ly/3O6BYFp February 10, 2026 at 12:44AM
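A toy model of Reef's three-tier dispatch, with deliberately simplified translations; the real translator uses conch-parser for a full AST and covers far more syntax than these two rules:

```python
# Tier 1: cheap keyword wrappers; Tier 2: structural translation;
# Tier 3: fall back to running bash itself. First tier that matches wins.

def tier1_keyword(cmd: str):
    # e.g. bash `export FOO=bar` -> fish `set -gx FOO bar`
    if cmd.startswith("export ") and "=" in cmd:
        name, value = cmd[len("export "):].split("=", 1)
        return f"set -gx {name} {value}"
    return None

def tier2_structural(cmd: str):
    # e.g. bash command substitution $(...) -> fish (...)
    if "$(" in cmd:
        return cmd.replace("$(", "(")
    return None

def translate(cmd: str) -> str:
    for tier in (tier1_keyword, tier2_structural):
        out = tier(cmd)
        if out is not None:
            return out
    return f"bash -c '{cmd}'"   # tier 3: passthrough

assert translate("export PATH=/opt/bin") == "set -gx PATH /opt/bin"
assert translate("echo $(date)") == "echo (date)"
assert translate("for i in 1 2; do echo $i; done").startswith("bash -c")
```

The design point is the same as Reef's: handle the common cases cheaply in-process, and reserve the slow passthrough (the ~3ms path mentioned above) for constructs the translator doesn't cover.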
Sunday, 8 February 2026
Show HN: Ported the 1999 game Bugdom to the browser and added a bunch of mods https://bit.ly/4klu5b4
Show HN: Ported the 1999 game Bugdom to the browser and added a bunch of mods I think the very first video game I ever played was Bugdom by Pangea Software, which came with the original iMac. There was also a shooter called Nanosaur, but my 7-year-old heart belonged to the more peaceable Bugdom, which featured a roly-poly named Rollie McFly needing to rescue ladybugs from evil fire ants and bees. Upon seeing the port to modern systems ( https://bit.ly/4tuDWzI ), I figured it should be able to run entirely in-browser nowadays, and also that AI coding tools "should" be able to do this entire project for me. I ended up spending perhaps 20 hours on it with Claude Code, but we got there. Once ported, I added a half-dozen mods that would have pleased my childhood self (like low-gravity mode and flying slugs & caterpillars mode), and a few that please my current self (like Dance Party mode). EDIT: Here are some mod/level combinations I recommend * https://bit.ly/3Msk7Iq... * https://bit.ly/4rJLFbr... * https://bit.ly/4a49Aw6... https://bit.ly/4rvsdPe February 9, 2026 at 04:07AM
Show HN: IsHumanCadence – Bot detection via keystroke dynamics (no CAPTCHAs) https://bit.ly/3Zp5E2U
Show HN: IsHumanCadence – Bot detection via keystroke dynamics (no CAPTCHAs) https://bit.ly/4agbc4I February 9, 2026 at 01:40AM
Show HN: A custom font that displays Cistercian numerals using ligatures https://bit.ly/4to51Er
Show HN: A custom font that displays Cistercian numerals using ligatures https://bit.ly/4twswLN February 8, 2026 at 11:39PM
Saturday, 7 February 2026
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory https://bit.ly/4a31tzU
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory I built LocalGPT over 4 nights as a Rust reimagining of the OpenClaw assistant pattern (markdown-based persistent memory, autonomous heartbeat tasks, skills system). It compiles to a single ~27MB binary — no Node.js, Docker, or Python required. Key features: - Persistent memory via markdown files (MEMORY, HEARTBEAT, SOUL) — compatible with OpenClaw's format - Full-text search (SQLite FTS5) + semantic search (local embeddings, no API key needed) - Autonomous heartbeat runner that checks tasks on a configurable interval - CLI + web interface + desktop GUI - Multi-provider: Anthropic, OpenAI, Ollama, etc. - Apache 2.0 Install: `cargo install localgpt` I use it daily as a knowledge accumulator, research assistant, and autonomous task runner for my side projects. The memory compounds — every session makes the next one better. GitHub: https://bit.ly/3O2Lc5C Website: https://bit.ly/4a31uDY Would love feedback on the architecture or feature ideas. https://bit.ly/3O2Lc5C February 8, 2026 at 02:26AM
Show HN: More beautiful and usable Hacker News https://bit.ly/4r4GfaX
Show HN: More beautiful and usable Hacker News Gives you keyboard navigation. Let me know what you think. https://twitter.com/shivamhwp/status/2020125417995436090 February 8, 2026 at 02:33AM
Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals https://bit.ly/3ZsfPnk
Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals https://bit.ly/4akLMmz February 7, 2026 at 11:40PM
Friday, 6 February 2026
Show HN: FastLog: 1.4 GB/s text file analyzer with AVX2 SIMD https://bit.ly/3OqOAXQ
Show HN: FastLog: 1.4 GB/s text file analyzer with AVX2 SIMD https://bit.ly/4qpevwB February 7, 2026 at 02:32AM
Show HN: A Kubernetes Operator to Validate Jupyter Notebooks in MLOps https://bit.ly/4rAX8tG
Show HN: A Kubernetes Operator to Validate Jupyter Notebooks in MLOps I built an open-source Kubernetes operator to automate the validation of Jupyter Notebooks in MLOps workflows. It's called the Jupyter Notebook Validator Operator and it's designed to catch issues with notebooks before they hit production. It runs notebooks in isolated pods and can validate them against deployed ML models on platforms like KServe, OpenShift AI, and vLLM. It also does regression testing by comparing notebook outputs against a "golden" version. The goal is to make notebooks more reliable and reproducible in production environments. It's built with Go and the Operator SDK. We're looking for contributors. There are opportunities to work on features like smarter error reporting, observability dashboards, and adding support for more platforms. GitHub: https://bit.ly/3ObO3Jf... https://bit.ly/46dId0r February 7, 2026 at 01:10AM
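A minimal sketch of the "golden output" regression idea the operator implements; real notebooks are nbformat JSON, and the structures below are abbreviated stand-ins:

```python
# Compare the text outputs of each code cell in a candidate notebook
# against a stored "golden" reference run.

def cell_outputs(notebook: dict) -> list[str]:
    outs = []
    for cell in notebook["cells"]:
        if cell["cell_type"] == "code":
            outs.append("".join(o.get("text", "") for o in cell.get("outputs", [])))
    return outs

golden = {"cells": [{"cell_type": "code",
                     "outputs": [{"text": "acc=0.91\n"}]}]}
candidate = {"cells": [{"cell_type": "code",
                        "outputs": [{"text": "acc=0.91\n"}]}]}

assert cell_outputs(candidate) == cell_outputs(golden)   # regression passes
```

In the operator, the candidate run happens inside an isolated pod against the deployed model endpoint, and a mismatch with the golden outputs fails the validation.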
Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly https://bit.ly/3O1GwNh
Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly https://bit.ly/3MaBiOI February 6, 2026 at 11:19PM
Thursday, 5 February 2026
Show HN: Calfkit – an SDK to build distributed, event-driven AI agents https://bit.ly/4ru0u1n
Show HN: Calfkit – an SDK to build distributed, event-driven AI agents I think agents should work like real teams, with independent, distinct roles, async communication, and the ability to onboard new teammates or tools without restructuring the whole org. I built backend systems at Yahoo and TikTok so event-driven agents felt obvious. But no agent SDKs were using this pattern, so I made Calfkit. Calfkit breaks down agents into independent services (LLM inference, tools, and routing) that communicate asynchronously through Kafka. Agents, tool services, and downstream consumers can be deployed, added-to, removed, and scaled independently. Check it out if this interests you! I’m curious to see what y’all think. https://bit.ly/4tlDXpq February 6, 2026 at 12:10AM
Show HN: Total Recall – write-gated memory for Claude Code https://bit.ly/4rbjqCr
Show HN: Total Recall – write-gated memory for Claude Code https://bit.ly/4rbjqSX February 6, 2026 at 12:56AM
Show HN: A state-based narrative engine for tabletop RPGs https://bit.ly/3ZoN2Qy
Show HN: A state-based narrative engine for tabletop RPGs I’m experimenting with modeling tabletop RPG adventures as explicit narrative state rather than linear scripts. Everdice is a small web app that tracks conditional scenes and choice-driven state transitions to preserve continuity across long or asynchronous campaigns. The core contribution is explicit narrative state and causality, not automation. The real heavy lifting happens in the DM Toolkit/Run Sessions area, which integrates CAML (Canonical Adventure Modeling Language), a format I developed to transport narratives among any number of platforms. I also built the npm package CAML-lint to check the validity of narratives. I'm interested in your thoughts. https://bit.ly/4rstuXo https://bit.ly/4khOu0B February 5, 2026 at 11:55PM
Wednesday, 4 February 2026
Show HN: LLM Jailbreak Database https://bit.ly/45MlF6B
Show HN: LLM Jailbreak Database I vibe-coded this online DB for LLM injection prompts. It requires no registration or login and has some ambitious spam/bot filtering. I'm interested in tuning the barriers to interaction to a sweet spot where the DB stays balanced and the useful, working injections actually rise to the top. Thoughts? https://bit.ly/3ZiszwQ February 4, 2026 at 11:07PM
Show HN: Bunqueue – Job queue for Bun using SQLite instead of Redis https://bit.ly/4qgqo7S
Show HN: Bunqueue – Job queue for Bun using SQLite instead of Redis https://bit.ly/46oMxtB February 2, 2026 at 02:55AM
Show HN: The Last Worm – Visualizing guinea worm eradication, from 3.5M to 10 https://bit.ly/4klzH5l
Show HN: The Last Worm – Visualizing guinea worm eradication, from 3.5M to 10 https://bit.ly/3Mn1A08 February 4, 2026 at 11:58PM
Tuesday, 3 February 2026
Show HN: Craftplan – I built my wife a production management tool for her bakery https://bit.ly/4rv9Aeq
Show HN: Craftplan – I built my wife a production management tool for her bakery My wife was planning to open a micro-bakery. We looked at production management software and it was all either expensive or way too generic. The actual workflows for a small-batch manufacturer aren't that complex, so I built one and open-sourced it. Craftplan handles recipes (versioned BOMs with cost rollups), inventory (lot traceability, demand forecasting, allergen tracking), orders, production batch planning, and purchasing. Built with Elixir, Ash Framework, Phoenix LiveView, and PostgreSQL. Live demo: https://bit.ly/3O3vU0c (test@test.com / Aa123123123123) GitHub: https://bit.ly/4ryZ6ec https://bit.ly/4ryZ6ec February 1, 2026 at 06:25PM
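As a sketch of the versioned-BOM cost rollup idea: a recipe's cost is the sum of its ingredients' costs, where an ingredient may itself be a recipe. Craftplan itself is Elixir; this is just the recursion in Python, with prices and recipes invented for illustration:

```python
# Raw ingredient prices (per kg) and recipes as (component, quantity) lists.
unit_cost = {"flour": 0.8, "butter": 4.0, "sugar": 1.5}

recipes = {
    "dough": [("flour", 0.5), ("butter", 0.25)],
    "croissant_batch": [("dough", 1.0), ("sugar", 0.05)],
}

def rollup(item: str, qty: float = 1.0) -> float:
    """Recursively price an item: raw ingredients bottom out the recursion."""
    if item in unit_cost:
        return unit_cost[item] * qty
    return qty * sum(rollup(part, amount) for part, amount in recipes[item])

# dough: 0.5*0.8 + 0.25*4.0 = 1.40; batch: 1.40 + 0.05*1.5 = 1.475
assert abs(rollup("croissant_batch") - 1.475) < 1e-9
```

Versioning the BOM then just means pinning which revision of each recipe list the rollup reads, so historical batches keep their historical costs.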
Show HN: I built an AI twin recruiters can interview https://bit.ly/4rq48t9
Show HN: I built an AI twin recruiters can interview https://bit.ly/4aedLV6 The problem: hiring new grads is broken. Thousands of identical resumes, but we're all different people. Understanding someone takes time - assessments, phone screens, multiple interviews. Most never get truly seen. I didn't want to be just another PDF. So I built an AI twin that recruiters can actually interview. What you can do: • Interview my AI about anything: https://bit.ly/4rmCT2A • Paste your JD to see if we match: https://bit.ly/4rojOgy • Explore my projects, code, and writing What happened: I sent it to one recruiter on LinkedIn. The next day, traffic spiked as it spread internally. I got interview invites within 24 hours. The bigger vision: what if this became standard? Instead of resume spam → keyword screening → interview rounds that still miss good fits, let recruiter AI talk to candidate AI for deep discovery. Build a platform where anyone can create their AI twin for genuine matching. I'm seeking Software/AI/ML Engineering roles and can build production-ready solutions from scratch. The site itself proves what I can do. Would love HN's thoughts on both the execution and the vision. https://bit.ly/4aedLV6 February 4, 2026 at 12:19AM
Monday, 2 February 2026
Show HN: Axiomeer – An open marketplace for AI agents https://bit.ly/4aykLfJ
Show HN: Axiomeer – An open marketplace for AI agents Hi, I built Axiomeer, an open-source marketplace protocol for AI agents. The idea: instead of hardcoding tool integrations into every agent, agents shop a catalog at runtime, and the marketplace ranks, executes, validates, and audits everything. How it works: - Providers publish products (APIs, datasets, model endpoints) via 10-line JSON manifests - Agents describe what they need in natural language or structured tags - The router scores all options by capability match (70%), latency (20%), cost (10%) with hard constraint filters - The top pick is executed, output is validated (citations required? timestamps?), and evidence quality is assessed deterministically - If the evidence is mock/fake/low-quality, the agent abstains rather than hallucinating - Every execution is logged as an immutable receipt The trust layer is the part I think is missing from existing approaches. MCP standardizes how you connect to a tool server. Axiomeer operates one layer up: which tool, from which provider, and can you trust what came back? Stack: Python, FastAPI, SQLAlchemy, Ollama (local LLM, no API keys). v1 ships with weather providers (Open-Meteo + mocks). The architecture supports any HTTP endpoint that returns structured JSON. Looking for contributors to add real providers across domains (finance, search, docs, code execution). Each provider is ~30 lines + a manifest. https://bit.ly/4byfTsV February 3, 2026 at 01:43AM
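The routing score described above (70/20/10 weighting after hard-constraint filtering) can be sketched as follows; the normalization details are my guesses for illustration, not Axiomeer's exact formula:

```python
# Score = 0.7 * capability + 0.2 * latency score + 0.1 * cost score,
# after dropping any provider that violates a hard constraint.

def score(provider, max_latency_ms=None):
    if max_latency_ms is not None and provider["latency_ms"] > max_latency_ms:
        return None   # hard constraint: filtered out entirely
    latency_score = 1.0 - min(provider["latency_ms"] / 1000.0, 1.0)
    cost_score = 1.0 - min(provider["cost_usd"] / 1.0, 1.0)
    return 0.7 * provider["capability"] + 0.2 * latency_score + 0.1 * cost_score

a = {"capability": 0.9, "latency_ms": 400, "cost_usd": 0.02}
b = {"capability": 0.95, "latency_ms": 2000, "cost_usd": 0.01}

assert score(b, max_latency_ms=1000) is None    # fails the hard filter
assert score(a) is not None and score(a) > 0.7  # 0.63 + 0.12 + 0.098
```

The key design point is that hard constraints filter before the weighted sum runs, so a slightly more capable but non-compliant provider can never outrank a compliant one.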
Show HN: Kannada Nudi Editor Web Version https://bit.ly/4aclQcT
Show HN: Kannada Nudi Editor Web Version Ported the Desktop Version of Kannada Nudi Editor to Web under the guidance of https://bit.ly/4a4vbER https://bit.ly/49W412S February 3, 2026 at 05:11AM
Show HN: Stream-based AI with neurological multi-gate (Na⁺/θ/NMDA) https://bit.ly/49UI6cb
Show HN: Stream-based AI with neurological multi-gate (Na⁺/θ/NMDA) Current LLMs struggle with compositional inference because they lack physical boundaries. CSCT implements a neurological multi-gate mechanism (Na⁺/θ/NMDA) to enforce L1 geometry and physical grounding. In my experiments (EX8/9), this architecture achieved 96.7% success in compositional inference within the convex hull, far outperforming unconstrained models. Key features: Stream-based: no batching or static context; it processes information as a continuous flow. Neurological gating: a computational implementation of θ-γ coupling using Na⁺- and NMDA-inspired gates. Zero-shot reasoning: incurs no "hallucination" for in-hull compositions. Detailed technical write-up: [ https://bit.ly/4kds5BD... ] I’m eager to hear your thoughts on this "Projected Dynamical System" approach to cognition. https://bit.ly/4kds5S9 February 3, 2026 at 03:59AM
Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed https://bit.ly/3Ois03A
Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed A few weeks ago I posted about GoodToGo https://bit.ly/4pI0dXu - a tool that gives AI agents a deterministic answer to "is this PR ready to merge?" Several people asked about the larger orchestration system I mentioned. This is that system. I got tired of being a project manager for Claude Code. It writes code fine, but shipping production code is seven or eight jobs — research, planning, design review, implementation, code review, security audit, PR creation, CI babysitting. I was doing all the coordination myself. The agent typed fast. I was still the bottleneck. What I really needed was an orchestrator of orchestrators - swarms of swarms of agents with deterministic quality checks. So I built metaswarm. It breaks work into phases and assigns each to a specialist swarm orchestrator. It manages handoffs and uses BEADS for deterministic gates that persist across /compact, /clear, and even across sessions. Point it at a GitHub issue or brainstorm with it (it uses Superpowers to ask clarifying questions) and it creates epics, tasks, and dependencies, then runs the full pipeline to a merged PR - including outside code review like CodeRabbit, Greptile, and Bugbot. The thing that surprised me most was the design review gate. Five agents — PM, Architect, Designer, Security, CTO — review every plan in parallel before a line of code gets written. All five must approve. Three rounds max, then it escalates to a human. I expected a rubber stamp. It catches real design problems, dependency issues, security gaps. This weekend I pointed it at my backlog. 127 PRs merged. Every one hit 100% test coverage. No human wrote code, reviewed code, or clicked merge. OK, I guided it a bit, mostly helping with plans for some of the epics. A few learnings: Agent checklists are theater. Agents skipped coverage checks, misread thresholds, or decided they didn't apply. Prompts alone weren't enough. 
The fix was deterministic gates — BEADS, pre-push hooks, and CI jobs, all on top of the agent completion check. The gates block bad code whether or not the agent cooperates. The agents are just markdown files. No custom runtime, no server, and while I built it in TypeScript, the agents are language-agnostic. You can read all of them, edit them, add your own. It self-reflects too. After every merged PR, the system extracts patterns, gotchas, and decisions into a JSONL knowledge base. Agents only load entries relevant to the files they're touching. The more it ships, the fewer mistakes it makes. It learns as it goes. metaswarm stands on two projects: https://bit.ly/465Uggf by Steve Yegge (git-native task tracking and knowledge priming) and https://bit.ly/4tg1fwL by Jesse Vincent (disciplined agentic workflows — TDD, brainstorming, systematic debugging). Both were essential. Background: I founded Technorati, Linuxcare, and Warmstart; tech exec at Lyft and Reddit. I built metaswarm because I needed autonomous agents that could ship to a production codebase with the same standards I'd hold a human team to.
$ cd my-project-name
$ npx metaswarm init
MIT licensed. IANAL. YMMV. Issues/PRs welcome! https://bit.ly/4tcbDpg February 3, 2026 at 02:18AM
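The JSONL knowledge base described above amounts to a filter over line-delimited entries, loading only those scoped to the files an agent is touching. A minimal TypeScript sketch, assuming a hypothetical entry shape (`kind`, `note`, `paths`) rather than metaswarm's actual schema:

```typescript
// Hypothetical knowledge-base entry (metaswarm's real schema is not shown
// in the post; field names here are assumptions).
interface KnowledgeEntry {
  kind: "pattern" | "gotcha" | "decision";
  note: string;
  paths: string[]; // path prefixes this entry applies to
}

// Parse a JSONL blob and keep only entries relevant to the touched files.
function relevantEntries(jsonl: string, touchedFiles: string[]): KnowledgeEntry[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as KnowledgeEntry)
    .filter((e) => touchedFiles.some((f) => e.paths.some((p) => f.startsWith(p))));
}
```

Path-scoping the entries is what keeps the context load small as the knowledge base grows with each merged PR.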
Sunday, 1 February 2026
Show HN: ContractShield – AI contract analyser for freelancers https://bit.ly/463VwR1
Show HN: ContractShield – AI contract analyser for freelancers Built this with Claude Code. Analyses freelance contracts for 12 risk categories (payment terms, IP ownership, scope issues, termination clauses, etc.) and flags problems with specific recommendations. 40% of freelancers report getting stiffed by clients, often due to vague contract terms. This tool aims to help catch those issues before signing. Currently free while validating whether this solves a real problem. Would love HN's feedback, especially on: - Accuracy of the analysis - Whether this is actually useful for freelancers - What's missing or could be improved Tech stack: Node.js, Express, Anthropic Claude API, deployed on Railway. https://bit.ly/3ZcvXJv February 2, 2026 at 04:11AM
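For illustration, the category-driven analysis could be assembled into a single prompt along these lines. A hedged TypeScript sketch: the helper name is hypothetical, and only the four categories named in the post are listed (the other eight are not given):

```typescript
// The post names 4 of the 12 risk categories; the rest are unknown.
const RISK_CATEGORIES = [
  "payment terms",
  "IP ownership",
  "scope issues",
  "termination clauses",
];

// Hypothetical prompt builder for a Claude API call (not the app's actual code).
function buildAnalysisPrompt(contractText: string): string {
  return [
    "Analyse the freelance contract below for these risk categories:",
    RISK_CATEGORIES.map((c) => `- ${c}`).join("\n"),
    "Flag each problem found and give a specific recommendation.",
    "---",
    contractText,
  ].join("\n");
}
```

The resulting string would then be sent as the user message in a Claude API request, with the structured response parsed back into per-category flags.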
Show HN: Is AI "good" yet? – tracking HN sentiment on AI coding https://bit.ly/4qfxfhU
Show HN: Is AI "good" yet? – tracking HN sentiment on AI coding A survey tracking developer sentiment on AI-assisted coding through Hacker News posts. https://bit.ly/4q7Kp0j February 2, 2026 at 03:06AM
Show HN: Wikipedia as a doomscrollable social media feed https://bit.ly/3Oj4jbm
Show HN: Wikipedia as a doomscrollable social media feed https://bit.ly/4rj1aXw February 2, 2026 at 01:12AM
Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation https://bit.ly/4qau5fm
Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation I’ve been running Clawdbot for the last couple of weeks and have genuinely found it useful, but running it scares the crap out of me. OpenClaw has 52+ modules and runs agents with near-unlimited permissions in a single Node process. NanoClaw is ~500 lines of core code, and agents run in actual Apple containers with filesystem isolation. Each chat gets its own sandboxed context. This is not a Swiss Army knife. It’s built to match my exact needs. Fork it and make it yours. https://bit.ly/4qTY7oY February 1, 2026 at 11:49PM
Saturday, 31 January 2026
Show HN: Peptide calculators ask the wrong question. I built a better one https://bit.ly/4r36xtW
Show HN: Peptide calculators ask the wrong question. I built a better one Most peptide calculators ask the wrong question. They ask: How much water are you adding? But in practice, what you actually know is your vial size and your target dose. The water amount should be the output, not the input. It should also make your dose land on a real syringe tick mark. Not something like 17.3 units. I built a peptide calculator that works this way: https://bit.ly/4r36y0Y What’s different: - You pick vial size and target dose → reconstitution is calculated for you - Doses align to actual syringe markings - Common dose presets per peptide - Works well on mobile (where this is usually done) - Supports blends and compounds (e.g. GLOW or CJC-1295 + Ipamorelin) - You can save your vials. No account required. Happy to hear feedback or edge cases worth supporting. https://bit.ly/4r36y0Y February 1, 2026 at 03:02AM
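The "water is the output" idea is one line of arithmetic: pick the syringe tick you want the dose to land on, then solve for the diluent volume. A sketch assuming a standard U-100 insulin syringe (100 units per mL); the function name and default tick target are hypothetical, not the site's actual code:

```typescript
// "Water as the output" reconstitution math (assumptions: U-100 syringe,
// hypothetical function name, 10-unit default tick target).
const UNITS_PER_ML = 100;

function waterToAdd(vialMg: number, doseMg: number, targetUnits = 10): number {
  // Dose volume that lands exactly on `targetUnits` tick marks:
  const doseVolumeMl = targetUnits / UNITS_PER_ML;
  // The concentration must be doseMg / doseVolumeMl, so the diluent is:
  return (vialMg * doseVolumeMl) / doseMg;
}

// 5 mg vial, 0.25 mg (250 mcg) dose on the 10-unit mark → add 2 mL of water
console.log(waterToAdd(5, 0.25)); // 2
```

Reversing the direction of the calculation is what guarantees the dose lands on a whole tick mark instead of something like 17.3 units.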
Show HN: I built a receipt processor for Paperless-ngx https://bit.ly/4tcdRFa
Show HN: I built a receipt processor for Paperless-ngx Hi all, I wanted a robust way to keep track of my receipts without needing to keep them in a box, and so I found Paperless - but the existing Paperless AI projects didn't really convert my receipts to usable data. So I created a fork of nutlope's receipthero (actually it's a complete rewrite; the only thing carried over is the system prompt). The goal of this project is to be a one-stop shop for automatically detecting tagged docs and converting them to JSON using schema definitions - that includes invoices, .... I can't think of any others right now, maybe you can? If you do, please make an issue for it! I would appreciate any feedback/issues, thanks! (P.S. I made sure it's simple to set up with Dockge or a basic docker-compose.yml) repo: https://bit.ly/4a61i5v tutorial: https://youtu.be/LNlUDtD3og0 February 1, 2026 at 01:17AM
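The schema-definition approach the post describes could be typed like this. A hypothetical receipt schema for illustration only, not the project's actual one:

```typescript
// Hypothetical receipt schema (field names are assumptions, not the
// project's real schema definitions).
interface ReceiptData {
  merchant: string;
  date: string; // ISO 8601
  total: number; // in the receipt's currency
  currency: string;
  lineItems: { description: string; amount: number }[];
}

// Cheap sanity check on extracted data: do the line items sum to the total?
function totalsMatch(r: ReceiptData): boolean {
  const sum = r.lineItems.reduce((acc, li) => acc + li.amount, 0);
  return Math.abs(sum - r.total) < 0.005;
}
```

A check like `totalsMatch` is a simple way to catch extraction errors before the JSON output is trusted downstream.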
Show HN: An Open Source Alternative to Vercel/Render/Netlify https://bit.ly/49Q4u6C
Show HN: An Open Source Alternative to Vercel/Render/Netlify https://bit.ly/4kdJp9B February 1, 2026 at 01:40AM
Friday, 30 January 2026
Show HN: Foundry – Turns your repeated workflows into one-click commands https://bit.ly/4a3z77o
Show HN: Foundry – Turns your repeated workflows into one-click commands https://bit.ly/4a54r5L January 31, 2026 at 01:40AM
Show HN: Using World Models for Consistent AI Filmmaking https://bit.ly/4ab3mcH
Show HN: Using World Models for Consistent AI Filmmaking https://bit.ly/4a7AWjS January 30, 2026 at 10:41PM
Thursday, 29 January 2026
Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser) https://bit.ly/4amiBRb
Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser) Hi HN, I've been building Mystral Native — a lightweight native runtime that lets you write games in JavaScript/TypeScript using standard Web APIs (WebGPU, Canvas 2D, Web Audio, fetch) and run them as standalone desktop apps. Think "Electron for games" but without Chromium. Or a JS runtime like Node, Deno, or Bun but optimized for WebGPU (and bundling a window / event system using SDL3). Why: I originally started building a new game engine in WebGPU, and I loved the iteration loop of writing TypeScript & instantly seeing the changes in the browser with hot reloading. After getting something working and shipping a demo, I realized that shipping a whole browser doesn't really work if I also want the same codebase to work on mobile. Sure, I could use a webview, but that's not always a good or consistent experience for users - there are nuances with Safari on iOS supporting WebGPU, but not the same features that Chrome does on desktop. What I really wanted was a WebGPU runtime that is consistent & works on any platform. I was inspired by Deno's --unsafe-webgpu flag, but I realized that Deno probably wouldn't be a good fit long term because it doesn't support iOS or Android & doesn't bundle a window / event system (they have "bring your own window", but that means writing a lot of custom code for events, dealing with windowing, not to mention more specific things like implementing a Web Audio shim, etc.). So that got me down the path of building a native runtime specifically for games & that's Mystral Native. So now with Mystral Native, I can have the same developer experience (write JS, use shaders in WGSL, call requestAnimationFrame) but get a real native binary I can ship to players on any platform without requiring a webview or a browser. No 200MB Chromium runtime, no CEF overhead, just the game code and a ~25MB runtime.
What it does: - Full WebGPU via Dawn (Chrome's implementation) or wgpu-native (Rust) - Native window & events via SDL3 - Canvas 2D support (Skia), Web Audio (SDL3), fetch (file/http/https) - V8 for JS (same engine as Chrome/Node), also supports QuickJS and JSC - ES modules, TypeScript via SWC - Compile to single binary (think "pkg"): `mystral compile game.js --include assets -o my-game` - macOS .app bundles with code signing, Linux/Windows standalone executables - Embedding API for iOS and Android (JSC/QuickJS + wgpu-native) It's early alpha — the core rendering path works well & I've tested on Mac, Linux (Ubuntu 24.04), and Windows 11, and some custom builds for iOS & Android to validate that they can work, but there's plenty to improve. Would love to get some feedback and see where it can go! MIT licensed. Repo: https://bit.ly/4rmOWx5 Docs: https://bit.ly/46oiPVx https://bit.ly/4rmOWx5 January 27, 2026 at 07:33PM
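The "write browser JS, ship a native binary" loop described above ultimately comes down to requestAnimationFrame driving a plain update function. A minimal sketch of that shape; the step function and loop structure are assumptions about app code, not Mystral Native's API:

```typescript
// Pure, testable update step (a stand-in for game-specific logic:
// position advances by velocity over the elapsed time).
function step(pos: number, velocity: number, dtSeconds: number): number {
  return pos + velocity * dtSeconds;
}

// Under Mystral Native (or a browser), requestAnimationFrame would drive it:
// let pos = 0;
// let last = performance.now();
// function frame(now: number) {
//   pos = step(pos, 60, (now - last) / 1000);
//   last = now;
//   requestAnimationFrame(frame);
// }
// requestAnimationFrame(frame);
```

Keeping the update step pure is what lets the same code run unchanged in a browser during development and in the native runtime when shipped.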
Show HN: Free Facebook Video Downloader with Original Audio Quality https://bit.ly/3NL23cV
Show HN: Free Facebook Video Downloader with Original Audio Quality A free, web-based Facebook video downloader that actually preserves the original audio - something most Facebook downloaders fail to do. Built with Next.js and yt-dlp, it offers a clean, no-ads experience for downloading Facebook videos in multiple qualities. https://bit.ly/4t8AgDo January 30, 2026 at 03:22AM
Show HN: Play Zener Cards https://bit.ly/4rkUJTN
Show HN: Play Zener Cards just play zener cards. don't judge :) https://bit.ly/4rhsS6P January 30, 2026 at 01:39AM
Wednesday, 28 January 2026
Show HN: Codex.nvim – Codex inside Neovim (no API key required) https://bit.ly/4aq7cin
Show HN: Codex.nvim – Codex inside Neovim (no API key required) Hi HN! I built codex.nvim, an IDE-style Neovim integration for Codex. Highlights: - Works with OpenAI Codex plans (no API key required) - Fully integrated in Neovim (embedded terminal workflow) - Bottom-right status indicator shows busy/wait state - Send selections or file tree context to Codex quickly Repo: https://bit.ly/46kNNhf Why I built this: I wanted to use Codex comfortably inside Neovim without relying on the API. Happy to hear feedback and ideas! https://bit.ly/46kNNhf January 29, 2026 at 07:17AM
Show HN: Shelvy Books https://bit.ly/4aivwDI
Show HN: Shelvy Books Hey HN! I built a little side project I wanted to share. Shelvy is a free, visual bookshelf app where you can organize books you're reading, want to read, or have finished. Sign in to save your own collection. Not monetized, no ads, no tracking beyond basic auth. Just a fun weekend project that grew a bit. Live: https://bit.ly/45yNLSL Would love any feedback on the UX or feature ideas! https://bit.ly/45yNLSL January 29, 2026 at 02:16AM
Show HN: Drum machine VST made with React/C++ https://bit.ly/45FQ6eK
Show HN: Drum machine VST made with React/C++ Hi HN! We just launched our drum machine VST this month! We will be updating it with many new synthesis models and unique features. Check it out, join our Discord and show us what you made! https://bit.ly/49YmzOv January 27, 2026 at 06:03AM
Show HN: Frame – Managing projects, tasks, and context for Claude Code https://bit.ly/4rcuAqe
Show HN: Frame – Managing projects, tasks, and context for Claude Code I built Frame to better manage the projects I develop with Claude Code, to bring a standard to my Claude Code projects, to improve project and task planning, and to reduce context and memory loss. In its current state, Frame works entirely locally. You don’t need to enter any API keys or anything like that. You can run Claude Code directly using the terminal inside Frame. Why am I not using existing IDEs? Simply because, for me, I no longer need them. What I need is an interface centered around the terminal, not a code editor. I initially built something that allowed me to place terminals in a grid layout, but then I decided to take it further. I realized I also needed to manage my projects and preserve context. I’m still at a very early stage, but even being able to build the initial pieces I had in mind within 5–6 days—using Claude Code itself—feels kind of crazy. What can you do with Frame? You can start a brand-new project or turn an existing one into a Frame project. For this, Frame creates a set of Markdown and JSON files with rules I defined. These files exist mainly to manage tasks and preserve context. You can manually add project-related tasks through the UI. I haven’t had the chance to test very complex or long-running scenarios yet, but from what I’ve seen, Claude Code often asks questions like: “Should I add this as a task to tasks.json?” or “Should we update project_notes.md after this project decision?” I recommend saying yes to these. I also created a JSON file that keeps track of the project structure, down to function-level details. This part is still very raw. In the future, I plan to experiment with different data structures to help AI understand the project more quickly and effectively. As mentioned, you can open your terminals in either a grid or tab view. I added options up to a 3×3 grid. Since the project is open source, you can modify it based on your own needs. 
I also added a panel where you can view and manage plugins. For code files or other files, I included a very simple editor. This part is intentionally minimal and quite basic for now. Based on my own testing, I haven’t encountered any major bugs, but there might be some. I apologize in advance if you run into any issues. My core goal is to establish a standard for AI-assisted projects and make them easier to manage. I’m very open to your ideas, support, and feedback. You can see more details on GitHub : https://bit.ly/4bpLWva January 29, 2026 at 12:04AM
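For a sense of what the tasks.json mentioned above might contain, here is a hypothetical shape; the field names are illustrative and Frame's actual schema may differ:

```typescript
// Hypothetical tasks.json entry shape (field names are assumptions).
interface FrameTask {
  id: string;
  title: string;
  status: "todo" | "in_progress" | "done";
  notes?: string;
}

const tasks: FrameTask[] = [
  { id: "T-1", title: "Wire up the grid terminal layout", status: "done" },
  { id: "T-2", title: "Persist decisions to project_notes.md", status: "todo" },
];

// Claude Code can be told to keep this file current as work progresses.
const open = tasks.filter((t) => t.status === "todo");
console.log(open.length); // 1
```

A machine-readable task list like this is what lets context survive across sessions: the agent re-reads it instead of relying on conversation memory.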