Monday, 2 February 2026
Show HN: Stream-based AI with neurological multi-gate (Na⁺/θ/NMDA) https://bit.ly/49UI6cb
Show HN: Stream-based AI with neurological multi-gate (Na⁺/θ/NMDA)

Current LLMs struggle with compositional inference because they lack physical boundaries. CSCT implements a neurological multi-gate mechanism (Na⁺/θ/NMDA) to enforce L1 geometry and physical grounding. In my experiments (EX8/9), this architecture achieved 96.7% success in compositional inference within the convex hull, far outperforming unconstrained models.

Key features:
- Stream-based: no batching or static context; it processes information as a continuous flow.
- Neurological gating: a computational implementation of θ-γ coupling using Na⁺- and NMDA-inspired gates.
- Zero-shot reasoning: incurs no "hallucination" for in-hull compositions.

Detailed technical write-up: [ https://bit.ly/4kds5BD... ]

I’m eager to hear your thoughts on this "Projected Dynamical System" approach to cognition. https://bit.ly/4kds5S9 February 3, 2026 at 03:59AM
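The post above gives no code, but the three-gate idea can be caricatured in a few lines. This is a toy sketch under my own assumptions (threshold values, a 6 Hz theta rhythm, a scalar input), not CSCT's actual mechanism: a Na⁺-like all-or-nothing threshold, a θ-phase window, and an NMDA-like coincidence requirement must all agree before the input passes.

```python
def multi_gate(x, t, v_prior,
               na_threshold=0.5,    # Na+-like firing threshold (assumed value)
               theta_freq=6.0,      # theta rhythm in Hz (assumed value)
               phase_window=0.25,   # fraction of the cycle where the gate is open
               nmda_threshold=0.3): # NMDA-like coincidence level (assumed value)
    """Pass input x at time t only if all three toy gates agree."""
    # Na+ gate: all-or-nothing amplitude threshold.
    na_open = x >= na_threshold
    # Theta gate: input is admitted only during a window of the theta cycle.
    theta_open = (t * theta_freq) % 1.0 < phase_window
    # NMDA gate: requires coincidence of input and prior "depolarization".
    nmda_open = x > 0 and v_prior >= nmda_threshold
    return x if (na_open and theta_open and nmda_open) else 0.0
```

Any single closed gate silences the input, which is the sense in which gating imposes a hard boundary rather than a soft penalty.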
Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed https://bit.ly/3Ois03A
Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed

A few weeks ago I posted about GoodToGo ( https://bit.ly/4pI0dXu ), a tool that gives AI agents a deterministic answer to "is this PR ready to merge?" Several people asked about the larger orchestration system I mentioned. This is that system.

I got tired of being a project manager for Claude Code. It writes code fine, but shipping production code is seven or eight jobs: research, planning, design review, implementation, code review, security audit, PR creation, CI babysitting. I was doing all the coordination myself. The agent typed fast. I was still the bottleneck. What I really needed was an orchestrator of orchestrators: swarms of swarms of agents with deterministic quality checks.

So I built metaswarm. It breaks work into phases and assigns each to a specialist swarm orchestrator. It manages handoffs and uses BEADS for deterministic gates that persist across /compact, /clear, and even across sessions. Point it at a GitHub issue or brainstorm with it (it uses Superpowers to ask clarifying questions) and it creates epics, tasks, and dependencies, then runs the full pipeline to a merged PR, including outside code review like CodeRabbit, Greptile, and Bugbot.

The thing that surprised me most was the design review gate. Five agents (PM, Architect, Designer, Security, CTO) review every plan in parallel before a line of code gets written. All five must approve. Three rounds max, then it escalates to a human. I expected a rubber stamp. It catches real design problems, dependency issues, security gaps.

This weekend I pointed it at my backlog. 127 PRs merged. Every one hit 100% test coverage. No human wrote code, reviewed code, or clicked merge. OK, I guided it a bit, mostly helping with plans for some of the epics.

A few learnings: agent checklists are theater. Agents skipped coverage checks, misread thresholds, or decided they didn't apply. Prompts alone weren't enough.
The fix was deterministic gates: BEADS, pre-push hooks, CI jobs, all on top of the agent completion check. The gates block bad code whether or not the agent cooperates.

The agents are just markdown files. No custom runtime, no server, and while I built it on TypeScript, the agents are language-agnostic. You can read all of them, edit them, add your own.

It self-reflects too. After every merged PR, the system extracts patterns, gotchas, and decisions into a JSONL knowledge base. Agents only load entries relevant to the files they're touching. The more it ships, the fewer mistakes it makes. It learns as it goes.

metaswarm stands on two projects: https://bit.ly/465Uggf by Steve Yegge (git-native task tracking and knowledge priming) and https://bit.ly/4tg1fwL by Jesse Vincent (disciplined agentic workflows: TDD, brainstorming, systematic debugging). Both were essential.

Background: I founded Technorati, Linuxcare, and Warmstart; tech exec at Lyft and Reddit. I built metaswarm because I needed autonomous agents that could ship to a production codebase with the same standards I'd hold a human team to.

$ cd my-project-name
$ npx metaswarm init

MIT licensed. IANAL. YMMV. Issues/PRs welcome! https://bit.ly/4tcbDpg February 3, 2026 at 02:18AM
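As a rough illustration of the "deterministic gates" idea in the post above (not BEADS' actual API; the gate names and thresholds here are my own), the key move is that the gate reads results from real tools instead of trusting the agent's self-report, and blocks the merge if any check fails:

```python
def merge_gate(checks):
    """Deterministic merge gate: every named check must report True.

    `checks` maps gate name -> boolean result taken from a real tool
    (coverage report, CI status, security scan), never from the agent's
    own self-assessment. Returns (allowed, list of failed gate names).
    """
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)

def coverage_gate(covered_lines, total_lines, threshold=1.0):
    """Coverage computed from the report itself; threshold 1.0 means 100%."""
    return total_lines > 0 and covered_lines / total_lines >= threshold

# Example: the agent claims the work is done, but the gate reads the numbers.
ok, failed = merge_gate({
    "coverage": coverage_gate(940, 1000),  # 94% < 100% -> gate fails
    "ci": True,
    "security_scan": True,
})
```

Because the inputs come from tooling output rather than prompts, the gate behaves the same whether or not the agent "cooperates".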
Sunday, 1 February 2026
Show HN: ContractShield – AI contract analyser for freelancers https://bit.ly/463VwR1
Show HN: ContractShield – AI contract analyser for freelancers Built this with Claude Code. Analyses freelance contracts for 12 risk categories (payment terms, IP ownership, scope issues, termination clauses, etc.) and flags problems with specific recommendations. 40% of freelancers report getting stiffed by clients, often due to vague contract terms. This tool aims to help catch those issues before signing. Currently free while validating whether this solves a real problem. Would love HN's feedback, especially on: - Accuracy of the analysis - Whether this is actually useful for freelancers - What's missing or could be improved Tech stack: Node.js, Express, Anthropic Claude API, deployed on Railway. https://bit.ly/3ZcvXJv February 2, 2026 at 04:11AM
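For a sense of what "flagging risky clauses by category" can mean at its crudest, here is a keyword-heuristic baseline. This is emphatically not ContractShield's Claude-based analysis; the categories and trigger phrases below are invented purely for illustration:

```python
# Hypothetical risk categories and trigger phrases -- a crude keyword
# baseline for illustration, not ContractShield's actual method.
RISK_RULES = {
    "payment_terms": ["net 90", "upon client satisfaction", "pay when paid"],
    "ip_ownership": ["all work product", "assigns all rights", "work for hire"],
    "termination": ["terminate at any time", "without notice", "sole discretion"],
}

def flag_risks(contract_text):
    """Return {category: [matched phrases]} for phrases found in the text."""
    text = contract_text.lower()
    hits = {}
    for category, phrases in RISK_RULES.items():
        found = [p for p in phrases if p in text]
        if found:
            hits[category] = found
    return hits
```

An LLM-based analyser replaces the brittle phrase list with semantic judgment, which is presumably where the value over a regex pass comes from.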
Show HN: Is AI "good" yet? – tracking HN sentiment on AI coding https://bit.ly/4qfxfhU
Show HN: Is AI "good" yet? – tracking HN sentiment on AI coding A survey tracking developer sentiment on AI-assisted coding through Hacker News posts. https://bit.ly/4q7Kp0j February 2, 2026 at 03:06AM
Show HN: Wikipedia as a doomscrollable social media feed https://bit.ly/3Oj4jbm
Show HN: Wikipedia as a doomscrollable social media feed https://bit.ly/4rj1aXw February 2, 2026 at 01:12AM
Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation https://bit.ly/4qau5fm
Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation

I’ve been running Clawdbot for the last couple of weeks and have genuinely found it useful, but running it scares the crap out of me. OpenClaw has 52+ modules and runs agents with near-unlimited permissions in a single Node process. NanoClaw is ~500 lines of core code; agents run in actual Apple containers with filesystem isolation. Each chat gets its own sandboxed context. This is not a Swiss Army knife. It’s built to match my exact needs. Fork it and make it yours. https://bit.ly/4qTY7oY February 1, 2026 at 11:49PM
Saturday, 31 January 2026
Show HN: Peptide calculators ask the wrong question. I built a better one https://bit.ly/4r36xtW
Show HN: Peptide calculators ask the wrong question. I built a better one

Most peptide calculators ask the wrong question. They ask: how much water are you adding? But in practice, what you actually know is your vial size and your target dose. The water amount should be the output, not the input. It should also make your dose land on a real syringe tick mark, not something like 17.3 units.

I built a peptide calculator that works this way: https://bit.ly/4r36y0Y

What’s different:
- You pick vial size and target dose → reconstitution is calculated for you
- Doses align to actual syringe markings
- Common dose presets per peptide
- Works well on mobile (where this is usually done)
- Supports blends and compounds (e.g. GLOW or CJC-1295 + Ipamorelin)
- You can save your vials. No account required.

Happy to hear feedback or edge cases worth supporting. https://bit.ly/4r36y0Y February 1, 2026 at 03:02AM
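The arithmetic behind "water amount as the output" is simple. Here is a sketch under assumed conventions (a standard U-100 insulin syringe where 100 units = 1 mL, and `units_per_dose` as the round tick mark you want one dose to land on); this is my own reconstruction, not the calculator's actual code:

```python
def reconstitution_volume(vial_mg, dose_mcg, units_per_dose=10,
                          units_per_ml=100):
    """Water (mL) to add so one dose lands exactly on `units_per_dose`
    ticks of a U-100 insulin syringe (100 units = 1 mL).

    Worked example (illustrative numbers only, not dosing advice):
    5 mg vial, 250 mcg target dose, 10 units per dose
    -> doses in vial = 5000 mcg / 250 mcg = 20
    -> total water   = 20 doses * 0.10 mL  = 2.0 mL
    """
    vial_mcg = vial_mg * 1000          # vial contents in mcg
    doses_in_vial = vial_mcg / dose_mcg
    ml_per_dose = units_per_dose / units_per_ml
    return doses_in_vial * ml_per_dose
```

Working backwards from a round tick count guarantees the dose is drawable, which is exactly the inversion the post argues for.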
Show HN: I built a receipt processor for Paperless-ngx https://bit.ly/4tcdRFa
Show HN: I built a receipt processor for Paperless-ngx

Hi all, I wanted a robust way to keep track of my receipts without keeping them in a box, and so I found Paperless. But the existing Paperless AI projects didn't really convert my receipts to usable data, so I created a fork of nutlope's receipthero (actually it's a complete rewrite; the only thing that carries over is the system prompt).

The goal of this project is to be a one-stop shop for automatically detecting tagged docs and converting them to JSON using schema definitions. That includes invoices, .... I can't think of any others right now, maybe you can? If you do, please make an issue for it! I would appreciate any feedback/issues, thanks! (P.S. I made sure it's simple to set up with dockge or a basic docker-compose.yml.)

Repo: https://bit.ly/4a61i5v Tutorial: https://youtu.be/LNlUDtD3og0 February 1, 2026 at 01:17AM
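A minimal sketch of what "converting tagged docs to JSON using schema definitions" might look like. The schema below is hypothetical, not the project's actual definitions; it only shows the shape of a schema-validated extraction target:

```python
# Hypothetical target schema for an extracted receipt -- illustrative
# only, not this project's real schema definitions.
RECEIPT_SCHEMA = {
    "merchant": str,
    "date": str,         # ISO 8601 date string
    "total": float,
    "currency": str,
    "line_items": list,  # e.g. [{"description": str, "amount": float}, ...]
}

def validate(extracted, schema=RECEIPT_SCHEMA):
    """Check an LLM-extracted dict against the schema's keys and types."""
    errors = []
    for key, expected in schema.items():
        if key not in extracted:
            errors.append(f"missing: {key}")
        elif not isinstance(extracted[key], expected):
            errors.append(f"wrong type: {key}")
    return errors
```

Validating the model's output against a fixed schema is what turns free-form OCR-plus-LLM text into data you can actually query.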
Show HN: An Open Source Alternative to Vercel/Render/Netlify https://bit.ly/49Q4u6C
Show HN: An Open Source Alternative to Vercel/Render/Netlify https://bit.ly/4kdJp9B February 1, 2026 at 01:40AM
Friday, 30 January 2026
Show HN: Foundry – Turns your repeated workflows into one-click commands https://bit.ly/4a3z77o
Show HN: Foundry – Turns your repeated workflows into one-click commands https://bit.ly/4a54r5L January 31, 2026 at 01:40AM
Show HN: Using World Models for Consistent AI Filmmaking https://bit.ly/4ab3mcH
Show HN: Using World Models for Consistent AI Filmmaking https://bit.ly/4a7AWjS January 30, 2026 at 10:41PM
Thursday, 29 January 2026
Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser) https://bit.ly/4amiBRb
Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser)

Hi HN, I've been building Mystral Native, a lightweight native runtime that lets you write games in JavaScript/TypeScript using standard Web APIs (WebGPU, Canvas 2D, Web Audio, fetch) and run them as standalone desktop apps. Think "Electron for games" but without Chromium. Or a JS runtime like Node, Deno, or Bun, but optimized for WebGPU (and bundling a window/event system using SDL3).

Why: I originally started a new game engine in WebGPU, and I loved the iteration loop of writing TypeScript and instantly seeing the changes in the browser with hot reloading. After getting something working and shipping a demo, I realized that shipping a whole browser doesn't really work if I also want the same codebase to work on mobile. Sure, I could use a webview, but that's not always a good or consistent experience for users; there are nuances with Safari on iOS supporting WebGPU, but not the same features that Chrome does on desktop. What I really wanted was a WebGPU runtime that is consistent and works on any platform.

I was inspired by Deno's --unsafe-webgpu flag, but I realized that Deno probably wouldn't be a good fit long term because it doesn't support iOS or Android and doesn't bundle a window/event system (they have "bring your own window", but that means writing a lot of custom code for events, dealing with windowing, not to mention more specific things like implementing a Web Audio shim, etc.). So that got me down the path of building a native runtime specifically for games, and that's Mystral Native.

So now with Mystral Native, I can have the same developer experience (write JS, use shaders in WGSL, call requestAnimationFrame) but get a real native binary I can ship to players on any platform without requiring a webview or a browser. No 200MB Chromium runtime, no CEF overhead, just the game code and a ~25MB runtime.
What it does:
- Full WebGPU via Dawn (Chrome's implementation) or wgpu-native (Rust)
- Native window & events via SDL3
- Canvas 2D support (Skia), Web Audio (SDL3), fetch (file/http/https)
- V8 for JS (same engine as Chrome/Node); also supports QuickJS and JSC
- ES modules, TypeScript via SWC
- Compile to single binary (think "pkg"): `mystral compile game.js --include assets -o my-game`
- macOS .app bundles with code signing, Linux/Windows standalone executables
- Embedding API for iOS and Android (JSC/QuickJS + wgpu-native)

It's early alpha: the core rendering path works well, and I've tested on Mac, Linux (Ubuntu 24.04), and Windows 11, plus some custom builds for iOS & Android to validate that they can work, but there's plenty to improve. Would love to get some feedback and see where it can go! MIT licensed.

Repo: https://bit.ly/4rmOWx5 Docs: https://bit.ly/46oiPVx https://bit.ly/4rmOWx5 January 27, 2026 at 07:33PM
Show HN: Free Facebook Video Downloader with Original Audio Quality https://bit.ly/3NL23cV
Show HN: Free Facebook Video Downloader with Original Audio Quality A free, web-based Facebook video downloader that actually preserves the original audio - something most Facebook downloaders fail to do. Built with Next.js and yt-dlp, it offers a clean, no-ads experience for downloading Facebook videos in multiple qualities. https://bit.ly/4t8AgDo January 30, 2026 at 03:22AM
Show HN: Play Zener Cards https://bit.ly/4rkUJTN
Show HN: Play Zener Cards just play zener cards. don't judge :) https://bit.ly/4rhsS6P January 30, 2026 at 01:39AM
Wednesday, 28 January 2026
Show HN: Codex.nvim – Codex inside Neovim (no API key required) https://bit.ly/4aq7cin
Show HN: Codex.nvim – Codex inside Neovim (no API key required)

Hi HN! I built codex.nvim, an IDE-style Neovim integration for Codex.

Highlights:
- Works with OpenAI Codex plans (no API key required)
- Fully integrated in Neovim (embedded terminal workflow)
- Bottom-right status indicator shows busy/wait state
- Send selections or file tree context to Codex quickly

Repo: https://bit.ly/46kNNhf

Why I built this: I wanted to use Codex comfortably inside Neovim without relying on the API. Happy to hear feedback and ideas! https://bit.ly/46kNNhf January 29, 2026 at 07:17AM
Show HN: Shelvy Books https://bit.ly/4aivwDI
Show HN: Shelvy Books Hey HN! I built a little side project I wanted to share. Shelvy is a free, visual bookshelf app where you can organize books you're reading, want to read, or have finished. Sign in to save your own collection. Not monetized, no ads, no tracking beyond basic auth. Just a fun weekend project that grew a bit. Live: https://bit.ly/45yNLSL Would love any feedback on the UX or feature ideas! https://bit.ly/45yNLSL January 29, 2026 at 02:16AM
Show HN: Drum machine VST made with React/C++ https://bit.ly/45FQ6eK
Show HN: Drum machine VST made with React/C++ Hi HN! We just launched our drum machine VST this month! We will be updating it with many new synthesis models and unique features. Check it out, join our Discord, and show us what you made! https://bit.ly/49YmzOv January 27, 2026 at 06:03AM
Show HN: Frame – Managing projects, tasks, and context for Claude Code https://bit.ly/4rcuAqe
Show HN: Frame – Managing projects, tasks, and context for Claude Code

I built Frame to better manage the projects I develop with Claude Code, to bring a standard to my Claude Code projects, to improve project and task planning, and to reduce context and memory loss. In its current state, Frame works entirely locally. You don’t need to enter any API keys or anything like that. You can run Claude Code directly using the terminal inside Frame.

Why am I not using existing IDEs? Simply because, for me, I no longer need them. What I need is an interface centered around the terminal, not a code editor. I initially built something that allowed me to place terminals in a grid layout, but then I decided to take it further. I realized I also needed to manage my projects and preserve context. I’m still at a very early stage, but even being able to build the initial pieces I had in mind within 5–6 days, using Claude Code itself, feels kind of crazy.

What can you do with Frame? You can start a brand-new project or turn an existing one into a Frame project. For this, Frame creates a set of Markdown and JSON files with rules I defined. These files exist mainly to manage tasks and preserve context. You can manually add project-related tasks through the UI. I haven’t had the chance to test very complex or long-running scenarios yet, but from what I’ve seen, Claude Code often asks questions like: “Should I add this as a task to tasks.json?” or “Should we update project_notes.md after this project decision?” I recommend saying yes to these.

I also created a JSON file that keeps track of the project structure, down to function-level details. This part is still very raw. In the future, I plan to experiment with different data structures to help AI understand the project more quickly and effectively. As mentioned, you can open your terminals in either a grid or tab view. I added options up to a 3×3 grid. Since the project is open source, you can modify it based on your own needs.
I also added a panel where you can view and manage plugins. For code files or other files, I included a very simple editor. This part is intentionally minimal and quite basic for now. Based on my own testing, I haven’t encountered any major bugs, but there might be some. I apologize in advance if you run into any issues.

My core goal is to establish a standard for AI-assisted projects and make them easier to manage. I’m very open to your ideas, support, and feedback. You can see more details on GitHub: https://bit.ly/4bpLWva January 29, 2026 at 12:04AM
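To make the tasks.json idea concrete, here is a purely hypothetical shape for such a file. Frame defines its own rules for these files, so every key and value below is my assumption; it only illustrates a machine-readable task list that can survive across sessions:

```python
import json

# Hypothetical shape for a tasks.json file -- the real layout is defined
# by Frame's own rules; this sketch just shows a task list with explicit
# dependencies that an agent can read back after context is cleared.
def new_task(task_id, title, status="todo", depends_on=None):
    return {"id": task_id, "title": title, "status": status,
            "depends_on": depends_on or []}

tasks = {
    "project": "my-frame-project",  # hypothetical project name
    "tasks": [
        new_task("T1", "Set up grid terminal layout"),
        new_task("T2", "Persist project notes", depends_on=["T1"]),
    ],
}
tasks_json = json.dumps(tasks, indent=2)
```

Keeping state in a plain JSON file rather than in the conversation is what lets task context outlive any single Claude Code session.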
Tuesday, 27 January 2026
Show HN: How would you decide famous SCOTUS cases? https://bit.ly/4rht464
Show HN: How would you decide famous SCOTUS cases? https://bit.ly/4ri8xOU January 28, 2026 at 03:26AM
Show HN: Fuzzy Studio – Apply live effects to videos/camera https://bit.ly/4bnCybq
Show HN: Fuzzy Studio – Apply live effects to videos/camera

Back story: I've been learning computer graphics on the side for several years now and get so much joy from smooshing and stretching images/videos. I hope you can get a little joy as well with Fuzzy Studio! Try applying effects to your camera! My housemates and I have giggled so much making faces with weird effects! Nothing gets sent to the server; everything is done in the browser! Amazing what we can do. I've only tested on macOS... apologies if your browser/OS is not supported (yet). https://bit.ly/3LBeE1K January 27, 2026 at 04:16PM