Nigeria No1. Music site And Complete Entertainment portal for Music Promotion WhatsApp:- +2349077287056
Friday, 30 January 2026
Show HN: Using World Models for Consistent AI Filmmaking https://bit.ly/4ab3mcH
Show HN: Using World Models for Consistent AI Filmmaking https://bit.ly/4a7AWjS January 30, 2026 at 10:41PM
Thursday, 29 January 2026
Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser) https://bit.ly/4amiBRb
Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser) Hi HN, I've been building Mystral Native, a lightweight native runtime that lets you write games in JavaScript/TypeScript using standard Web APIs (WebGPU, Canvas 2D, Web Audio, fetch) and run them as standalone desktop apps. Think "Electron for games" but without Chromium, or a JS runtime like Node, Deno, or Bun but optimized for WebGPU (and bundling a window/event system using SDL3).

Why: I originally started a new game engine in WebGPU, and I loved the iteration loop of writing TypeScript and instantly seeing the changes in the browser with hot reloading. After getting something working and shipping a demo, I realized that shipping a whole browser doesn't really work if I also want the same codebase to work on mobile. Sure, I could use a webview, but that's not always a good or consistent experience for users: Safari on iOS supports WebGPU, but not the same feature set that Chrome does on desktop. What I really wanted was a WebGPU runtime that is consistent and works on any platform.

I was inspired by Deno's --unsafe-webgpu flag, but I realized that Deno probably wouldn't be a good fit long term: it doesn't support iOS or Android, and it doesn't bundle a window/event system (it has "bring your own window", but that means writing a lot of custom code for events and windowing, not to mention more specific things like implementing a Web Audio shim). That got me down the path of building a native runtime specifically for games, and that's Mystral Native.

With Mystral Native, I can have the same developer experience (write JS, use shaders in WGSL, call requestAnimationFrame) but get a real native binary I can ship to players on any platform without requiring a webview or a browser. No 200MB Chromium runtime, no CEF overhead, just the game code and a ~25MB runtime.
What it does:
- Full WebGPU via Dawn (Chrome's implementation) or wgpu-native (Rust)
- Native window & events via SDL3
- Canvas 2D support (Skia), Web Audio (SDL3), fetch (file/http/https)
- V8 for JS (same engine as Chrome/Node); also supports QuickJS and JSC
- ES modules, TypeScript via SWC
- Compile to a single binary (think "pkg"): `mystral compile game.js --include assets -o my-game`
- macOS .app bundles with code signing; Linux/Windows standalone executables
- Embedding API for iOS and Android (JSC/QuickJS + wgpu-native)

It's early alpha: the core rendering path works well, and I've tested on Mac, Linux (Ubuntu 24.04), and Windows 11, plus some custom builds for iOS & Android to validate that they can work, but there's plenty to improve. Would love to get some feedback and see where it can go! MIT licensed. Repo: https://bit.ly/4rmOWx5 Docs: https://bit.ly/46oiPVx https://bit.ly/4rmOWx5 January 27, 2026 at 07:33PM
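As a rough sketch of the workflow the post describes (write JS against standard Web APIs, run it anywhere a requestAnimationFrame loop exists), here is a minimal fixed-timestep game loop. The update logic is invented for illustration and is not from the Mystral Native docs; the actual rAF wiring is shown in comments since it only runs inside a browser-like runtime.

```javascript
// Hypothetical sketch: a pure, fixed-timestep update that could drive a
// game loop under any runtime exposing requestAnimationFrame.
const STEP_MS = 16; // ~60 updates per second

// Pure update: advance a position by its velocity for one fixed step.
function update(state) {
  return { x: state.x + state.vx * (STEP_MS / 1000), vx: state.vx };
}

// Accumulator pattern: run as many fixed steps as the elapsed time allows,
// so simulation speed stays independent of frame rate.
function advance(state, elapsedMs) {
  let acc = elapsedMs;
  while (acc >= STEP_MS) {
    state = update(state);
    acc -= STEP_MS;
  }
  return state;
}

// In a runtime with Web APIs (browser, or a native runtime like the one
// described in the post):
// let last = performance.now();
// function frame(now) {
//   gameState = advance(gameState, now - last);
//   last = now;
//   requestAnimationFrame(frame);
// }
// requestAnimationFrame(frame);
```

The appeal of the approach in the post is that this exact code runs unchanged in a browser during development and in a shipped native binary.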
Show HN: Free Facebook Video Downloader with Original Audio Quality https://bit.ly/3NL23cV
Show HN: Free Facebook Video Downloader with Original Audio Quality A free, web-based Facebook video downloader that actually preserves the original audio - something most Facebook downloaders fail to do. Built with Next.js and yt-dlp, it offers a clean, no-ads experience for downloading Facebook videos in multiple qualities. https://bit.ly/4t8AgDo January 30, 2026 at 03:22AM
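Preserving original audio with yt-dlp typically comes down to format selection: download the best audio stream separately and merge it without re-encoding. A hedged sketch of how a server like this might build the yt-dlp invocation (the helper name and exact format string are assumptions, not the project's actual code):

```javascript
// Hypothetical helper: build a yt-dlp argv that keeps the original audio
// track by selecting separate best video+audio streams and merging them.
// Assumptions, not the project's real code; yt-dlp must be installed to run.
function ytDlpArgs(url, maxHeight) {
  return [
    "-f", `bestvideo[height<=${maxHeight}]+bestaudio/best`, // audio is merged, not re-encoded
    "--merge-output-format", "mp4",
    url,
  ];
}

// A Next.js API route would then spawn the process, e.g.:
// const { spawn } = require("child_process");
// spawn("yt-dlp", ytDlpArgs(videoUrl, 1080));
```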
Show HN: Play Zener Cards https://bit.ly/4rkUJTN
Show HN: Play Zener Cards just play zener cards. don't judge :) https://bit.ly/4rhsS6P January 30, 2026 at 01:39AM
Wednesday, 28 January 2026
Show HN: Codex.nvim – Codex inside Neovim (no API key required) https://bit.ly/4aq7cin
Show HN: Codex.nvim – Codex inside Neovim (no API key required) Hi HN! I built codex.nvim, an IDE-style Neovim integration for Codex.

Highlights:
- Works with OpenAI Codex plans (no API key required)
- Fully integrated in Neovim (embedded terminal workflow)
- Bottom-right status indicator shows busy/wait state
- Send selections or file tree context to Codex quickly

Repo: https://bit.ly/46kNNhf Why I built this: I wanted to use Codex comfortably inside Neovim without relying on the API. Happy to hear feedback and ideas! https://bit.ly/46kNNhf January 29, 2026 at 07:17AM
Show HN: Shelvy Books https://bit.ly/4aivwDI
Show HN: Shelvy Books Hey HN! I built a little side project I wanted to share. Shelvy is a free, visual bookshelf app where you can organize books you're reading, want to read, or have finished. Sign in to save your own collection. Not monetized, no ads, no tracking beyond basic auth. Just a fun weekend project that grew a bit. Live: https://bit.ly/45yNLSL Would love any feedback on the UX or feature ideas! https://bit.ly/45yNLSL January 29, 2026 at 02:16AM
Show HN: Drum machine VST made with React/C++ https://bit.ly/45FQ6eK
Show HN: Drum machine VST made with React/C++ Hi HN! We just launched our drum machine VST this month! We will be updating it with many new synthesis models and unique features. Check it out, join our Discord, and show us what you made! https://bit.ly/49YmzOv January 27, 2026 at 06:03AM
Show HN: Frame – Managing projects, tasks, and context for Claude Code https://bit.ly/4rcuAqe
Show HN: Frame – Managing projects, tasks, and context for Claude Code I built Frame to better manage the projects I develop with Claude Code: to bring a standard to my Claude Code projects, to improve project and task planning, and to reduce context and memory loss. In its current state, Frame works entirely locally. You don't need to enter any API keys or anything like that. You can run Claude Code directly using the terminal inside Frame.

Why am I not using existing IDEs? Simply because, for me, I no longer need them. What I need is an interface centered around the terminal, not a code editor. I initially built something that allowed me to place terminals in a grid layout, but then I decided to take it further. I realized I also needed to manage my projects and preserve context. I'm still at a very early stage, but even being able to build the initial pieces I had in mind within 5–6 days, using Claude Code itself, feels kind of crazy.

What can you do with Frame? You can start a brand-new project or turn an existing one into a Frame project. For this, Frame creates a set of Markdown and JSON files with rules I defined. These files exist mainly to manage tasks and preserve context. You can manually add project-related tasks through the UI. I haven't had the chance to test very complex or long-running scenarios yet, but from what I've seen, Claude Code often asks questions like: "Should I add this as a task to tasks.json?" or "Should we update project_notes.md after this project decision?" I recommend saying yes to these.

I also created a JSON file that keeps track of the project structure, down to function-level details. This part is still very raw. In the future, I plan to experiment with different data structures to help AI understand the project more quickly and effectively. As mentioned, you can open your terminals in either a grid or tab view. I added options up to a 3×3 grid. Since the project is open source, you can modify it based on your own needs.

I also added a panel where you can view and manage plugins. For code files or other files, I included a very simple editor. This part is intentionally minimal and quite basic for now. Based on my own testing, I haven't encountered any major bugs, but there might be some. I apologize in advance if you run into any issues. My core goal is to establish a standard for AI-assisted projects and make them easier to manage. I'm very open to your ideas, support, and feedback. You can see more details on GitHub: https://bit.ly/4bpLWva January 29, 2026 at 12:04AM
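The post mentions tasks.json and project_notes.md but doesn't show their format. Purely as an illustration, a task ledger of the kind described might look like this; every field name here is invented, not Frame's actual schema:

```json
{
  "project": "my-app",
  "tasks": [
    {
      "id": 1,
      "title": "Add login form validation",
      "status": "in_progress",
      "context": "Decision recorded in project_notes.md: validate on both client and server"
    },
    {
      "id": 2,
      "title": "Set up CI pipeline",
      "status": "todo",
      "context": ""
    }
  ]
}
```

A structure like this is easy for both the UI and Claude Code to read and append to, which is presumably why the post leans on Markdown and JSON files for context.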
Tuesday, 27 January 2026
Show HN: How would you decide famous SCOTUS cases? https://bit.ly/4rht464
Show HN: How would you decide famous SCOTUS cases? https://bit.ly/4ri8xOU January 28, 2026 at 03:26AM
Show HN: Fuzzy Studio – Apply live effects to videos/camera https://bit.ly/4bnCybq
Show HN: Fuzzy Studio – Apply live effects to videos/camera Back story: I've been learning computer graphics on the side for several years now, and I get so much joy from smooshing and stretching images/videos. I hope you can get a little joy as well with Fuzzy Studio! Try applying effects to your camera! My housemates and I have giggled so much making faces with weird effects! Nothing gets sent to the server; everything is done in the browser! Amazing what we can do. I've only tested on macOS... apologies if your browser/OS is not supported (yet). https://bit.ly/3LBeE1K January 27, 2026 at 04:16PM
Show HN: ACME Proxy using step-ca https://bit.ly/3NAOQ6v
Show HN: ACME Proxy using step-ca https://bit.ly/4k15F6u January 27, 2026 at 11:12PM
Monday, 26 January 2026
Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. (Not Crank) https://bit.ly/4rhe9cd
Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. (Not Crank) The problem with LLMs isn't intelligence; it's amnesia and dishonesty.

Hey HN, I've spent the last few months building Remember-Me, an open-source "Sovereign Brain" stack designed to run entirely offline on consumer hardware. The core thesis is simple: don't rent your cognition. Most RAG (Retrieval Augmented Generation) implementations are just "grep for embeddings". They are messy, imprecise, and prone to hallucination. I wanted to solve the context-integrity problem at the architectural layer.

The tech stack (how it works):

QDMA (Quantum Dream Memory Architecture): instead of a flat vector DB, it uses a hierarchical projection engine. It separates "Hot" (recall) from "Cold" (storage) memory, allowing for effectively infinite context window management via compression.

CSNP (Context Switching Neural Protocol), the hallucination killer: this is the most important part. Every memory fragment is hashed into a Merkle chain. When the LLM retrieves context, the system cryptographically verifies the retrieval against the immutable ledger. If the hash doesn't match the chain, the retrieval is rejected. Result: the AI literally cannot "make things up" about your past because it is mathematically constrained to the ledger.

Local inference: built on top of the llama.cpp server. It runs Llama-3 (or any GGUF) locally. No API keys. No data leaving your machine.

Features:
- Zero-dependency: runs on Windows/Linux with just Python and a GPU (or CPU)
- Visual interface: includes a Streamlit-based "Cognitive Interface" to visualize memory states
- Open source: MIT License

This is an attempt to give agency back to the user. I believe that if we want AGI, it needs to be owned by us, not rented via an API. Repository: https://bit.ly/49BNC3c I'd love to hear your feedback on the Merkle-verification approach. Does constraining the context window effectively solve the "trust" issue for you?
It's fully working and fully tested. If you tried to git clone before without luck (this is not my first Show HN on this), feel free to try again. To everyone who hates AI slop, greedy corporations, and having their private data stuck on cloud servers: you're welcome. Cheers, Mohamad

Author's note: updated successfully. Framework 50 is active. For anyone passing by: yes, this is a big deal. Eliminating AI hallucination is a 60-billion-dollar market problem, and I'm giving that, plus sovereign control of your data, plus the capability to do high-end research via Framework 50 (including advanced scientific research), for free, under an MIT license. If you don't take advantage of this, more fool you; if you do, welcome to the future.

P.S.: What do I get from lying? I got 36 stars on the repo, many from senior engineers at Fortune 500 companies. If you can't tell the real deal from a lie, keep it moving. https://bit.ly/49BNC3c January 27, 2026 at 05:56AM
Show HN: LocalPass offline password manager. Zero cloud. Zero telemetry https://bit.ly/49YyqvY
Show HN: LocalPass offline password manager. Zero cloud. Zero telemetry I’ve released LocalPass — a local‑first, offline password manager with zero cloud, zero telemetry, and zero vendor lock‑in. 100% local storage, 100% open‑source. https://bit.ly/3M0oES2 January 26, 2026 at 11:38PM
Sunday, 25 January 2026
Show HN: FaceTime-style calls with an AI Companion (Live2D and long-term memory) https://bit.ly/45rQaP5
Show HN: FaceTime-style calls with an AI Companion (Live2D and long-term memory) Hi HN, I built Beni ( https://bit.ly/4q1ZTTA ), a web app for real-time video calls with an AI companion. The idea started as a pretty simple question: text chatbots are everywhere, but they rarely feel present. I wanted something closer to a call, where the character actually reacts in real time (voice, timing, expressions), not just "type, wait, reply".

Beni is basically:
- A Live2D avatar that animates during the call (expressions + motion driven by the conversation)
- Real-time voice conversation (streaming response, not "wait 10 seconds then speak")
- Long-term memory so the character can keep context across sessions

The hardest part wasn't generating text, it was making the whole loop feel synchronized: mic input, model response, TTS audio, and Live2D animation all need to line up or it feels broken immediately. I ended up spending more time on state management, latency, and buffering than on prompts.

Some implementation details (happy to share more if anyone's curious):
- Browser-based real-time calling, with audio streaming and client-side playback control
- Live2D rendering on the front end, with animation hooks tied to speech / state
- A memory layer that stores lightweight user facts/preferences and conversation summaries to keep continuity

Current limitation: sign-in is required today (to persist memory and prevent abuse). I'm adding a guest mode soon for faster try-out and working on mobile view now.

What I'd love feedback on:
- Does the "real-time call" loop feel responsive enough, or still too laggy?
- Any ideas for better lip sync / expression timing on 2D/3D avatars in the browser?

Thanks, and I'll be around in the comments. https://bit.ly/4q1ZTTA January 26, 2026 at 12:13AM
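The synchronization problem described above is commonly handled with an explicit call-state machine, so the UI, TTS playback, and avatar animation all read one source of truth instead of guessing. The sketch below is a generic illustration; the states and transitions are invented, not Beni's actual code:

```javascript
// Generic call-state machine for a real-time voice-call loop.
// States and transitions are hypothetical, for illustration only.
const TRANSITIONS = {
  idle:      { micActive: "listening" },
  listening: { utteranceEnd: "thinking" },
  thinking:  { firstAudioChunk: "speaking" },  // streaming: speak on first chunk
  speaking:  { playbackDone: "idle", bargeIn: "listening" }, // user interrupts
};

// Advance the machine; events that don't apply in the current state are
// ignored, which keeps late/duplicate events from corrupting the loop.
function step(state, event) {
  const next = TRANSITIONS[state] && TRANSITIONS[state][event];
  return next || state;
}
```

Keying avatar hooks (mouth movement while speaking, idle sway otherwise) off these states is one way to keep Live2D animation in lockstep with audio playback.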
Show HN: Spine – an execution-centric backend framework for Go https://bit.ly/45w2w8P
Show HN: Spine – an execution-centric backend framework for Go Hello Hacker News, greetings from South Korea. I'm a backend engineer working primarily with Go, and I'd like to share a framework I've been building to solve a problem I've repeatedly encountered in production systems.

In my day-to-day work, our backend is built on top of Echo. Echo is fast and reliable as an HTTP transport, but its high level of freedom leaves architectural decisions almost entirely to individual developers. Over time, this led to a system where execution flow and responsibility boundaries varied depending on who last touched a feature. Maintenance became difficult not because the code was incorrect, but because how requests actually executed was no longer obvious.

I looked for a Go framework that could provide a clear execution model and structural constraints, similar to what Spring or NestJS offer. I couldn't find one that fit. Moving to Spring or NestJS would also mean giving up some of Go's strengths (simplicity, performance, and explicit control), so I decided to build one instead.

Spine is an execution-centric backend framework for Go. It aims to provide enterprise-grade structure while deliberately avoiding hidden magic.

What Spine provides:
• An IoC container with explicit, constructor-based dependency injection
• Interceptors with well-defined execution phases (before, after, completion)
• First-class support for both HTTP requests and event-driven execution
• No annotations, no implicit behavior, no convention-driven wiring

The core idea: execution first. The key difference is Spine's execution model. Every request, HTTP or event, flows through a single, explicit Pipeline. The Pipeline is the only component that determines execution order. Actual method calls are handled by a separate Invoker, keeping execution control and invocation strictly separated.

Because of this structure:
• Execution order is explainable by reading the code
• Cross-cutting concerns live in the execution flow, not inside controllers
• Controllers express use cases only, not orchestration logic
• You can understand request handling by looking at main.go

This design trades some convenience for clarity. In return, it offers stronger control as the system grows in size and complexity. My goal with Spine isn't just to add another framework to the Go ecosystem, but to start a conversation: how much execution flow do modern web frameworks hide, and when does that become a maintenance cost?

The framework itself is currently written in Korean. If English support or internationalization is important to you, feel free to open an issue; I plan to prioritize it based on community interest. You can find more details, a basic HTTP example, and a simple Kafka-based MSA demo here: Repository: https://bit.ly/3NFoyjl Thanks for reading. I'd really appreciate your feedback. https://bit.ly/4qHQdyR January 26, 2026 at 12:51AM
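The before/after/completion interceptor phases around a single pipeline can be sketched generically. The snippet below is a language-agnostic illustration written in JavaScript for brevity; Spine itself is Go, and none of these names are its real API:

```javascript
// Generic sketch of one explicit pipeline with phased interceptors.
// Hypothetical names; Spine's actual (Go) API may differ entirely.
function runPipeline(interceptors, invoke, request) {
  const trace = [];
  for (const ic of interceptors) trace.push(ic.name + ":before");
  let result;
  try {
    result = invoke(request); // a separate "Invoker" performs the actual call
    // "after" runs in reverse order, like unwinding nested middleware
    for (const ic of [...interceptors].reverse()) trace.push(ic.name + ":after");
  } finally {
    // "completion" always runs, success or failure (cleanup, metrics, etc.)
    for (const ic of [...interceptors].reverse()) trace.push(ic.name + ":completion");
  }
  return { result, trace };
}
```

The point of making the pipeline the single place that orders these phases is that the trace above is fully determined by reading one function, which is the "execution order is explainable by reading the code" property the post claims.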
Show HN: I built an app that blocks social media until you read Quran daily https://bit.ly/49FkeZZ
Show HN: I built an app that blocks social media until you read Quran daily Hey HN, I'm a solo developer from Nigeria. I built Quran Unlock, an app that blocks distracting apps (TikTok, Instagram, etc.) until you complete your daily Quran reading. The idea came from my own struggle with phone addiction. I wanted to read Quran daily but kept getting distracted. So I built this for myself, then shared it.

Some stats after 2 months:
- 123K+ users
- 64.9% returning user rate
- 31M events tracked

Tech stack:
- React Native
- Firebase (Auth, Firestore, Analytics, Cloud Messaging)
- RevenueCat for subscriptions
- iOS Screen Time API + Android UsageStats

App Store: https://apple.co/3ZBBHfS Play Store: https://bit.ly/49Gb5R1... Would love feedback from the HN community! January 25, 2026 at 11:51PM
Saturday, 24 January 2026
Show HN: C From Scratch – Learn safety-critical C with prove-first methodology https://bit.ly/466rkV1
Show HN: C From Scratch – Learn safety-critical C with prove-first methodology Seven modules teaching C the way safety-critical systems are actually built: MATH → STRUCT → CODE → TEST. Each module answers one question: Does it exist? (Pulse), Is it normal? (Baseline), Is it regular? (Timing), Is it trending? (Drift), Which sensor to trust? (Consensus), How to handle overflow? (Pressure), What do we do about it? (Mode). Every module is closed (no dependencies), total (handles all inputs), deterministic, and O(1). 83 tests passing. Built this after 30 years in UNIX systems. Wanted something that teaches the rigour behind certified systems without requiring a decade of on-the-job learning first. MIT licensed. Feedback welcome. https://bit.ly/4rxhjJ9 January 25, 2026 at 01:17AM
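The "closed, total, deterministic, O(1)" discipline the course describes is a property you can demonstrate in any language. The course itself teaches it in C; purely to illustrate the idea, here is a "total" presence check in JavaScript (names invented, loosely echoing the Pulse module's "does it exist?" question):

```javascript
// Illustration of a *total* function: defined for every possible input,
// deterministic, constant-time, and it never throws. Invented names; this
// is not code from the course, which works in C.
function pulse(reading) {
  // Handle every input class explicitly; no input reaches an error path.
  if (typeof reading !== "number" || Number.isNaN(reading)) {
    return { ok: false, reason: "not_a_number" };
  }
  if (!Number.isFinite(reading)) {
    return { ok: false, reason: "out_of_range" };
  }
  return { ok: true, reason: "present" };
}
```

In safety-critical C the same discipline shows up as exhaustive switch statements, explicit range checks, and no unbounded loops, which is what makes the behavior provable for all inputs.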
Show HN: I built a Mac OS App to upload your screenshots to S3 https://bit.ly/4jWavlH
Show HN: I built a Mac OS App to upload your screenshots to S3 I've been building a bitly alternative in public and built a free side tool to upload screenshots to S3. I always thought screenshot apps charged way too much for this, so I was pretty happy to finally get around to building it. It automatically generates short links and uploads to any S3-compatible storage you own. Here is the link: https://bit.ly/45shEUJ Try it out, all feedback is welcome :) https://bit.ly/45shEUJ January 25, 2026 at 12:40AM
Friday, 23 January 2026
Show HN: Open-source Figma design to code https://bit.ly/4rdMSr3
Show HN: Open-source Figma design to code Hi HN, founders of VibeFlow (YC S25) here. We mostly work on backend and workflow tooling, but we needed a way to turn Figma designs into frontend code as a kickstart for prototyping. It takes a Figma frame and converts it into React + Tailwind components (plus assets). If you want to try it: You can run it locally or use it via the VibeFlow UI to poke at it without setup ( https://bit.ly/4bhK1sq ) https://bit.ly/4k65dUM January 24, 2026 at 07:09AM
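Design-to-code tools of this kind generally walk the design's node tree and map layout properties onto utility classes. As a toy illustration of the Figma-frame to React + Tailwind idea (the node shape and mapping here are invented, not VibeFlow's actual pipeline or Figma's real API):

```javascript
// Toy sketch: map a simplified design node to JSX text with Tailwind
// classes. Invented node shape; real Figma nodes are far richer.
function nodeToJsx(node) {
  const cls = [
    node.layout === "row" ? "flex flex-row" : "flex flex-col", // auto-layout direction
    `gap-${node.gap}`,      // item spacing -> Tailwind gap scale
    `p-${node.padding}`,    // padding -> Tailwind padding scale
  ].join(" ");
  const children = (node.children || [])
    .map((c) => (typeof c === "string" ? c : nodeToJsx(c)))
    .join("");
  return `<div className="${cls}">${children}</div>`;
}
```

A real converter also has to handle constraints, text styles, images, and component variants, which is where most of the difficulty lives.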
Show HN: Flux, A Python-like language in Rust to solve ML orchestration overhead https://bit.ly/49G6gqM
Show HN: Flux, A Python-like language in Rust to solve ML orchestration overhead https://bit.ly/4tfkrLp January 24, 2026 at 03:24AM