Thursday, 30 April 2026
Show HN: What happens when you load a webpage (Interactive) https://bit.ly/3QWt7ax
Show HN: What happens when you load a webpage (Interactive) https://bit.ly/3ONPYod April 27, 2026 at 08:26PM
Show HN: Gemini free tier is all you need https://bit.ly/3P1mt26
Show HN: Gemini free tier is all you need https://bit.ly/4n3cKFj May 1, 2026 at 12:45AM
Show HN: Code on the Go, an IDE for Android with On-Device Debugging (GPLv3) https://bit.ly/4ueQ3jI
Show HN: Code on the Go, an IDE for Android with On-Device Debugging (GPLv3) Hi HN, I’m Hal, the CTO at App Dev for All. I wanted to share a technical problem we worked on over the past year and how we approached it. We’ve been building Code on the Go, a full-featured IDE that runs entirely on an Android phone. No laptop, no ADB connection, no cloud build server. It compiles projects locally on the device using Gradle, supports Java and Kotlin with LSP, and includes a debugger that runs on the same phone as the app being tested. The most interesting and challenging part ended up being the debugger. The Android OS has a rigorous security model, which can get in the way of traditional inter-process communication. Android debugging assumes ADB, which assumes two machines. We bypassed ADB entirely, attaching the JDWP agent to the target process at launch and routing its output to our debugger over a local socket. We used a scoped adaptation of the Shizuku project to get the necessary system access without requiring root. We also had a few other technical challenges with Code on the Go: Sketch-to-UI (generates Android XML from a photo of a hand-drawn layout, runs fully offline with Yolo), an optional Gemini-powered coding agent (opt-in, requires your own API key), and a plugin system with isolated class loaders. One of our pre-release community members has used it to build and publish a Sinhala/English keyboard app to the Play Store, built entirely on his phone. This served as our test case for Play Store compatibility. We are a philanthropic venture. No ads, no tracking, no subscription. License is GPLv3. APK: https://bit.ly/4dgfOdH
Source: https://bit.ly/423N8P1 Happy to answer questions on the implementation. https://bit.ly/3QGOTze April 30, 2026 at 11:17PM
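The ADB-free attach described above ultimately means speaking JDWP's wire protocol over the local socket. As a rough, hedged sketch of that transport layer (not Code on the Go's actual code), the handshake bytes and the 11-byte command-packet header defined by the JDWP spec look like this:

```python
import struct

HANDSHAKE = b"JDWP-Handshake"  # 14 ASCII bytes, echoed back by the target VM

def jdwp_command_packet(packet_id: int, command_set: int, command: int,
                        data: bytes = b"") -> bytes:
    """Build a JDWP command packet: 11-byte header plus payload.

    Header layout (all big-endian):
      length(4) id(4) flags(1) command-set(1) command(1)
    where length counts the header itself.
    """
    length = 11 + len(data)
    return struct.pack(">IIBBB", length, packet_id, 0x00, command_set, command) + data

# VirtualMachine/Version (command set 1, command 1) carries no payload.
pkt = jdwp_command_packet(1, 1, 1)
assert len(pkt) == 11
```

A debugger would write HANDSHAKE, read back the 14-byte echo, then exchange framed packets like the VirtualMachine/Version request above; routing those frames between the JDWP agent and the debugger UI is the part their Shizuku-scoped setup handles.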
Wednesday, 29 April 2026
Show HN: Qumulator – quantum circuit simulator, 1000 qubits, no GPU https://bit.ly/42IreRm
Show HN: Qumulator – quantum circuit simulator, 1000 qubits, no GPU https://bit.ly/3PeVmAR April 27, 2026 at 04:56PM
Show HN: SigMap – 81.1% retrieval hit@5, 96.9% token reduction, zero deps https://bit.ly/4tKdBNG
Show HN: SigMap – 81.1% retrieval hit@5, 96.9% token reduction, zero deps https://bit.ly/4eNihh2 April 30, 2026 at 02:02AM
Tuesday, 28 April 2026
Show HN: 49Agents – 2D Canvas IDE for Orchestrating Agents, Repos, Issues https://bit.ly/3Ou3X2s
Show HN: 49Agents – 2D Canvas IDE for Orchestrating Agents, Repos, Issues Beads tables (Steve Yegge's) for issue tracking. Can view git trees, terminals, issue tables, notes, and files all on one screen. Can connect multiple machines via a private network (like Tailscale). https://bit.ly/4cAathf April 29, 2026 at 12:34AM
Show HN: Auto-Architecture: Karpathy's Loop, Pointed at a CPU https://bit.ly/4cFm1Qq
Show HN: Auto-Architecture: Karpathy's Loop, Pointed at a CPU https://bit.ly/4n7Myt5 April 28, 2026 at 06:12PM
Show HN: I built another to-do list. But it does a lot https://bit.ly/4cQE6JZ
Show HN: I built another to-do list. But it does a lot https://apple.co/4wdvQwG April 28, 2026 at 11:58PM
Monday, 27 April 2026
Show HN: Waiting for LLMs Sucks – Give your user a game https://bit.ly/3OQdJvQ
Show HN: Waiting for LLMs Sucks – Give your user a game Give your user a game while they wait for the LLM to return a result. https://bit.ly/491Ugz5 April 28, 2026 at 03:45AM
Show HN: AgentSwift – Open-source iOS builder agent https://bit.ly/4eeI01F
Show HN: AgentSwift – Open-source iOS builder agent I'm working on a coding agent for building iOS apps. It's built on openspec and xcodebuildmcp. It's free and open source. https://bit.ly/4tAQRiS April 28, 2026 at 02:14AM
Show HN: 49Agents – Infinite canvas IDE for AI agents https://bit.ly/4ufNPAJ
Show HN: 49Agents – Infinite canvas IDE for AI agents https://bit.ly/4cAathf April 28, 2026 at 01:36AM
Sunday, 26 April 2026
Show HN: The Unix Magic poster, annotated (updated) https://bit.ly/4tyhTHD
Show HN: The Unix Magic poster, annotated (updated) This is a site that maps the references on Gary Overacre's 1980s UNIX Magic poster to short write-ups with sources. I posted an earlier version about a year ago [1]. Since then I rewrote some of the annotations, added deep-linking to individual markers and a frame/sidebar view, gave the site a terminal-style redesign, and fixed historical inaccuracies (daemon etymology, nroff origin, B language vs. Multics, etc.). Contributions and comments welcome; each marker is a GitHub issue. site: https://bit.ly/4t3NxMh [1] https://bit.ly/4hUhSaK https://bit.ly/3EzK3xr April 27, 2026 at 02:32AM
Show HN: Logic Designer – Webapp https://bit.ly/4cOMnxY
Show HN: Logic Designer – Webapp I updated my digital logic webapp to 0.5.1. https://bit.ly/4tBIvI7 April 26, 2026 at 11:16PM
Show HN: AgentSwarms – free hands-on playground to learn agentic AI, no setup https://bit.ly/3OsfGi1
Show HN: AgentSwarms – free hands-on playground to learn agentic AI, no setup required! https://bit.ly/3P3Ugb1 April 26, 2026 at 09:34PM
Saturday, 25 April 2026
Show HN: LLM-wiki – One command Karpathy's wiki with QMD search for Claude/Codex https://bit.ly/4sSjWVV
Show HN: LLM-wiki – One command Karpathy's wiki with QMD search for Claude/Codex https://bit.ly/4cD4abi April 25, 2026 at 11:29PM
Show HN: Draw Together Online https://bit.ly/3Or1yWh
Show HN: Draw Together Online A simple page where you can draw with other people. https://bit.ly/4uawVn2 April 26, 2026 at 03:36AM
Show HN: DDoS detection in 0.9s, tested against a 48 Gbps attack live https://bit.ly/4naExDX
Show HN: DDoS detection in 0.9s, tested against a 48 Gbps attack live https://bit.ly/4mTztU3 April 26, 2026 at 01:27AM
Friday, 24 April 2026
Show HN: VT Code – Rust TUI coding agent with multi-provider support https://bit.ly/4u8nJiS
Show HN: VT Code – Rust TUI coding agent with multi-provider support Hi HN, I built VT Code, a semantic coding agent. It supports all the SOTA and open-source models: Anthropic, OpenAI, Gemini, Codex. Agent Skills, Model Context Protocol, and Agent Client Protocol (ACP) ready. Local inference for open-source models runs via LM Studio and Ollama (experimental). Semantic context understanding comes from ast-grep for structured code search and ripgrep for fast text search. I built VT Code in Rust on Ratatui. The architecture and agent loop are documented in the README and DeepWiki. Repo: https://bit.ly/4sTIE8i DeepWiki: https://bit.ly/4cxi14r Happy to answer questions! I believe coding harnesses should be open, and everyone should have a choice of their preferred way to work in this agentic engineering era. https://bit.ly/4sTIE8i April 25, 2026 at 04:17AM
Show HN: RoboAPI – A unified REST API for robots, like Stripe but for hardware https://bit.ly/3Qv1DbY
Show HN: RoboAPI – A unified REST API for robots, like Stripe but for hardware Every robot manufacturer ships a different SDK and a different protocol. A Boston Dynamics Spot speaks nothing like a Universal Robots arm. Every team building on top of robots rewrites the same integration layer from scratch. This is a massive tax on the industry. RoboAPI is a unified API layer that abstracts all of that into one clean developer experience. One SDK, one API key, any robot — simulated or real hardware. You can connect a simulated robot and read live telemetry in under 5 minutes:

pip install fastapi uvicorn roslibpy
uvicorn api.main:app --reload
curl -X POST localhost:8000/v1/robots/connect -d '{"robot_id":"bot-01","brand":"simulated"}'
curl localhost:8000/v1/robots/bot-01/sense

It also connects to real ROS2 robots via rosbridge — I tested it today controlling a turtlesim robot drawing circles through the API. The architecture is pluggable — each robot brand is a separate adapter implementing a common interface (like a payment gateway in Stripe). Adding a new brand means one file. Currently supports: simulated robots and any ROS2 robot. Boston Dynamics and Universal Robots adapters are next. Would love feedback from anyone working in robotics — especially on the API design and what's missing for real-world use. https://bit.ly/3QyRJWM April 25, 2026 at 12:16AM
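The pluggable-adapter idea can be sketched as one common interface that each brand implements. All names here (RobotAdapter, SimulatedAdapter, sense) are illustrative guesses at the shape, not RoboAPI's actual classes:

```python
from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """Common interface every brand adapter implements (hypothetical names)."""

    @abstractmethod
    def connect(self, robot_id: str) -> None: ...

    @abstractmethod
    def sense(self) -> dict:
        """Return live telemetry in a brand-agnostic shape."""

class SimulatedAdapter(RobotAdapter):
    def connect(self, robot_id: str) -> None:
        self.robot_id = robot_id

    def sense(self) -> dict:
        return {"robot_id": self.robot_id, "battery": 1.0, "pose": [0.0, 0.0, 0.0]}

# The API layer only ever talks to the interface, so "adding a brand means
# one file": register a new adapter class in this mapping.
ADAPTERS = {"simulated": SimulatedAdapter}

bot = ADAPTERS["simulated"]()
bot.connect("bot-01")
```

The "payment gateway" analogy holds because the REST layer dispatches on the brand field and never touches a vendor SDK directly.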
Show HN: Nimbus – Browser with Claude Code UX https://bit.ly/41XYTX5
Show HN: Nimbus – Browser with Claude Code UX Hi HN, I'm Anil. Nimbus is a desktop browser with an AI agent built into it. The UX is shamelessly inspired by Claude Code: a chat bar at the bottom, an agent log above it, and the webpage itself when it's needed. This is mainly a UX experiment for me, and also the reason it isn't a Chrome extension: once you have a chat bar that understands intent, the URL field is redundant. You shouldn't have two places to tell the browser what you want. I didn't want to bolt an agent onto an existing browser's chrome and end up with duplicated controls everywhere — I wanted full freedom to redesign the shell from scratch, decide what stays, what goes, and what a browser even looks like when the agent is the primary interface. Download for macOS: https://bit.ly/4sSJzpC Launch video: https://youtu.be/dj23-XIiB1o https://bit.ly/4sSJzpC April 24, 2026 at 09:01PM
Thursday, 23 April 2026
Show HN: SQL Protocol – learn SQL by running real queries, with 1v1 PvP https://bit.ly/4t3pEVi
Show HN: SQL Protocol – learn SQL by running real queries, with 1v1 PvP https://bit.ly/4tofRdc April 24, 2026 at 12:44AM
Show HN: I built a toy that plays grandma's stories when my daughter hugs it https://bit.ly/4d3RGLe
Show HN: I built a toy that plays grandma's stories when my daughter hugs it This was a project I built for my daughter's first birthday present. For context, I'm a surgical resident in the UK by background and am currently taking a year out of training to study a master's in computer science. My daughter just turned one. There are two things she really loves: the first is a particular soft toy that she just can't live without, and the other is a good story book. Her grandparents live hours away and I didn't want her to forget what they sound like between visits. I wanted her to hear them whenever she missed them. My parents brought my brother and me up with incredible stories and books from all sorts of cultures, many of the stories being passed down from their parents before them. I didn't want my daughter to miss out on that. Finally, I was sick of missing storytime with her when I had to leave for night shifts. I wanted her to hear my voice before she slept every night. For all these reasons, I decided to build Storyfriend. It's her favourite soft toy with a custom-made speaker module inside. I combined my surgical skills with the skills I was learning as a CS student. Along the way I dipped my toes into the world of 3D printing, CAD and electronics design. When she hugs the toy, it plays stories read by her grandparents. She can take the toy with her anywhere and hear the stories anytime she wants - it works offline and has internal storage. It meets my wife's strict no-screen rule (which is getting harder to stick to as the days go by). I've recorded some of the stories that we would read together, so that on nights when I'm working she still has me there to read her a bedtime story. The bit I'm most pleased with: grandparents don't need an app. They just call a phone number. The audio routes through my server and pushes to the toy over WiFi.
My own 86-year-old grandmother in a rural village in another country can do it by just making a regular call via her landline, as she has done for many years - no help needed, no apps required, no smartphones involved. Hardware is a BLE/WiFi module with a MAX98357 chip and a custom battery management system, all soldered together, placed in a 3D-printed enclosure and fitted into a compartment that I stitched into her cuddly toy. Firmware pulls new messages when connected to WiFi and stores them on an SD card. So far I've sold a few hand-made units to parents and grandparents who resonated with the project. Site: https://bit.ly/4w3BEsy Would love feedback on the technical approach, the product itself, or anything else. Happy to answer questions about the build. https://bit.ly/4u18OHd April 24, 2026 at 01:06AM
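The pull-and-store firmware behaviour described above boils down to diffing a server-side message list against what is already on the SD card and fetching only the difference. A minimal sketch of that diff, assuming a hypothetical manifest of filenames (the actual firmware protocol isn't described in the post):

```python
def new_messages(remote_manifest: list[str], local_files: set[str]) -> list[str]:
    """Return remote audio files not yet stored on the SD card, in manifest order."""
    return [name for name in remote_manifest if name not in local_files]

# On each WiFi connect: fetch the manifest, download the diff, then the toy
# can play everything offline from internal storage.
manifest = ["grandma_001.mp3", "grandma_002.mp3", "dad_story_003.mp3"]
on_sd_card = {"grandma_001.mp3"}
to_fetch = new_messages(manifest, on_sd_card)
```

Keeping the diff on the device side is what lets the toy work with no app on the grandparents' end: they only ever append to the server's list by phoning in.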
Wednesday, 22 April 2026
Show HN: Autobrowse – a self-improving harness for learning browser tasks https://bit.ly/4mKWqIU
Show HN: Autobrowse – a self-improving harness for learning browser tasks https://twitter.com/shreypandya/status/2047100550446280792 April 23, 2026 at 01:25AM
Show HN: Ghost Pepper Meet – local meeting transcription and diarization https://bit.ly/491sT8c
Show HN: Ghost Pepper Meet – local meeting transcription and diarization 100% local & private transcription engine for macOS. It captures meeting audio and performs speaker diarization. I was originally building it as its own app, but it can leverage the same local models as my original push-to-talk voice transcription product, so I combined them into one app. https://bit.ly/4e3Ou3w April 22, 2026 at 08:19PM
Tuesday, 21 April 2026
Show HN: FMQL – graph query and bulk-edit CLI for Markdown and YAML frontmatter https://bit.ly/4tuazgq
Show HN: FMQL – graph query and bulk-edit CLI for Markdown and YAML frontmatter https://bit.ly/4tsH4vr April 21, 2026 at 09:08PM
Show HN: Almanac MCP, turn Claude Code into a Deep Research agent https://bit.ly/4sU5ZqB
Show HN: Almanac MCP, turn Claude Code into a Deep Research agent I am Rohan, and I have grown really frustrated with CC's search and read tools. They use Haiku to summarise all the search results, so it is really slow and often ends up being very lossy. I built this MCP that you can install into your coding agents so they can actually access the web properly. Right now it can: - search the general web - search Reddit - read and scrape basically any webpage Install it: npx openalmanac setup The MCP is completely free to use. We have also built a central store where you can contribute things you learned while exploring. If you find something useful, you can contribute it to the encyclopedia we're building at Almanac using the same MCP. https://bit.ly/3OUjo3W April 21, 2026 at 11:12PM
Show HN: A fake small claims court for petty complaints https://bit.ly/4sWCVio
Show HN: A fake small claims court for petty complaints https://bit.ly/4sRqKmT April 21, 2026 at 05:04AM
Monday, 20 April 2026
Show HN: Mediator.ai – Using Nash bargaining and LLMs to systematize fairness https://bit.ly/3OG1lhI
Show HN: Mediator.ai – Using Nash bargaining and LLMs to systematize fairness Eight years ago, my then-fiancée and I decided to get a prenup, so we hired a local mediator. The meetings were useful, but I felt there was no systematic process to produce a final agreement. So I started to think about this problem, and after a bit of research, I discovered the Nash bargaining solution. Yet if John Nash had solved negotiation in the 1950s, why did it seem like nobody was using it today? The issue was that Nash's solution required that each party to the negotiation provide a "utility function", which could take a set of deal terms and produce a utility number. But even experts have trouble producing such functions for non-trivial negotiations. A few years passed and LLMs appeared, and about a year ago I realized that while LLMs aren’t good at directly producing utility estimates, they are good at doing comparisons, and this can be used to estimate utilities of draft agreements. This is the basis for Mediator.ai, which I soft-launched over the weekend. Be interviewed by an LLM to capture your preferences and then invite the other party or parties to do the same. These preferences are then used as the fitness function for a genetic algorithm to find an agreement all parties are likely to agree to. An article with more technical detail: https://bit.ly/4ttPUcg https://bit.ly/48NXCph April 20, 2026 at 04:07PM
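The Nash bargaining solution maximizes the product of each party's utility gain over their disagreement (walk-away) point. A minimal sketch of using that product as the genetic algorithm's fitness function; the utility numbers here are stand-ins for the LLM-comparison-derived estimates the post describes:

```python
from math import prod

def nash_fitness(utilities: list[float], disagreement: list[float]) -> float:
    """Nash product: prod_i (u_i - d_i); zero if any party is below their threat point."""
    gains = [u - d for u, d in zip(utilities, disagreement)]
    if any(g <= 0 for g in gains):
        return 0.0  # no party accepts less than walking away
    return prod(gains)

# Two candidate agreements for two parties with disagreement utilities (0.2, 0.3):
d = [0.2, 0.3]
a = nash_fitness([0.9, 0.5], d)   # balanced gains: 0.7 * 0.2
b = nash_fitness([1.0, 0.31], d)  # lopsided: 0.8 * 0.01
assert a > b  # the Nash product favors the balanced agreement
```

This is why the product (rather than a sum) is the interesting fitness choice: a lopsided deal that barely clears one party's threat point scores near zero, so the search is pushed toward agreements all parties plausibly accept.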
Show HN: Palmier – bridge your AI agents and your phone https://bit.ly/4d0ATb5
Show HN: Palmier – bridge your AI agents and your phone Hi HN — I built Palmier. Palmier bridges your AI agents and your phone. It does two things: 1. It lets you use your phone to directly control AI agents running on your computer, from anywhere. 2. It gives your AI agents access to your phone, wherever you are — including things like push notifications, SMS, calendar, contacts, sending email, creating calendar events, location, and more. A few details: * Supports 15+ agent CLIs * Supports Linux, Windows, and macOS * What runs on your computer and your phone is fully open source * Works out of the box — no need to set up GCP or API keys just to let agents use phone capabilities * Your phone can act as an agent remote: start tasks, check progress, review results, and respond to requests while away from your desk * Your phone can also act as an agent tool: agents can reach into phone capabilities directly when needed * Optional MCP server: if you want, Palmier exposes an MCP endpoint so your agent can access phone capabilities as native MCP tools. This is optional — you can also use Palmier directly from the phone app/PWA, with those capabilities already built in * Still in alpha stage, with bugs. Opinions and bug reports very welcome The basic idea is that AI agents become much more useful if they can both: * interact with the device you actually carry around all day * be controlled when you are away from your computer Palmier is my attempt at that bridge. It already works with agent CLIs like Claude Code, Gemini CLI, Codex CLI, Cursor CLI, OpenClaw, and others. You can run tasks on demand, on a schedule, or in response to events. Would especially love feedback on: * whether this feels genuinely useful * which phone capabilities are most valuable * which agent CLIs I should support next * what feels broken, awkward, or confusing Site: https://bit.ly/42omNLm Github: * https://bit.ly/48TuQn5 * https://bit.ly/3Qd8CGx Happy to answer questions. 
https://bit.ly/48TuQn5 April 21, 2026 at 03:31AM
Show HN: Mimi in the browser – hear the semantic/acoustic split https://bit.ly/4sJTH3O
Show HN: Mimi in the browser – hear the semantic/acoustic split https://bit.ly/4tvRric April 21, 2026 at 12:33AM
Sunday, 19 April 2026
Show HN: Brygga – A modern, fast, feature-rich IRC client for macOS https://bit.ly/4cTOrpF
Show HN: Brygga – A modern, fast, feature-rich IRC client for macOS Brygga is in early development. The core client works end-to-end (connect, join, send, receive, persist) but many features you'd expect from a mature IRC client are still missing. Repo: https://bit.ly/4mBr8UU April 20, 2026 at 12:11AM
Show HN: TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed https://bit.ly/48LEND9
Show HN: TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed I ported Microsoft's TRELLIS.2 (4B parameter image-to-3D model) to run on Apple Silicon via PyTorch MPS. The original requires CUDA with flash_attn, nvdiffrast, and custom sparse convolution kernels: none of which work on Mac. I replaced the CUDA-specific ops with pure-PyTorch alternatives: a gather-scatter sparse 3D convolution, SDPA attention for sparse transformers, and a Python-based mesh extraction replacing CUDA hashmap operations. Total changes are a few hundred lines across 9 files. Generates ~400K vertex meshes from single photos in about 3.5 minutes on M4 Pro (24GB). Not as fast as H100 (where it takes seconds), but it works offline with no cloud dependency. https://bit.ly/4cB0fvE https://bit.ly/4cB0fvE April 20, 2026 at 01:07AM
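The gather-scatter replacement for the custom sparse-convolution kernels works by iterating only over occupied voxels: for each one, gather neighbor features at the kernel offsets and accumulate the weighted sum. A pure-Python toy with scalar features, just to show the access pattern (the actual port operates on batched PyTorch tensors on MPS):

```python
def sparse_conv3d(voxels: dict[tuple, float], kernel: dict[tuple, float]) -> dict:
    """Gather-scatter sparse convolution over occupied voxels only.

    voxels: {(x, y, z): feature}, kernel: {(dx, dy, dz): weight}.
    Output is defined on the same occupied set (submanifold-style), so the
    sparsity pattern never densifies.
    """
    out = {}
    for (x, y, z) in voxels:                    # each occupied output site
        acc = 0.0
        for (dx, dy, dz), w in kernel.items():  # gather from offset neighbors
            acc += w * voxels.get((x + dx, y + dy, z + dz), 0.0)
        out[(x, y, z)] = acc
    return out

# "Identity plus half the right neighbor" kernel on two occupied voxels:
v = {(0, 0, 0): 1.0, (1, 0, 0): 2.0}
k = {(0, 0, 0): 1.0, (1, 0, 0): 0.5}
result = sparse_conv3d(v, k)  # both sites come out as 2.0
```

The point of the pattern is that cost scales with occupied voxels times kernel size, never with the full dense grid, which is what makes a 400K-vertex mesh tractable without CUDA's hashmap kernels.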
Show HN: How context engineering works, a runnable reference https://bit.ly/4sU6lxC
Show HN: How context engineering works, a runnable reference I've been presenting at local meetups about Context Engineering, RAG, Skills, etc. I even have a vBrownBag coming up on LinkedIn about this topic, so I figured I would make a basic example that uses Bedrock so I can use it in my talks or vBrownBags. Hopefully it's useful. https://bit.ly/3OSFP9H April 17, 2026 at 07:20PM
Saturday, 18 April 2026
Show HN: Coelanox – auditable inference runtime in Rust (BERT runs today) https://bit.ly/3OMabe0
Show HN: Coelanox – auditable inference runtime in Rust (BERT runs today) PyTorch and ONNX Runtime tell you what came out. They can't tell you what actually ran to get there — which ops executed, in what order, on what inputs. A model gets packaged into a sealed .cnox container. SHA-256 is verified before a single op executes. Inference walks a fixed plan over a minimal opset. Every run can emit a per-op audit log: op type, output tensor hash, output sample — cryptographically linked to the exact container and input that produced it. If something goes wrong in production, you have a trail. Scalar backend today — reference implementation and permanent fallback when hardware acceleration isn't available. Audit and verification are identical across all backends. SIMD next, GPU after that. The input below is synthetic (all-ones) — the pipeline is identical with real inputs. github.com/Coelanox/CLF Audit example:

{
  "schema": 2,
  "run": {
    "run_id": "59144ede-5a27-4dff-bc25-94abade5b215",
    "started_at_unix_ms": 1776535116721,
    "container_path": "/home/shark/cnox/models/output/bert_base_uncased.cnox",
    "container_sha256_hex": "184c291595536e3ef69b9a6a324ad5ee4d0cef21cc95188e4cfdedb7f1f82740",
    "backend": "scalar"
  },
  "input": {
    "len": 98304,
    "sha256_hex": "54ac99d2a36ac55b4619119ee26c36ec2868552933d27d519e0f9fd128b7319f",
    "sample_head": [1.0, 1.0, 1.0, 1.0]
  },
  "ops": [
    {"op_index": 0, "op_type": "Add", "out_len": 98304,
     "out_sample_head": [0.12242669, -4.970478, 2.8673656, 5.450008],
     "out_sha256_hex": "19f8aa0a618e5513aed4603a7aae2a333c3287368050e76d4aca0f83fb220e78"},
    {"op_index": 1, "op_type": "Add", "out_len": 98304,
     "out_sample_head": [0.9650015, 0.23414998, 1.539839, 0.30231553],
     "out_sha256_hex": "7ae2f025c8acf67b8232e694dd43caf3b479eb078366787e4fdc16d651450ad4"},
    {"op_index": 2, "op_type": "MatMul", "out_len": 98304,
     "out_sample_head": [1.0307425, 0.19207191, 1.5278282, 0.3000223],
     "out_sha256_hex": "44c28e64441987b8f0516d77f45ad892750b3e5b3916770d3baa5f2289e41bdd"},
    {"op_index": 3, "op_type": "Gelu", "out_len": 393216,
     "out_sample_head": [0.68828076, -0.0033473556, 1.591219, -0.16837223],
     "audit_elided": "hash_skipped: len 393216 > max 262144"}
  ]
}

https://bit.ly/4mEV1DY April 18, 2026 at 09:37PM
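The per-op trail in the example above can be reproduced in miniature: hash each op's exact output and record it alongside the op identity, so a replayed run can be checked op by op. A sketch of the idea only; Coelanox's actual tensor serialization and hash linkage will differ:

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def audit_op(op_index: int, op_type: str, output: list[float]) -> dict:
    """One audit record: op identity plus a hash over the full output values."""
    payload = json.dumps(output).encode()  # stand-in for a canonical tensor encoding
    return {
        "op_index": op_index,
        "op_type": op_type,
        "out_len": len(output),
        "out_sample_head": output[:4],
        "out_sha256_hex": sha256_hex(payload),
    }

# Identical outputs yield identical hashes, so two runs of the same container
# on the same input can be compared record by record.
rec = audit_op(0, "Add", [0.12, -4.97, 2.87, 5.45])
```

The interesting design question, as the elided Gelu record shows, is the cost cap: hashing every large intermediate is expensive, so the runtime skips hashes above a size threshold while still recording the sample head.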
Show HN: Sostactic – polynomial inequalities using sums-of-squares in Lean https://bit.ly/4vAzfFm
Show HN: Sostactic – polynomial inequalities using sums-of-squares in Lean Current support for nonlinear inequalities in Lean is quite limited. This package attempts to solve that. It contains a collection of Lean 4 tactics for proving polynomial inequalities via sum-of-squares (SOS) decompositions, powered by a Python backend. You can use it via Python or Lean. These tactics are significantly more powerful than `nlinarith` and `positivity` -- that is, they can prove inequalities those tactics cannot. In theory, they can be used to prove any of the following types of statements: - prove that a polynomial is nonnegative globally - prove that a polynomial is nonnegative over a semialgebraic set (i.e., one defined by a set of polynomial inequalities) - prove that a semialgebraic set is empty, i.e., that a system of polynomial inequalities is infeasible The underlying theory is based on the following observation: if a polynomial can be written as a sum of squares of other polynomials, then it is nonnegative everywhere. Theorems proving the existence of such decompositions were among the landmark achievements of real algebraic geometry in the 20th century; their connection to semidefinite programming in the 21st century made SOS a practical computational tool, and that search for a decomposition is what this software does in the background. https://bit.ly/4cSeiOP April 18, 2026 at 11:36PM
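The key observation admits a one-line worked example: completing the square is itself an SOS certificate of global nonnegativity.

```latex
% p(x) = x^2 - 4x + 5 is nonnegative everywhere because it is a sum of squares:
p(x) \;=\; x^2 - 4x + 5 \;=\; (x-2)^2 + 1^2 \;\ge\; 0
\quad \text{for all } x \in \mathbb{R}.
```

What Sostactic automates is finding such decompositions for far less obvious polynomials, by delegating the search to a semidefinite programming solver and then checking the resulting certificate in Lean.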
Friday, 17 April 2026
Show HN: Mind-OS – First free online AI dependency self‑assessment https://bit.ly/3Qh7L7A
Show HN: Mind-OS – First free online AI dependency self‑assessment https://bit.ly/4epeJkU April 17, 2026 at 10:40PM
Show HN: Ask your AI to start a business for you, resolved.sh https://bit.ly/4mAJc1z
Show HN: Ask your AI to start a business for you, resolved.sh Start with a FREE instant website for your AI on the open internet, then work with it to build a business that sells specialized datasets, files, premium reports, blogs, courses and more. https://bit.ly/4mx3h8Q April 17, 2026 at 04:31AM
Thursday, 16 April 2026
Show HN: Free API and widget to look up US representatives https://bit.ly/4ciVtEs
Show HN: Free API and widget to look up US representatives https://bit.ly/4mAHLQt April 17, 2026 at 01:45AM
Show HN: Spice simulation → oscilloscope → verification with Claude Code https://bit.ly/488OVFT
Show HN: Spice simulation → oscilloscope → verification with Claude Code I built MCP servers for my oscilloscope and SPICE simulator so Claude Code can close the loop between simulation and real hardware. https://bit.ly/4cuNvqx April 17, 2026 at 01:37AM
Wednesday, 15 April 2026
Show HN: I built a Wikipedia based AI deduction game https://bit.ly/4vtN4pb
Show HN: I built a Wikipedia based AI deduction game I haven't seen anything like this so I decided to build it in a weekend. How it works: you see a bunch of things pulled from Wikipedia displayed on cards. You ask yes-or-no questions to figure out which card is the secret article. The AI model has access to the image, the wiki text, and its own knowledge to answer your question. Happy to have my credits burned for the day, but I'll probably have to make this paid at some point, so enjoy. I found it's not easy to get cheap+fast+good responses, but the tech is getting there. Most of the prompts are running through Groq infra or hitting a cache keyed by a normalization of the prompt. https://bit.ly/4muibN6 April 16, 2026 at 01:13AM
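Caching on a normalization of the prompt means trivially different phrasings of the same question hit the same cache entry instead of a fresh model call. The post doesn't say which normalization is used, so the lowercase/punctuation/whitespace folding below is a guess at the general shape:

```python
import hashlib
import string

def cache_key(prompt: str) -> str:
    """Fold a question to a canonical form, then hash it for use as a cache key."""
    normalized = " ".join(
        prompt.lower().translate(str.maketrans("", "", string.punctuation)).split()
    )
    return hashlib.sha256(normalized.encode()).hexdigest()

# "Is it an animal?" and "  is it an ANIMAL " map to the same key:
assert cache_key("Is it an animal?") == cache_key("  is it an ANIMAL ")
```

For a yes/no game this is a big win: popular opening questions ("is it alive", "is it a person") repeat constantly across players, so the cache absorbs most of the traffic.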
Tuesday, 14 April 2026
Show HN: StockFit API – structured SEC EDGAR data with a free tier https://bit.ly/3O7Ljx7
Show HN: StockFit API – structured SEC EDGAR data with a free tier https://bit.ly/4ct3e9A April 15, 2026 at 02:53AM
Show HN: Keynot – Kill PowerPoint with HTML https://bit.ly/4cm4on7
Show HN: Keynot – Kill PowerPoint with HTML https://bit.ly/4tPn0Db April 15, 2026 at 03:05AM
Show HN: OpenRig – agent harness that runs Claude Code and Codex as one system https://bit.ly/4812UgQ
Show HN: OpenRig – agent harness that runs Claude Code and Codex as one system I've been running Claude Code and Codex together every day. At some point I figured out you can use tmux to let them talk to each other, so I started doing that. Once they could coordinate, I kept adding more agents. Before long I had a whole team working together. But any time I rebooted my machine, the whole thing was gone. Not just the tabs. The way they were wired up, what each one was doing, all of it. Nothing I'd found treats your agent setup as a topology, as something with a shape you can save and bring back. So I built OpenRig, a multi-agent harness. A harness wraps a model. A "rig" wraps your harnesses. You describe your team in a YAML file, boot it with one command, and get a live topology you can see, click into, save, and bring back by name. Claude Code and Codex run together in the same rig. tmux is still doing the talking underneath. I didn't try to add a fancier messaging layer on top. The project is still early. My own setup uses the config layer extensively (YAML, Markdown, JSON) for prototyping functionality that outpaces what's shipped in the repo and npm package. But the core primitives are there and the happy path in the README works. It's built to be driven by your agent, not by you typing commands by hand. README: https://bit.ly/4sy2c1O Demo: https://youtu.be/vndsXRBPGio https://bit.ly/4sy2c1O April 15, 2026 at 12:46AM
Monday, 13 April 2026
Show HN: Mcptube – Karpathy's LLM Wiki idea applied to YouTube videos https://bit.ly/4cbiR6A
Show HN: Mcptube – Karpathy's LLM Wiki idea applied to YouTube videos I watch a lot of Stanford/Berkeley lectures and YouTube content on AI agents, MCP, and security. Got tired of scrubbing through hour-long videos to find one explanation. Built v1 of mcptube a few months ago. It performs transcript search and implements Q&A as an MCP server. It got traction (34 stars, my first open-source PR, some notable stargazers like the CEO of Trail of Bits). But v1 re-searched raw chunks from scratch every query. So I rebuilt it. v2 (mcptube-vision) follows Karpathy's LLM Wiki pattern. At ingest time, it extracts transcripts, detects scene changes with ffmpeg, describes key frames via a vision model, and writes structured wiki pages. Knowledge compounds across videos rather than being re-discovered. FTS5 + a two-stage agent (narrow then reason) for retrieval. MCPTube works both as a CLI (BYOK) and an MCP server. I tested MCPTube with Claude Code, Claude Desktop, VS Code Copilot, Cursor, and others. Zero API keys needed server-side. Coming soon: I'm also building a SaaS platform that supports playlist ingestion, team wikis, etc. Early-access signup: https://bit.ly/4c9lC8r Happy to discuss architecture tradeoffs — FTS5 vs vectors, file-based wiki vs DB, scene-change vs fixed-interval sampling. Give it a try via `pip install mcptube`. Also, please star the repo if you enjoy my contribution ( https://bit.ly/4vthsjo ) https://bit.ly/4vthsjo April 13, 2026 at 05:34PM
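The FTS5 "narrow" stage of the two-stage retrieval can be sketched with the stdlib sqlite3 module. Table and column names here are illustrative, not mcptube's actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# One row per wiki page written at ingest time (hypothetical schema).
con.execute("CREATE VIRTUAL TABLE pages USING fts5(title, body)")
con.executemany(
    "INSERT INTO pages VALUES (?, ?)",
    [
        ("MCP security", "prompt injection risks in MCP tool servers"),
        ("Agent loops", "planning and reflection in AI agents"),
    ],
)
# Stage 1: narrow with full-text search, ranked by FTS5's built-in bm25().
# Stage 2 (the "reason" step) would hand these rows to the agent.
rows = con.execute(
    "SELECT title FROM pages WHERE pages MATCH ? ORDER BY bm25(pages)",
    ("injection",),
).fetchall()
```

The appeal over a vector store, and presumably the tradeoff worth discussing, is that this entire index is one file with zero services, at the cost of purely lexical matching in the narrowing stage.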
Show HN: Lint-AI by RooAGI, a Rust CLI for AI Doc Retrieval https://bit.ly/4tMnxpr
Show HN: Lint-AI by RooAGI, a Rust CLI for AI Doc Retrieval We’re RooAGI. We built Lint-AI, a Rust CLI for indexing and retrieving evidence from large AI-generated corpora. As AI systems create more task notes, traces, and reports, storing documents isn’t the only challenge. The real problem is finding the right evidence when the same idea appears in multiple places, often with different wording. Lint-AI is our current retrieval layer for that problem. What Lint-AI does currently: * Indexes large documentation corpora. * Extracts lightweight entities and important terms. * Supports hybrid retrieval using lexical, entity, term, and graph-aware scoring. * Returns chunk-level evidence with --llm-context for a downstream reviewer / LLM. * Exports doc, chunk, and entity graphs. Example: * ./lint-ai /path/to/docs --llm-context "where docs describe the same concept differently" --result-count 8 --simplified That command does not decide whether documents are in contradiction. It retrieves the most relevant chunks so that a reviewer layer can compare them. Repo: https://bit.ly/48N8l3d We’d appreciate feedback on: * Retrieval/ranking design for documentation corpora. * How to evaluate evidence retrieval quality for alignment workflows. * What kinds of entity/relationship modeling would actually be useful here? Visit: https://bit.ly/3UklysB https://bit.ly/48N8l3d April 13, 2026 at 08:11PM
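Hybrid retrieval of the kind described (lexical + entity + term + graph signals) typically reduces to a weighted combination per chunk. The weights and signal names below are illustrative, not Lint-AI's actual scoring formula:

```python
def hybrid_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-chunk retrieval signals into a single ranking score."""
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

# Hypothetical weights over the four signal families the post names.
WEIGHTS = {"lexical": 0.4, "entity": 0.3, "term": 0.2, "graph": 0.1}

chunks = {
    "chunk-a": {"lexical": 0.9, "entity": 0.1, "term": 0.5, "graph": 0.0},
    "chunk-b": {"lexical": 0.4, "entity": 0.8, "term": 0.6, "graph": 0.7},
}
ranked = sorted(chunks, key=lambda c: hybrid_score(chunks[c], WEIGHTS), reverse=True)
```

The interesting evaluation question is then how to tune those weights; a held-out set of (query, expected-evidence-chunk) pairs scored by hit@k would be one straightforward way to compare weightings.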
Sunday, 12 April 2026
Show HN: Bad Apple (Oscilloscope-Like) – one stroke per frame https://bit.ly/4sstEOA
Show HN: Bad Apple (Oscilloscope-Like) – one stroke per frame https://bit.ly/4dDKBSx April 13, 2026 at 06:01AM
Show HN: Local LLM on a Pi 4 controlling hardware via tool calling https://bit.ly/4cn6vHx
Show HN: Local LLM on a Pi 4 controlling hardware via tool calling https://bit.ly/3NYmxPZ April 13, 2026 at 12:14AM
Show HN: Stork – MCP server so Claude/Cursor can search 14k MCP servers and AI tools https://bit.ly/4tqefjn
Show HN: Stork – MCP server so Claude/Cursor can search 14k MCP servers and AI tools https://bit.ly/48KFXPd April 12, 2026 at 08:49PM
Show HN: Toy Python Lisp interpreters based on the 1960 McCarthy paper https://bit.ly/4dCFhPj
Show HN: Toy Python Lisp interpreters based on the 1960 McCarthy paper I wrote this set of Python files to try to help programmers understand the original LISP paper, assuming zero mathematical or Lisp knowledge. The original paper is a mind-blowing piece of computer science history for many reasons - I'd recommend anyone try to get their head around it. I found plenty of fantastic LISP implementations which stay close to the original paper, but they are all fully functional, practical implementations. The original paper builds from deeper fundamentals which you could, in principle, write code in, albeit very impractically. I implemented these earlier iterations so programmers can follow the paper step by step in a more familiar language than 50s mathematical notation. I am no expert in Lisp or mathematics, and intentionally went into this with no knowledge of Lisp beyond the original paper. I did not write it in the most elegant way, but in the simplest way for me to understand, so please don't take this code as a definitive statement on the language. However, it really helped me understand the original paper better and begin using Lisp with a better grasp of the spirit of the language. I'd welcome any thoughts from those who have more experience with Lisp or comp sci history. https://bit.ly/4dCFj9T April 12, 2026 at 11:01AM
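For a taste of what an interpreter in the paper's spirit looks like, here is a generic miniature eval (not the author's code) over the paper's primitives - quote, atom, eq, car, cdr, cons, cond, and lambda - with S-expressions as nested Python lists and NIL handling simplified:

```python
def lisp_eval(e, env):
    # e is an S-expression: a string atom or a nested Python list.
    if isinstance(e, str):                       # variable lookup
        return env[e]
    op = e[0]
    if op == "quote":
        return e[1]
    if op == "atom":                             # true only for symbol atoms here
        return "t" if isinstance(lisp_eval(e[1], env), str) else []
    if op == "eq":
        a, b = lisp_eval(e[1], env), lisp_eval(e[2], env)
        return "t" if a == b and isinstance(a, str) else []
    if op == "car":
        return lisp_eval(e[1], env)[0]
    if op == "cdr":
        return lisp_eval(e[1], env)[1:]
    if op == "cons":
        return [lisp_eval(e[1], env)] + lisp_eval(e[2], env)
    if op == "cond":                             # clauses are (test branch) pairs
        for test, branch in e[1:]:
            if lisp_eval(test, env) == "t":
                return lisp_eval(branch, env)
        return []
    if isinstance(op, list) and op[0] == "lambda":
        # ((lambda (params...) body) args...): bind args, evaluate body
        params, body = op[1], op[2]
        args = [lisp_eval(a, env) for a in e[1:]]
        return lisp_eval(body, {**env, **dict(zip(params, args))})
    # Named function: look it up in env and apply it to the evaluated args.
    return lisp_eval([env[op]] + [["quote", lisp_eval(a, env)] for a in e[1:]], env)

print(lisp_eval(["car", ["quote", ["a", "b", "c"]]], {}))  # -> a
```

This omits label (recursion) and the association-list environment of the paper, but the eval/apply shape is the same one the paper derives from first principles.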
Show HN: Bullseye2D – A Dart library for cross-platform 2D games https://bit.ly/4tHZp7t
Show HN: Bullseye2D – A Dart library for cross-platform 2D games I posted this here about a year ago, but I just pushed a 2.0 release, so I hope you don't mind a second look :) Bullseye2D is a 2D game library for Dart with a very simple API. The new version is now multi-platform: it compiles to the web via a WebGL2 renderer, or natively to Windows, macOS and Linux through an SDL3 backend (which itself supports Vulkan, DirectX, Metal, and OpenGL renderers). It doesn't depend on Flutter and has very few dependencies (except SDL3). It mostly provides a minimal foundation that you can build your own abstractions on top of. This was also my first time leaning more heavily on AI (Opus) for a large refactor. I tried to review and test everything as well as I could, but honestly, for the restructuring parts where I had the AI produce rather big chunks of code, I found reviewing and testing quite exhausting, and I still have a slightly queasy feeling about it. So this is also quite an experiment in how well I'm able to utilise AI :) https://bit.ly/4tBTHnn https://bit.ly/4ciUyCn April 12, 2026 at 09:39AM
Show HN: macpak (Homebrew Wrapper for macOS) https://bit.ly/4cfhLFG
Show HN: macpak (Homebrew Wrapper for macOS) https://bit.ly/47VUpUk April 12, 2026 at 08:30AM
Saturday, 11 April 2026
Show HN: Minimalist template for scientific and academic resumes https://bit.ly/422X4be
Show HN: Minimalist template for scientific and academic resumes https://bit.ly/4sxLSyr April 12, 2026 at 04:46AM
Friday, 10 April 2026
Show HN: HyperFlow – A self-improving agent framework built on LangGraph https://bit.ly/4vhTPdr
Show HN: HyperFlow – A self-improving agent framework built on LangGraph Hi HN, I am Umer. I recently built an experimental framework called HyperFlow to explore the idea of self-improving AI agents. Usually, when an agent fails a task, we developers step in to manually tweak the prompt or adjust the code logic. I wanted to see if an agent could automate its own improvement loop. Built on LangChain and LangGraph, HyperFlow uses two agents:
- A TaskAgent that solves the domain problem.
- A MetaAgent that acts as the improver.
The MetaAgent looks at the TaskAgent's evaluation logs, rewrites the underlying Python code, tools, and prompt files, and then tests the new version in an isolated sandbox (like Docker). Over several generations, it saves the versions that achieve the highest scores to an archive. It is highly experimental right now, but the architecture is heavily inspired by the recent HyperAgents paper (Meta Research, 2026). I would love to hear your feedback on the architecture and your thoughts on self-referential agents, and I'm happy to answer any questions! Documentation: https://bit.ly/4mll1Eh GitHub: https://bit.ly/3PY51vP April 11, 2026 at 05:01AM
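The generational loop described above can be sketched abstractly. The names and the keep-if-better policy below are illustrative assumptions, not HyperFlow's actual API; in the real system `evaluate` would run the TaskAgent in a sandbox and `improve` would be the MetaAgent rewriting code and prompts.

```python
def evolve(task_agent_src, evaluate, improve, generations=5):
    # Generational improvement loop (hypothetical sketch).
    # `evaluate` scores a version and returns logs; `improve` is the
    # MetaAgent step that rewrites the version based on those logs.
    archive = []                                 # (score, generation, version)
    current = task_agent_src
    for gen in range(generations):
        score, logs = evaluate(current)          # run in an isolated sandbox
        archive.append((score, gen, current))
        candidate = improve(current, logs)       # MetaAgent rewrites code/prompts
        if evaluate(candidate)[0] > score:       # adopt only improvements
            current = candidate
    archive.sort(reverse=True)                   # best-scoring version first
    return archive[0][2]

# Toy demonstration: the "agent" is just a number, improving nudges it up.
best = evolve(
    task_agent_src=0.1,
    evaluate=lambda a: (a, f"score={a}"),
    improve=lambda a, logs: a + 0.2,
    generations=4,
)
print(round(best, 1))  # -> 0.7
```

The interesting design question is the one the post raises: whether the MetaAgent's edits generalize, or just overfit to the evaluation logs it saw.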
Show HN: Sash – tiny macOS utility to reliably cycle through app windows https://bit.ly/4cicPjc
Show HN: Sash – tiny macOS utility to reliably cycle through app windows macOS's built-in cycle window shortcut (⌘` / ⌘@) has always been flaky for me. Probably not a Show HN, but if it annoyed me this much it might be annoying some others. Only tested on the latest macOS — would appreciate any reports from other versions. https://bit.ly/4eddVPU April 11, 2026 at 12:02AM
Show HN: Unlegacy – document everything, from COBOL to AI generated code https://bit.ly/47RGizj
Show HN: Unlegacy – document everything, from COBOL to AI generated code https://bit.ly/4vskSD6 April 10, 2026 at 05:55PM
Thursday, 9 April 2026
Show HN: SmolVM – open-source sandbox for coding and computer-use agents https://bit.ly/4tD1tNQ
Show HN: SmolVM – open-source sandbox for coding and computer-use agents SmolVM is an open-source local sandbox for AI agents on macOS and Linux. I started building it because agent workflows need more than isolated code execution. They need a reusable environment: write files in one step, come back later, snapshot state, pause/resume, and increasingly interact with browsers or full desktop environments. Right now SmolVM is a Python SDK and CLI focused on local developer experience. Current features include:
- local sandbox environments
- macOS and Linux support
- snapshotting
- pause/resume
- persistent environments across turns
Install:
```
curl -sSL https://bit.ly/4edpkzh | bash
smolvm
```
I’d love feedback from people building coding agents or computer-use agents. Interested in what feels missing, what feels clunky, and what you’d expect from a sandbox like this. https://bit.ly/4ckmAxC April 10, 2026 at 01:01AM
Show HN: Rust based eBook library for Python, with MIT license https://bit.ly/4mo24AT
Show HN: Rust based eBook library for Python, with MIT license https://bit.ly/4czpdg6 April 9, 2026 at 11:03PM
Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct https://bit.ly/4cuJeo9
Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct Fully open source, a hard fork of Cline. Full evals on the GitHub page compare 7 agents (Cline, Kilo, Ohmypi, Opencode, Pimono, Roo, Dirac) on 8 medium-complexity tasks; each task, each diff, and the correctness + cost info are on GitHub. Dirac is 64.8% cheaper than the average of the other 6. https://bit.ly/4t0sefg April 9, 2026 at 01:06PM
Show HN: Homebutler – I manage my homelab from chat. AI never gets raw shell https://bit.ly/4c9xtlK
Show HN: Homebutler – I manage my homelab from chat. AI never gets raw shell https://bit.ly/4c5Wvlz April 9, 2026 at 01:09PM
Show HN: CSS Studio. Design by hand, code by agent https://bit.ly/48qpGPl
Show HN: CSS Studio. Design by hand, code by agent Hi HN! I've just released CSS Studio, a design tool that lives on your site and runs in your browser, sending updates to your existing AI agent, which edits any codebase. You can actually play around with the latest version directly on the site. Technically, the way this works is that you view your site in dev mode and start editing it. In your agent, you run /studio, which then polls an MCP server (or uses Claude Channels). Changes are streamed as JSON via the MCP, along with some viewport and URL information, and the skill has some instructions on how best to implement them. It contains a lot of the tools you'd expect from a visual editing tool, like text editing, styles, and an animation timeline editor. https://bit.ly/4t4hwoe April 9, 2026 at 12:23PM
Show HN: Moon simulator game, ray-casting https://bit.ly/41UVw2W
Show HN: Moon simulator game, ray-casting Did this a few years ago. Seems apropos. Sources and more here: https://bit.ly/3Kb9MJJ https://bit.ly/421jFVz April 6, 2026 at 06:09PM
Wednesday, 8 April 2026
Show HN: A (marginally) useful x86-64 ELF executable in 301 bytes https://bit.ly/4t2iFww
Show HN: A (marginally) useful x86-64 ELF executable in 301 bytes https://bit.ly/4aziUph April 6, 2026 at 09:14PM
Show HN: LadderRank: Rank anything with ELO ratings https://bit.ly/4c0ocxC
Show HN: LadderRank: Rank anything with ELO ratings I built a pairwise ranking platform on Cloudflare Workers. You get two items, pick the better one, and ELO ratings sort out the rest. No more tier list arguments. Let the votes decide. I seeded it with a "Best Programming Language" ladder to settle the debate once and for all: https://bit.ly/3NVnRDb The stack: Hono + D1 + R2 on Cloudflare Workers, React frontend on Pages, Drizzle ORM. Anyone can create their own ladder and share it. Anonymous voting works too (at reduced weight). Curious to see what HN thinks is the best language, and whether the ELO rankings match your priors. https://bit.ly/4mjzuk0 April 9, 2026 at 01:47AM
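For reference, the standard Elo update behind this kind of pairwise voting looks like the following. The K-factor of 32 is a common default, and scaling K down for anonymous votes is one plausible way to implement "reduced weight" - both are assumptions, not LadderRank's actual parameters.

```python
def elo_update(winner, loser, k=32):
    # Standard Elo: the winner's expected score depends on the rating gap;
    # an upset win transfers more points than a favorite's win.
    expected_win = 1 / (1 + 10 ** ((loser - winner) / 400))
    delta = k * (1 - expected_win)
    return winner + delta, loser - delta

# Two items start equal; the winner of one matchup gains 16 points.
a, b = elo_update(1500, 1500)
print(round(a), round(b))  # -> 1516 1484
```

A reduced-weight anonymous vote could then simply call `elo_update(winner, loser, k=16)`, moving ratings half as far per vote.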
Show HN: Android SSH client with full Terminal, server monitoring and runbooks https://bit.ly/4e9xI2E
Show HN: Android SSH client with full Terminal, server monitoring and runbooks https://bit.ly/3O5Mc9q April 8, 2026 at 11:44AM
Show HN: We built a camera only robot vacuum for less than 300$ (Well almost) https://bit.ly/4cc3ZDP
Show HN: We built a camera only robot vacuum for less than 300$ (Well almost) https://bit.ly/4mhTjId April 6, 2026 at 06:08AM
Tuesday, 7 April 2026
Show HN: Kerf-CLI – SQLite-backed cost analytics for Claude Code https://bit.ly/4ctmHIi
Show HN: Kerf-CLI – SQLite-backed cost analytics for Claude Code https://bit.ly/4ve1RnG April 8, 2026 at 01:20AM
Show HN: Brutalist Concrete Laptop Stand (2024) https://bit.ly/3QpFpb6
Show HN: Brutalist Concrete Laptop Stand (2024) https://bit.ly/4snjzT4 April 7, 2026 at 12:07PM
Show HN: LLMs as Planners, Not Reasoners https://bit.ly/4meg0gx
Show HN: LLMs as Planners, Not Reasoners https://bit.ly/4mbYd9H April 7, 2026 at 09:08AM
Monday, 6 April 2026
Show HN: Physical constants from 2 integers – MIT, 1225 tests, falsifiable https://bit.ly/4v8ZQsR
Show HN: Physical constants from 2 integers – MIT, 1225 tests, falsifiable https://bit.ly/4vgsBDZ April 7, 2026 at 12:52AM
Sunday, 5 April 2026
Show HN: Gemma Gem – AI model embedded in a browser – no API keys, no cloud https://bit.ly/4bSrfYy
Show HN: Gemma Gem – AI model embedded in a browser – no API keys, no cloud Gemma Gem is a Chrome extension that loads Google's Gemma 4 (2B) through WebGPU in an offscreen document and gives it tools to interact with any webpage: read content, take screenshots, click elements, type text, scroll, and run JavaScript. You get a small chat overlay on every page. Ask it about the page and it (usually) figures out which tools to call. It has a thinking mode that shows chain-of-thought reasoning as it works. It's a 2B model in a browser. It works for simple page questions and running JavaScript, but multi-step tool chains are unreliable and it sometimes ignores its tools entirely. The agent loop has zero external dependencies and can be extracted as a standalone library if anyone wants to experiment with it. https://bit.ly/4m9Rw8a April 6, 2026 at 01:14AM
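The zero-dependency agent loop described above follows a standard shape: the model either answers or requests a tool, and tool results are fed back until it answers. A sketch of that pattern (in Python for illustration; the extension itself is browser JavaScript, and `read_page` is a stand-in tool, not Gemma Gem's API):

```python
def agent_loop(model, tools, user_msg, max_steps=5):
    # Minimal tool-calling loop: run the model, execute any requested
    # tool, append the result, repeat until a final answer or step limit.
    history = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = model(history)
        if reply.get("tool") is None:            # model produced a final answer
            return reply["content"]
        result = tools[reply["tool"]](**reply.get("args", {}))
        history.append({"role": "tool", "name": reply["tool"], "content": result})
    return "step limit reached"

# Toy model: first asks for the page title, then answers with it.
def toy_model(history):
    if history[-1]["role"] == "tool":
        return {"tool": None, "content": f"The title is {history[-1]['content']}"}
    return {"tool": "read_page", "args": {}}

print(agent_loop(toy_model, {"read_page": lambda: "'Hello'"}, "what is the title?"))
```

The `max_steps` cap is what keeps a small model that "sometimes ignores its tools" from looping forever.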
Show HN: Mdarena – Benchmark your Claude.md against your own PRs https://bit.ly/4sT6q5f
Show HN: Mdarena – Benchmark your Claude.md against your own PRs https://bit.ly/4bQ2Fri April 6, 2026 at 12:35AM
Saturday, 4 April 2026
Show HN: SeekLink – Local hybrid search and link discovery for Obsidian vaults https://bit.ly/4sNp2Uc
Show HN: SeekLink – Local hybrid search and link discovery for Obsidian vaults https://bit.ly/4doOsmm April 5, 2026 at 01:18AM
Show HN: Contrapunk – Real-time counterpoint harmony from guitar input, in Rust https://bit.ly/4e1xlHo
Show HN: Contrapunk – Real-time counterpoint harmony from guitar input, in Rust https://bit.ly/3PIfGuu April 5, 2026 at 01:40AM
Friday, 3 April 2026
Show HN: AI agent skills for affiliate marketing (Markdown, works with any LLM) https://bit.ly/4sktB7v
Show HN: AI agent skills for affiliate marketing (Markdown, works with any LLM) https://bit.ly/3OkSTnZ April 3, 2026 at 10:28PM
Show HN: Travel Hacking Toolkit – Points search and trip planning with AI https://bit.ly/4sRO5W7
Show HN: Travel Hacking Toolkit – Points search and trip planning with AI I use points and miles for most of my travel. Every booking comes down to the same decision: use points or pay cash? To answer that, you need award availability across multiple programs, cash prices, your current balances, transfer partner ratios, and the math to compare them. I got tired of doing it manually across a dozen tabs. This toolkit teaches Claude Code and OpenCode how to do it. 7 skills (markdown files with API docs and curl examples) and 6 MCP servers (real-time tools the AI calls directly). It searches award flights across 25+ mileage programs (Seats.aero), compares cash prices (Google Flights, Skiplagged, Kiwi.com, Duffel), pulls your loyalty balances (AwardWallet), searches hotels (Trivago, LiteAPI, Airbnb, Booking.com), finds ferry routes across 33 countries, and looks up weird hidden gems near your destination (Atlas Obscura). Reference data is included: transfer partner ratios for Chase UR, Amex MR, Bilt, Capital One, and Citi TY. Point valuations sourced from TPG, Upgraded Points, OMAAT, and View From The Wing. Alliance membership, sweet spot redemptions, booking windows, hotel chain brand lookups. 5 of the 6 MCP servers need zero API keys. Clone, run setup.sh, start searching. Skills are, as usual, plain markdown. They work in OpenCode and Claude Code automatically (I added a tiny setup script), and they'll work in anything else that supports skills. PRs welcome! Help me expand the toolkit! :) https://bit.ly/47ObeAl https://bit.ly/47ObeAl April 4, 2026 at 03:26AM
Show HN: DotReader – connects ideas across your books automatically https://bit.ly/4bRRFK6
Show HN: DotReader – connects ideas across your books automatically https://bit.ly/3PR1TBN April 4, 2026 at 01:46AM
Show HN: Mtproto.zig – High-performance Telegram proxy with DPI evasion https://bit.ly/4dZeFbh
Show HN: Mtproto.zig – High-performance Telegram proxy with DPI evasion Hey everyone. I built an MTProto proxy for Telegram aimed at bypassing active DPI censorship like the Russian TSPU. I chose Zig because it's perfect for writing fast network daemons and makes it incredibly easy to port low-level C bypass techniques like TCP desync and packet fragmentation. Would love to get some feedback or contributors! https://bit.ly/4e3gDYd April 3, 2026 at 10:42PM
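The packet fragmentation technique mentioned above can be illustrated in a few lines. This is a generic sketch of the idea, not mtproto.zig's implementation, which also does lower-level TCP desync tricks in Zig:

```python
def fragmented_send(sock, payload, first_len=2):
    # Split one logical payload across two separate writes. A DPI box
    # that only pattern-matches within a single packet can miss a
    # protocol signature that straddles the fragment boundary.
    sock.sendall(payload[:first_len])
    sock.sendall(payload[first_len:])

class Sink:
    # Stand-in for a socket, recording each segment separately.
    def __init__(self):
        self.segments = []
    def sendall(self, data):
        self.segments.append(data)

s = Sink()
fragmented_send(s, b"\x16\x03\x01HELLO", first_len=2)
print(s.segments)  # -> [b'\x16\x03', b'\x01HELLO']
```

With a real socket you would also set TCP_NODELAY so the kernel does not coalesce the two writes back into one segment, which would defeat the point.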
Thursday, 2 April 2026
Show HN: Minimal Brain Teaser Web Game (Handcrafted, No AI) https://bit.ly/4m5dvgp
Show HN: Minimal Brain Teaser Web Game (Handcrafted, No AI) Built and open-sourced in the era before AI. I’m sure you know where to find the code. https://bit.ly/47GWIul April 3, 2026 at 05:00AM
Show HN: SkiFlee (an HTML5 game) https://bit.ly/47AOdkr
Show HN: SkiFlee (an HTML5 game) This is a silly little multiplayer game I made for a gamejam that involves skiing and not crashing. Some of you who are nostalgic for the 90s might like it :) https://bit.ly/47CDSEB April 3, 2026 at 12:30AM
Show HN: Made a little Artemis II tracker https://bit.ly/4cndWiY
Show HN: Made a little Artemis II tracker Made a little Artemis II tracker for anyone else who is unnecessarily invested in this mission: https://bit.ly/4drg4r8 For those of us who apparently need a dedicated place to monitor this mission instead of behaving like well-adjusted people. https://bit.ly/4drg4r8 April 3, 2026 at 12:16AM
Wednesday, 1 April 2026
Show HN: Linux Kernel Documentation Index-Every Page in the Linux Kernel's Docs https://bit.ly/48mglIa
Show HN: Linux Kernel Documentation Index-Every Page in the Linux Kernel's Docs https://bit.ly/48pUFLl April 2, 2026 at 03:39AM
Show HN: Semantic atlas of 188 constitutions in 3D (30k articles, embeddings) https://bit.ly/4sQE2Ro
Show HN: Semantic atlas of 188 constitutions in 3D (30k articles, embeddings) I built this after noticing that existing tools for comparing constitutional law either have steep learning curves or only support keyword search. By combining Gemini embeddings with UMAP projection, you can navigate 30,828 constitutional articles from 188 countries in 3D and find conceptually related provisions even when the wording differs. Feedback welcome, especially from legal researchers or comparative law folks. Source and pipeline: github.com/joaoli13/constitutional-map-ai https://bit.ly/41cQK0z April 2, 2026 at 03:40AM
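The "conceptually related even when the wording differs" part reduces to nearest-neighbour search over embeddings. A minimal sketch with toy 2-D vectors standing in for Gemini embeddings (the real site additionally projects the whole corpus to 3D with UMAP for navigation):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def nearest_articles(query_vec, article_vecs, k=3):
    # Rank all articles by similarity to the query article's embedding.
    order = sorted(range(len(article_vecs)),
                   key=lambda i: cosine(query_vec, article_vecs[i]),
                   reverse=True)
    return order[:k]

# Toy 4-article corpus: articles 0 and 2 express "the same concept"
# in different words, so their vectors point the same way.
vecs = [[1.0, 0.1], [0.0, 1.0], [0.9, 0.2], [-1.0, 0.3]]
print(nearest_articles(vecs[0], vecs, k=2))  # -> [0, 2]
```

Keyword search would miss the 0-2 pairing if the two provisions share no vocabulary; the embedding geometry is what recovers it.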
Show HN: 65k AI voters predict UK local elections with 75% accuracy https://bit.ly/4bN1QQ7
Show HN: 65k AI voters predict UK local elections with 75% accuracy https://bit.ly/3NRITT9 April 2, 2026 at 12:37AM
Show HN: CLI to order groceries via reverse-engineered REWE API (Haskell) https://bit.ly/4m08tlg
Show HN: CLI to order groceries via reverse-engineered REWE API (Haskell) I just had the best time learning about the REWE (German supermarket chain) API, how they use mTLS, and what the workflows are. Also, `mitmproxy2swagger`[1] is a great tool for creating an OpenAPI spec automatically. And then, 2026 feels like the perfect time to be writing Haskell. The code is handwritten, but whenever I got stuck with the build system or just wasn't getting the types right, I could fall back on asking AI to unblock me. It was never that smooth before. Finally, the best side projects are the ones you actually use, and this one will be used for all my future grocery shopping. [1] https://bit.ly/3FHG1j9 https://bit.ly/4didRhz March 30, 2026 at 07:45AM
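The mTLS part can be illustrated with Python's standard library (the project itself is Haskell). The host, path, and certificate filenames below are placeholders; extracting the client certificate the app presents is the actual reverse-engineering work.

```python
import ssl
import http.client

def mtls_get(host, path, client_cert, client_key):
    # Build a TLS context that presents a client certificate - the
    # mTLS mechanism an API like REWE's uses to authenticate the app,
    # on top of the usual server certificate verification.
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    conn = http.client.HTTPSConnection(host, context=ctx)
    conn.request("GET", path)
    resp = conn.getresponse()
    return resp.status, resp.read()
```

Without the client certificate loaded into the context, the server simply rejects the handshake, which is why plain curl against such an API gets you nowhere.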