Friday, 24 April 2026

Show HN: VT Code – Rust TUI coding agent with multi-provider support https://bit.ly/4u8nJiS

Show HN: VT Code – Rust TUI coding agent with multi-provider support Hi HN, I built VT Code, a semantic coding agent. It supports all SOTA and open-source models: Anthropic, OpenAI, Gemini, Codex. Agent Skills, Model Context Protocol, and Agent Client Protocol (ACP) ready. Local inference via LM Studio and Ollama (experimental). Semantic context understanding is powered by ast-grep for structured code search and ripgrep for fast text search. I built VT Code in Rust on Ratatui. Architecture and agent loop are documented in the README and DeepWiki. Repo: https://bit.ly/4sTIE8i DeepWiki: https://bit.ly/4cxi14r Happy to answer questions! I believe coding harnesses should be open, and everyone should have a choice of their preferred way to work in this agentic engineering era. https://bit.ly/4sTIE8i April 25, 2026 at 04:17AM

Show HN: RoboAPI – A unified REST API for robots, like Stripe but for hardware https://bit.ly/3Qv1DbY

Show HN: RoboAPI – A unified REST API for robots, like Stripe but for hardware Every robot manufacturer ships a different SDK and a different protocol. A Boston Dynamics Spot speaks nothing like a Universal Robots arm. Every team building on top of robots rewrites the same integration layer from scratch. This is a massive tax on the industry. RoboAPI is a unified API layer that abstracts all of that into one clean developer experience. One SDK, one API key, any robot — simulated or real hardware. You can connect a simulated robot and read live telemetry in under 5 minutes:

pip install fastapi uvicorn roslibpy
uvicorn api.main:app --reload
curl -X POST localhost:8000/v1/robots/connect -d '{"robot_id":"bot-01","brand":"simulated"}'
curl localhost:8000/v1/robots/bot-01/sense

It also connects to real ROS2 robots via rosbridge — I tested it today controlling a turtlesim robot drawing circles through the API. The architecture is pluggable — each robot brand is a separate adapter implementing a common interface (like a payment gateway in Stripe). Adding a new brand means one file. Currently supports: simulated robots and any ROS2 robot. Boston Dynamics and Universal Robots adapters are next. Would love feedback from anyone working in robotics — especially on the API design and what's missing for real-world use. https://bit.ly/3QyRJWM April 25, 2026 at 12:16AM
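The pluggable-adapter architecture described above can be sketched in a few lines. This is a hedged illustration of the "payment gateway" pattern, assuming a hypothetical common interface; the class and method names are my own, not RoboAPI's actual API.

```python
from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """Hypothetical common interface every brand adapter implements."""

    @abstractmethod
    def connect(self, robot_id: str) -> None: ...

    @abstractmethod
    def sense(self) -> dict: ...

class SimulatedAdapter(RobotAdapter):
    """One adapter per brand; adding a brand means adding one class/file."""

    def connect(self, robot_id: str) -> None:
        self.robot_id = robot_id

    def sense(self) -> dict:
        # A real adapter would read live telemetry; the simulated one fakes it.
        return {"robot_id": self.robot_id, "battery": 1.0, "pose": [0.0, 0.0, 0.0]}

# Registry lookup keeps the REST layer brand-agnostic.
ADAPTERS = {"simulated": SimulatedAdapter}

def get_adapter(brand: str) -> RobotAdapter:
    return ADAPTERS[brand]()

adapter = get_adapter("simulated")
adapter.connect("bot-01")
print(adapter.sense()["robot_id"])  # bot-01
```

The REST endpoints then only ever talk to `RobotAdapter`, so a Spot or UR adapter slots in without touching the API layer.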

Show HN: Nimbus – Browser with Claude Code UX https://bit.ly/41XYTX5

Show HN: Nimbus – Browser with Claude Code UX Hi HN, I'm Anil. Nimbus is a desktop browser with an AI agent built into it. The UX is shamelessly inspired by Claude Code: a chat bar at the bottom, an agent log above it, and the webpage itself when it's needed. This is mainly a UX experiment for me. And also the reason it isn't a Chrome extension: once you have a chat bar that understands intent, the URL field is redundant. You shouldn't have two places to tell the browser what you want. I didn't want to bolt an agent onto an existing browser's chrome and end up with duplicated controls everywhere — I wanted full freedom to redesign the shell from scratch, decide what stays, what goes, and what a browser even looks like when the agent is the primary interface. Download for macOS: https://bit.ly/4sSJzpC Launch video: https://youtu.be/dj23-XIiB1o https://bit.ly/4sSJzpC April 24, 2026 at 09:01PM

Thursday, 23 April 2026

Show HN: SQL Protocol – learn SQL by running real queries, with 1v1 PvP https://bit.ly/4t3pEVi

Show HN: SQL Protocol – learn SQL by running real queries, with 1v1 PvP https://bit.ly/4tofRdc April 24, 2026 at 12:44AM

Show HN: I built a toy that plays grandma's stories when my daughter hugs it https://bit.ly/4d3RGLe

Show HN: I built a toy that plays grandma's stories when my daughter hugs it This was a project I built for my daughter's first birthday present. For context, I'm a surgical resident in the UK by background and am currently taking a year out of training to study a master's in computer science. My daughter just turned one. There are two things she really loves: the first is a particular soft toy that she just can't live without, and the other is a good story book. Her grandparents live hours away and I didn't want her to forget what they sound like between visits. I wanted her to hear them whenever she missed them. My parents brought my brother and me up with incredible stories and books from all sorts of cultures, many of the stories being passed down from their parents before them. I didn't want my daughter to miss out on that. Finally, I was sick of missing storytime with her when I had to leave for night shifts. I wanted her to hear my voice before she slept every night. For all these reasons, I decided to build Storyfriend. It's her favourite soft toy with a custom-made speaker module inside. I combined my surgical skills with the skills I was learning as a CS student. Along the way I dipped my toes into the world of 3D printing, CAD and electronics design. When she hugs the toy, it plays stories read by her grandparents. She can take the toy with her anywhere and hear the stories anytime she wants - it works offline and has internal storage. It meets my wife's strict no-screen rule (which is getting harder to stick to as the days go by). I've recorded some of the stories that we would read together, so that on nights when I'm working she still has me there to read her a bedtime story. The bit I'm most pleased with: grandparents don't need an app. They just call a phone number. The audio routes through my server and pushes to the toy over WiFi.
My own 86-year-old grandmother in a rural village in another country can do it just by making a regular call via her landline, as she has done for many years - no help needed, no apps required, no smartphones involved. The hardware is a BLE/WiFi module with a MAX98357 chip and a custom battery management system, all soldered together, placed in a 3D-printed enclosure and tucked into a compartment that I stitched into her cuddly toy. Firmware pulls new messages when connected to WiFi and stores them on an SD card. So far I've sold a few hand-made units to parents and grandparents who resonated with the project. Site: https://bit.ly/4w3BEsy Would love feedback on the technical approach, the product itself, or anything else. Happy to answer questions about the build. https://bit.ly/4u18OHd April 24, 2026 at 01:06AM
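The pull-based sync described above (firmware fetches new messages over WiFi and stores them on the SD card) reduces to a simple manifest diff. A minimal sketch, assuming a hypothetical manifest of story IDs; the function and file names are illustrative, not Storyfriend's actual protocol:

```python
# Sketch of the pull-on-WiFi sync step: compare a server manifest of
# recordings against what is already on local storage (the SD card)
# and return only the recordings that still need downloading.
# Manifest format and names are illustrative assumptions.

def missing_stories(server_manifest: list[str], local_files: set[str]) -> list[str]:
    """Return story IDs present on the server but not yet stored locally."""
    return [story_id for story_id in server_manifest if story_id not in local_files]

# On each WiFi connection the firmware would fetch the manifest,
# then download and save only the missing items:
manifest = ["grandma_2026-04-20.mp3", "dad_2026-04-22.mp3"]
on_sd_card = {"grandma_2026-04-20.mp3"}
to_fetch = missing_stories(manifest, on_sd_card)
print(to_fetch)  # ['dad_2026-04-22.mp3']
```

Keeping the diff logic this small matters on a microcontroller: the device only ever downloads deltas, so playback keeps working offline from the SD card between syncs.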

Wednesday, 22 April 2026

Show HN: Autobrowse – a self-improving harness for learning browser tasks https://bit.ly/4mKWqIU

Show HN: Autobrowse – a self-improving harness for learning browser tasks https://twitter.com/shreypandya/status/2047100550446280792 April 23, 2026 at 01:25AM

Show HN: Ghost Pepper Meet local meeting transcription and diarization https://bit.ly/491sT8c

Show HN: Ghost Pepper Meet local meeting transcription and diarization 100% local & private transcription engine for macOS. It captures meeting audio and does speaker diarization. I was originally building it as its own app, but it can leverage the same local models as my original push-to-talk voice transcription product, so I combined them into one app. https://bit.ly/4e3Ou3w April 22, 2026 at 08:19PM

Tuesday, 21 April 2026

Show HN: FMQL – graph query and bulk-edit CLI for Markdown and YAML frontmatter https://bit.ly/4tuazgq

Show HN: FMQL – graph query and bulk-edit CLI for Markdown and YAML frontmatter https://bit.ly/4tsH4vr April 21, 2026 at 09:08PM

Show HN: Almanac MCP, turn Claude Code into a Deep Research agent https://bit.ly/4sU5ZqB

Show HN: Almanac MCP, turn Claude Code into a Deep Research agent I am Rohan, and I have grown really frustrated with CC's search and read tools. They use Haiku to summarise all the search results, so it is really slow and often ends up being very lossy. I built this MCP that you can install into your coding agents so they can actually access the web properly. Right now it can:

- search the general web
- search Reddit
- read and scrape basically any webpage

Install it: npx openalmanac setup

The MCP is completely free to use. We have also built a central store where you can contribute things you learned while exploring. If you find something useful, you can contribute it to the encyclopedia we're building at Almanac using the same MCP. https://bit.ly/3OUjo3W April 21, 2026 at 11:12PM

Show HN: A fake small claims court for petty complaints https://bit.ly/4sWCVio

Show HN: A fake small claims court for petty complaints https://bit.ly/4sRqKmT April 21, 2026 at 05:04AM

Monday, 20 April 2026

Show HN: Mediator.ai – Using Nash bargaining and LLMs to systematize fairness https://bit.ly/3OG1lhI

Show HN: Mediator.ai – Using Nash bargaining and LLMs to systematize fairness Eight years ago, my then-fiancée and I decided to get a prenup, so we hired a local mediator. The meetings were useful, but I felt there was no systematic process to produce a final agreement. So I started to think about this problem, and after a bit of research, I discovered the Nash bargaining solution. Yet if John Nash had solved negotiation in the 1950s, why did it seem like nobody was using it today? The issue was that Nash's solution required that each party to the negotiation provide a "utility function", which could take a set of deal terms and produce a utility number. But even experts have trouble producing such functions for non-trivial negotiations. A few years passed and LLMs appeared, and about a year ago I realized that while LLMs aren't good at directly producing utility estimates, they are good at doing comparisons, and this can be used to estimate utilities of draft agreements. This is the basis for Mediator.ai, which I soft-launched over the weekend. You are interviewed by an LLM to capture your preferences, and then you invite the other party or parties to do the same. These preferences are then used as the fitness function for a genetic algorithm that searches for an agreement all parties are likely to accept. An article with more technical detail: https://bit.ly/4ttPUcg https://bit.ly/48NXCph April 20, 2026 at 04:07PM
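The core idea above, turning pairwise comparisons into utility estimates, can be sketched in miniature. This toy version just tallies win rates over all pairs (a real system might fit a Bradley-Terry model instead, and would ask an LLM rather than a hard-coded preference); none of the names here are Mediator.ai's actual implementation.

```python
from itertools import combinations

# Toy sketch: estimate a utility for each draft agreement from pairwise
# comparisons alone, by counting how often a draft wins. In the real
# product the comparator would be an LLM answering "which draft does
# this party prefer?"; here it is a hard-coded toy preference.

def utilities_from_comparisons(drafts, prefer):
    """prefer(a, b) -> the preferred draft; returns win rate per draft."""
    wins = {d: 0 for d in drafts}
    for a, b in combinations(drafts, 2):
        wins[prefer(a, b)] += 1
    n = len(drafts) - 1  # comparisons each draft takes part in
    return {d: wins[d] / n for d in drafts}

drafts = ["split 50/50", "split 70/30", "split 60/40"]
rank = {"split 50/50": 2, "split 70/30": 0, "split 60/40": 1}  # toy party
prefer = lambda a, b: a if rank[a] > rank[b] else b
util = utilities_from_comparisons(drafts, prefer)
print(util["split 50/50"])  # 1.0
```

These per-party utilities are exactly the kind of numbers that can then serve as the fitness function for the genetic search over candidate agreements.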

Show HN: Palmier – bridge your AI agents and your phone https://bit.ly/4d0ATb5

Show HN: Palmier – bridge your AI agents and your phone Hi HN — I built Palmier. Palmier bridges your AI agents and your phone. It does two things: 1. It lets you use your phone to directly control AI agents running on your computer, from anywhere. 2. It gives your AI agents access to your phone, wherever you are — including things like push notifications, SMS, calendar, contacts, sending email, creating calendar events, location, and more. A few details:

* Supports 15+ agent CLIs
* Supports Linux, Windows, and macOS
* What runs on your computer and your phone is fully open source
* Works out of the box — no need to set up GCP or API keys just to let agents use phone capabilities
* Your phone can act as an agent remote: start tasks, check progress, review results, and respond to requests while away from your desk
* Your phone can also act as an agent tool: agents can reach into phone capabilities directly when needed
* Optional MCP server: if you want, Palmier exposes an MCP endpoint so your agent can access phone capabilities as native MCP tools. This is optional — you can also use Palmier directly from the phone app/PWA, with those capabilities already built in
* Still in alpha stage, with bugs. Opinions and bug reports very welcome

The basic idea is that AI agents become much more useful if they can both:

* interact with the device you actually carry around all day
* be controlled when you are away from your computer

Palmier is my attempt at that bridge. It already works with agent CLIs like Claude Code, Gemini CLI, Codex CLI, Cursor CLI, OpenClaw, and others. You can run tasks on demand, on a schedule, or in response to events. Would especially love feedback on:

* whether this feels genuinely useful
* which phone capabilities are most valuable
* which agent CLIs I should support next
* what feels broken, awkward, or confusing

Site: https://bit.ly/42omNLm Github:
* https://bit.ly/48TuQn5
* https://bit.ly/3Qd8CGx
Happy to answer questions. https://bit.ly/48TuQn5 April 21, 2026 at 03:31AM

Show HN: Mimi in the browser – hear the semantic/acoustic split https://bit.ly/4sJTH3O

Show HN: Mimi in the browser – hear the semantic/acoustic split https://bit.ly/4tvRric April 21, 2026 at 12:33AM

Sunday, 19 April 2026

Show HN: Brygga – A modern, fast, feature-rich IRC client for macOS https://bit.ly/4cTOrpF

Show HN: Brygga – A modern, fast, feature-rich IRC client for macOS Brygga is in early development. The core client works end-to-end (connect, join, send, receive, persist) but many features you'd expect from a mature IRC client are still missing. Repo: https://bit.ly/4mBr8UU April 20, 2026 at 12:11AM

Show HN: TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed https://bit.ly/48LEND9

Show HN: TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed I ported Microsoft's TRELLIS.2 (4B parameter image-to-3D model) to run on Apple Silicon via PyTorch MPS. The original requires CUDA with flash_attn, nvdiffrast, and custom sparse convolution kernels: none of which work on Mac. I replaced the CUDA-specific ops with pure-PyTorch alternatives: a gather-scatter sparse 3D convolution, SDPA attention for sparse transformers, and a Python-based mesh extraction replacing CUDA hashmap operations. Total changes are a few hundred lines across 9 files. Generates ~400K vertex meshes from single photos in about 3.5 minutes on M4 Pro (24GB). Not as fast as H100 (where it takes seconds), but it works offline with no cloud dependency. https://bit.ly/4cB0fvE https://bit.ly/4cB0fvE April 20, 2026 at 01:07AM
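The SDPA swap mentioned above works because PyTorch's `F.scaled_dot_product_attention` computes the same math as flash_attn; only the kernels differ, so it runs on MPS. As a minimal illustration of what that op computes (plain NumPy, not the actual port's code):

```python
import numpy as np

# What scaled dot-product attention computes, written out in NumPy.
# flash_attn and torch SDPA both implement exactly this math; swapping
# one for the other changes the kernel, not the result.
def sdpa(q, k, v):
    """q, k, v: (seq_len, head_dim) arrays; returns (seq_len, head_dim)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])        # scaled similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ v                             # weighted sum of values

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = sdpa(q, k, v)
print(out.shape)  # (4, 8)
```

The gather-scatter sparse convolution replacement follows the same spirit: express the CUDA-only op in terms of primitives the MPS backend already supports, trading speed for portability.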

Show HN: How context engineering works, a runnable reference https://bit.ly/4sU6lxC

Show HN: How context engineering works, a runnable reference I've been presenting at local meetups about context engineering, RAG, Skills, etc. I even have a vBrownBag coming up on LinkedIn about this topic, so I figured I would make a basic example that uses Bedrock so I can use it in my talks or vBrownBags. Hopefully it's useful. https://bit.ly/3OSFP9H April 17, 2026 at 07:20PM

Saturday, 18 April 2026

Show HN: Coelanox – auditable inference runtime in Rust (BERT runs today) https://bit.ly/3OMabe0

Show HN: Coelanox – auditable inference runtime in Rust (BERT runs today) PyTorch and ONNX Runtime tell you what came out. They can't tell you what actually ran to get there — which ops executed, in what order, on what inputs. A model gets packaged into a sealed .cnox container. SHA-256 is verified before a single op executes. Inference walks a fixed plan over a minimal opset. Every run can emit a per-op audit log: op type, output tensor hash, output sample — cryptographically linked to the exact container and input that produced it. If something goes wrong in production, you have a trail. Scalar backend today — reference implementation and permanent fallback when hardware acceleration isn't available. Audit and verification are identical across all backends. SIMD next, GPU after that. Input below is synthetic (all-ones) — pipeline is identical with real inputs. github.com/Coelanox/CLF Audit example:

{
  "schema": 2,
  "run": {
    "run_id": "59144ede-5a27-4dff-bc25-94abade5b215",
    "started_at_unix_ms": 1776535116721,
    "container_path": "/home/shark/cnox/models/output/bert_base_uncased.cnox",
    "container_sha256_hex": "184c291595536e3ef69b9a6a324ad5ee4d0cef21cc95188e4cfdedb7f1f82740",
    "backend": "scalar"
  },
  "input": {
    "len": 98304,
    "sha256_hex": "54ac99d2a36ac55b4619119ee26c36ec2868552933d27d519e0f9fd128b7319f",
    "sample_head": [1.0, 1.0, 1.0, 1.0]
  },
  "ops": [
    { "op_index": 0, "op_type": "Add", "out_len": 98304, "out_sample_head": [0.12242669, -4.970478, 2.8673656, 5.450008], "out_sha256_hex": "19f8aa0a618e5513aed4603a7aae2a333c3287368050e76d4aca0f83fb220e78" },
    { "op_index": 1, "op_type": "Add", "out_len": 98304, "out_sample_head": [0.9650015, 0.23414998, 1.539839, 0.30231553], "out_sha256_hex": "7ae2f025c8acf67b8232e694dd43caf3b479eb078366787e4fdc16d651450ad4" },
    { "op_index": 2, "op_type": "MatMul", "out_len": 98304, "out_sample_head": [1.0307425, 0.19207191, 1.5278282, 0.3000223], "out_sha256_hex": "44c28e64441987b8f0516d77f45ad892750b3e5b3916770d3baa5f2289e41bdd" },
    { "op_index": 3, "op_type": "Gelu", "out_len": 393216, "out_sample_head": [0.68828076, -0.0033473556, 1.591219, -0.16837223], "audit_elided": "hash_skipped: len 393216 > max 262144" }
  ]
}

https://bit.ly/4mEV1DY April 18, 2026 at 09:37PM
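The per-op audit idea above — hash each op's output and record it alongside the op metadata — can be sketched in a few lines. Field names follow the example log, but this is an illustrative toy, not Coelanox's Rust implementation:

```python
import hashlib
import json

# Toy sketch of building one per-op audit record: serialize the op's
# output, SHA-256 it, and keep a small sample head for eyeballing,
# mirroring the log schema shown above. Illustrative only.
def audit_op(op_index: int, op_type: str, output: list[float]) -> dict:
    raw = json.dumps(output).encode()
    return {
        "op_index": op_index,
        "op_type": op_type,
        "out_len": len(output),
        "out_sample_head": output[:4],
        "out_sha256_hex": hashlib.sha256(raw).hexdigest(),
    }

rec = audit_op(0, "Add", [0.5, -1.0, 2.0, 3.5, 0.25])
print(rec["out_len"], rec["op_type"])  # 5 Add
```

Because each record's hash is a pure function of the op's output, replaying the same container on the same input must reproduce the same log — which is what makes the trail verifiable after the fact.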

Show HN: Sostactic – polynomial inequalities using sums-of-squares in Lean https://bit.ly/4vAzfFm

Show HN: Sostactic – polynomial inequalities using sums-of-squares in Lean Current support for nonlinear inequalities in Lean is quite limited. This package attempts to solve this. It contains a collection of Lean4 tactics for proving polynomial inequalities via sum-of-squares (SOS) decompositions, powered by a Python backend. You can use it via Python or Lean. These tactics are significantly more powerful than `nlinarith` and `positivity` -- i.e., they can prove inequalities that those tactics cannot. In theory, they can be used to prove any of the following types of statements:

- prove that a polynomial is nonnegative globally
- prove that a polynomial is nonnegative over a semialgebraic set (i.e., defined by a set of polynomial inequalities)
- prove that a semialgebraic set is empty, i.e., that a system of polynomial inequalities is infeasible

The underlying theory is based on the following observation: if a polynomial can be written as a sum of squares of other polynomials, then it is nonnegative everywhere. Theorems proving the existence of such decompositions were one of the landmark achievements of real algebraic geometry in the 20th century, and its connection to semidefinite programming in the 21st century made it a practical computational tool, which is what this software does in the background. https://bit.ly/4cSeiOP April 18, 2026 at 11:36PM
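The observation above can be seen in miniature: x² − 2xy + 2y² = (x − y)² + y², so it is nonnegative everywhere. In Lean 4 with Mathlib, even `nlinarith` closes this once you hand it the square terms, which is in effect supplying the SOS certificate by hand; Sostactic-style tooling searches for such certificates automatically (a hedged illustration, not Sostactic's own syntax):

```lean
import Mathlib

-- SOS in miniature: x^2 - 2*x*y + 2*y^2 = (x - y)^2 + y^2, hence ≥ 0.
-- Passing the squares to nlinarith supplies the SOS certificate by hand;
-- an SOS tactic would find these witnesses automatically via SDP.
example (x y : ℝ) : 0 ≤ x^2 - 2*x*y + 2*y^2 := by
  nlinarith [sq_nonneg (x - y), sq_nonneg y]
```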

Friday, 17 April 2026

Show HN: Mind-OS – First free online AI dependency self‑assessment https://bit.ly/3Qh7L7A

Show HN: Mind-OS – First free online AI dependency self‑assessment https://bit.ly/4epeJkU April 17, 2026 at 10:40PM

Show HN: Ask your AI to start a business for you, resolved.sh https://bit.ly/4mAJc1z

Show HN: Ask your AI to start a business for you, resolved.sh Start with a FREE instant website for your AI on the open internet, then work with it to build a business that sells specialized datasets, files, premium reports, blogs, courses and more. https://bit.ly/4mx3h8Q April 17, 2026 at 04:31AM