Monday, 20 April 2026

Show HN: Mediator.ai – Using Nash bargaining and LLMs to systematize fairness https://bit.ly/3OG1lhI

Show HN: Mediator.ai – Using Nash bargaining and LLMs to systematize fairness Eight years ago, my then-fiancée and I decided to get a prenup, so we hired a local mediator. The meetings were useful, but I felt there was no systematic process to produce a final agreement. So I started to think about this problem, and after a bit of research, I discovered the Nash bargaining solution. Yet if John Nash had solved negotiation in the 1950s, why did it seem like nobody was using it today? The issue was that Nash's solution required that each party to the negotiation provide a "utility function", which could take a set of deal terms and produce a utility number. But even experts have trouble producing such functions for non-trivial negotiations. A few years passed and LLMs appeared, and about a year ago I realized that while LLMs aren’t good at directly producing utility estimates, they are good at doing comparisons, and this can be used to estimate utilities of draft agreements. This is the basis for Mediator.ai, which I soft-launched over the weekend. You're interviewed by an LLM to capture your preferences, and then you invite the other party or parties to do the same. These preferences are then used as the fitness function for a genetic algorithm to find an agreement all parties are likely to accept. An article with more technical detail: https://bit.ly/4ttPUcg https://bit.ly/48NXCph April 20, 2026 at 04:07PM
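The pipeline described above (preference-derived utilities feeding a genetic algorithm that maximizes the Nash product) can be sketched roughly as follows. Everything here is a toy stand-in: the utility functions, the disagreement point, and the mutation scheme are my assumptions for illustration, not Mediator.ai's implementation.

```python
import random

# Toy stand-ins for LLM-derived utilities (hypothetical): each party scores a
# draft deal; the LLM-comparison machinery that estimates these is out of scope.
def u_alice(deal):
    return deal["alice_share"]

def u_bob(deal):
    return 1.0 - deal["alice_share"] + 0.5 * deal["bob_keeps_car"]

DISAGREEMENT = (0.1, 0.1)  # each party's utility if talks break down

def nash_product(deal):
    # Nash bargaining objective: product of each party's gain over disagreement.
    ua = u_alice(deal) - DISAGREEMENT[0]
    ub = u_bob(deal) - DISAGREEMENT[1]
    return ua * ub if ua > 0 and ub > 0 else 0.0

def evolve(generations=200, pop=30):
    # Evolutionary search with the Nash product as fitness: keep the best half
    # each generation, refill the population by mutating the survivors.
    random.seed(0)
    popn = [{"alice_share": random.random(),
             "bob_keeps_car": random.choice([0, 1])} for _ in range(pop)]
    for _ in range(generations):
        popn.sort(key=nash_product, reverse=True)
        survivors = popn[:pop // 2]
        popn = survivors + [
            {"alice_share": min(1.0, max(0.0, p["alice_share"] + random.gauss(0, 0.05))),
             "bob_keeps_car": p["bob_keeps_car"]}
            for p in survivors]
    return max(popn, key=nash_product)
```

With these toy utilities the search settles near the deal that balances both parties' gains over the disagreement point, which is exactly the property the Nash solution promises.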

Show HN: Palmier – bridge your AI agents and your phone https://bit.ly/4d0ATb5

Show HN: Palmier – bridge your AI agents and your phone Hi HN — I built Palmier. Palmier bridges your AI agents and your phone. It does two things: 1. It lets you use your phone to directly control AI agents running on your computer, from anywhere. 2. It gives your AI agents access to your phone, wherever you are — including things like push notifications, SMS, calendar, contacts, sending email, creating calendar events, location, and more. A few details: * Supports 15+ agent CLIs * Supports Linux, Windows, and macOS * What runs on your computer and your phone is fully open source * Works out of the box — no need to set up GCP or API keys just to let agents use phone capabilities * Your phone can act as an agent remote: start tasks, check progress, review results, and respond to requests while away from your desk * Your phone can also act as an agent tool: agents can reach into phone capabilities directly when needed * Optional MCP server: if you want, Palmier exposes an MCP endpoint so your agent can access phone capabilities as native MCP tools. This is optional — you can also use Palmier directly from the phone app/PWA, with those capabilities already built in * Still in alpha stage, with bugs. Opinions and bug reports very welcome The basic idea is that AI agents become much more useful if they can both: * interact with the device you actually carry around all day * be controlled when you are away from your computer Palmier is my attempt at that bridge. It already works with agent CLIs like Claude Code, Gemini CLI, Codex CLI, Cursor CLI, OpenClaw, and others. You can run tasks on demand, on a schedule, or in response to events. Would especially love feedback on: * whether this feels genuinely useful * which phone capabilities are most valuable * which agent CLIs I should support next * what feels broken, awkward, or confusing Site: https://bit.ly/42omNLm Github: * https://bit.ly/48TuQn5 * https://bit.ly/3Qd8CGx Happy to answer questions. 
https://bit.ly/48TuQn5 April 21, 2026 at 03:31AM

Show HN: Mimi in the browser – hear the semantic/acoustic split https://bit.ly/4sJTH3O

Show HN: Mimi in the browser – hear the semantic/acoustic split https://bit.ly/4tvRric April 21, 2026 at 12:33AM

Sunday, 19 April 2026

Show HN: Brygga – A modern, fast, feature-rich IRC client for macOS https://bit.ly/4cTOrpF

Show HN: Brygga – A modern, fast, feature-rich IRC client for macOS Brygga is in early development. The core client works end-to-end (connect, join, send, receive, persist) but many features you'd expect from a mature IRC client are still missing. Repo: https://bit.ly/4mBr8UU April 20, 2026 at 12:11AM

Show HN: TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed https://bit.ly/48LEND9

Show HN: TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed I ported Microsoft's TRELLIS.2 (4B parameter image-to-3D model) to run on Apple Silicon via PyTorch MPS. The original requires CUDA with flash_attn, nvdiffrast, and custom sparse convolution kernels, none of which work on Mac. I replaced the CUDA-specific ops with pure-PyTorch alternatives: a gather-scatter sparse 3D convolution, SDPA attention for sparse transformers, and a Python-based mesh extraction replacing CUDA hashmap operations. Total changes are a few hundred lines across 9 files. Generates ~400K vertex meshes from single photos in about 3.5 minutes on M4 Pro (24GB). Not as fast as an H100 (where it takes seconds), but it works offline with no cloud dependency. https://bit.ly/4cB0fvE https://bit.ly/4cB0fvE April 20, 2026 at 01:07AM
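The gather-scatter trick can be sketched in a few lines of pure Python. This is a toy scalar-feature version of the idea, not the actual PyTorch port: for each kernel offset, look up which active voxels have an active neighbour at that offset, and accumulate the weighted neighbour feature.

```python
def sparse_conv3d(coords, feats, weights, offsets):
    """Gather-scatter sparse 3D convolution over active voxels (illustrative).

    coords:  list of (x, y, z) coordinates of active voxels
    feats:   feats[i] is the scalar feature at coords[i]
    weights: weights[k] is the scalar weight for kernel offset k
    offsets: list of (dx, dy, dz) kernel offsets, e.g. the 27 taps of a 3x3x3 kernel
    """
    index = {c: i for i, c in enumerate(coords)}        # coord -> row lookup
    out = [0.0] * len(coords)
    for w, (dx, dy, dz) in zip(weights, offsets):
        for i, (x, y, z) in enumerate(coords):
            j = index.get((x + dx, y + dy, z + dz))     # gather: neighbour row, if active
            if j is not None:
                out[i] += w * feats[j]                  # scatter: accumulate contribution
    return out
```

Only active voxels are ever touched, which is why this formulation survives without the custom CUDA sparse kernels; the real port does the same gather/scatter with batched tensor indexing instead of a Python loop.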

Show HN: How context engineering works, a runnable reference https://bit.ly/4sU6lxC

Show HN: How context engineering works, a runnable reference I've been presenting at local meetups about Context Engineering, RAG, Skills, etc. I even have a vBrownBag coming up on LinkedIn about this topic, so I figured I would make a basic example that uses Bedrock so I can use it in my talks or vBrownBags. Hopefully it's useful. https://bit.ly/3OSFP9H April 17, 2026 at 07:20PM

Saturday, 18 April 2026

Show HN: Coelanox – auditable inference runtime in Rust (BERT runs today) https://bit.ly/3OMabe0

Show HN: Coelanox – auditable inference runtime in Rust (BERT runs today) PyTorch and ONNX Runtime tell you what came out. They can't tell you what actually ran to get there — which ops executed, in what order, on what inputs. A model gets packaged into a sealed .cnox container. SHA-256 is verified before a single op executes. Inference walks a fixed plan over a minimal opset. Every run can emit a per-op audit log: op type, output tensor hash, output sample — cryptographically linked to the exact container and input that produced it. If something goes wrong in production, you have a trail. Scalar backend today — a reference implementation and permanent fallback when hardware acceleration isn't available. Audit and verification are identical across all backends. SIMD next, GPU after that. Input below is synthetic (all-ones) — the pipeline is identical with real inputs. github.com/Coelanox/CLF Audit example:
{
  "schema": 2,
  "run": {
    "run_id": "59144ede-5a27-4dff-bc25-94abade5b215",
    "started_at_unix_ms": 1776535116721,
    "container_path": "/home/shark/cnox/models/output/bert_base_uncased.cnox",
    "container_sha256_hex": "184c291595536e3ef69b9a6a324ad5ee4d0cef21cc95188e4cfdedb7f1f82740",
    "backend": "scalar"
  },
  "input": {
    "len": 98304,
    "sha256_hex": "54ac99d2a36ac55b4619119ee26c36ec2868552933d27d519e0f9fd128b7319f",
    "sample_head": [1.0, 1.0, 1.0, 1.0]
  },
  "ops": [
    { "op_index": 0, "op_type": "Add", "out_len": 98304,
      "out_sample_head": [0.12242669, -4.970478, 2.8673656, 5.450008],
      "out_sha256_hex": "19f8aa0a618e5513aed4603a7aae2a333c3287368050e76d4aca0f83fb220e78" },
    { "op_index": 1, "op_type": "Add", "out_len": 98304,
      "out_sample_head": [0.9650015, 0.23414998, 1.539839, 0.30231553],
      "out_sha256_hex": "7ae2f025c8acf67b8232e694dd43caf3b479eb078366787e4fdc16d651450ad4" },
    { "op_index": 2, "op_type": "MatMul", "out_len": 98304,
      "out_sample_head": [1.0307425, 0.19207191, 1.5278282, 0.3000223],
      "out_sha256_hex": "44c28e64441987b8f0516d77f45ad892750b3e5b3916770d3baa5f2289e41bdd" },
    { "op_index": 3, "op_type": "Gelu", "out_len": 393216,
      "out_sample_head": [0.68828076, -0.0033473556, 1.591219, -0.16837223],
      "audit_elided": "hash_skipped: len 393216 > max 262144" }
  ]
}
https://bit.ly/4mEV1DY April 18, 2026 at 09:37PM
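The "cryptographically linked" trail can be illustrated with a toy hash chain. This is a hypothetical sketch of the general idea, not Coelanox's actual linking scheme: each record hashes the op's raw output, then folds the previous link into a new one, so tampering with any earlier record invalidates every later link.

```python
import hashlib

def audit_op(op_index, op_type, out_bytes, prev_link):
    # Hash the op's raw output bytes, then chain it to the previous link so
    # that altering any earlier record breaks verification of all later ones.
    out_hash = hashlib.sha256(out_bytes).hexdigest()
    record = {
        "op_index": op_index,
        "op_type": op_type,
        "out_sha256_hex": out_hash,
    }
    link = hashlib.sha256((prev_link + out_hash).encode()).hexdigest()
    return record, link
```

Seeding the chain with the container hash would tie every per-op record to the exact model that produced it, which matches the guarantee the audit log describes.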

Show HN: Sostactic – polynomial inequalities using sums-of-squares in Lean https://bit.ly/4vAzfFm

Show HN: Sostactic – polynomial inequalities using sums-of-squares in Lean Current support for nonlinear inequalities in Lean is quite limited. This package aims to fill that gap. It contains a collection of Lean 4 tactics for proving polynomial inequalities via sum-of-squares (SOS) decompositions, powered by a Python backend. You can use it via Python or Lean. These tactics are significantly more powerful than `nlinarith` and `positivity` -- i.e., they can prove inequalities that those tactics cannot. In theory, they can be used to prove any of the following types of statements - prove that a polynomial is nonnegative globally - prove that a polynomial is nonnegative over a semialgebraic set (i.e., one defined by a set of polynomial inequalities) - prove that a semialgebraic set is empty, i.e., that a system of polynomial inequalities is infeasible The underlying theory is based on the following observation: if a polynomial can be written as a sum of squares of other polynomials, then it is nonnegative everywhere. Theorems proving the existence of such decompositions were one of the landmark achievements of real algebraic geometry in the 20th century; their connection to semidefinite programming in the 21st century made them a practical computational tool, and that search for a decomposition is what this software does in the background. https://bit.ly/4cSeiOP April 18, 2026 at 11:36PM
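A one-line instance of the observation above (my toy example, not taken from the package):

```latex
% (x-1)^2 + x^2 expands to 2x^2 - 2x + 1, so the decomposition certifies nonnegativity.
2x^2 - 2x + 1 \;=\; x^2 + (x - 1)^2 \;\ge\; 0 \qquad \text{for all } x \in \mathbb{R}.
```

Finding such a decomposition for a given polynomial is the semidefinite-programming step that the Python backend automates; the resulting identity is then easy for Lean to check by algebra alone.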

Friday, 17 April 2026

Show HN: Mind-OS – First free online AI dependency self‑assessment https://bit.ly/3Qh7L7A

Show HN: Mind-OS – First free online AI dependency self‑assessment https://bit.ly/4epeJkU April 17, 2026 at 10:40PM

Show HN: Ask your AI to start a business for you, resolved.sh https://bit.ly/4mAJc1z

Show HN: Ask your AI to start a business for you, resolved.sh Start with a FREE instant website for your AI on the open internet, then work with it to build a business that sells specialized datasets, files, premium reports, blogs, courses and more. https://bit.ly/4mx3h8Q April 17, 2026 at 04:31AM

Thursday, 16 April 2026

Show HN: Free API and widget to look up US representatives https://bit.ly/4ciVtEs

Show HN: Free API and widget to look up US representatives https://bit.ly/4mAHLQt April 17, 2026 at 01:45AM

Show HN: Spice simulation → oscilloscope → verification with Claude Code https://bit.ly/488OVFT

Show HN: Spice simulation → oscilloscope → verification with Claude Code I built MCP servers for my oscilloscope and SPICE simulator so Claude Code can close the loop between simulation and real hardware. https://bit.ly/4cuNvqx April 17, 2026 at 01:37AM

Wednesday, 15 April 2026

Show HN: I built a Wikipedia based AI deduction game https://bit.ly/4vtN4pb

Show HN: I built a Wikipedia based AI deduction game I haven't seen anything like this, so I decided to build it in a weekend. How it works: You see a bunch of things pulled from Wikipedia displayed on cards. You ask yes-or-no questions to figure out which card is the secret article. The AI model has access to the image, the wiki text, and its own knowledge to answer your question. Happy to have my credits burned for the day, but I'll probably have to make this paid at some point, so enjoy. I found it's not easy to get cheap + fast + good responses, but the tech is getting there. Most of the prompts are running through Groq infra or hitting a cache keyed by a normalization of the prompt. https://bit.ly/4muibN6 April 16, 2026 at 01:13AM
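A cache keyed by a normalization of the prompt might look like the sketch below. The normalization rules (lowercase, strip punctuation, collapse whitespace) are my assumption for illustration, not the game's actual scheme.

```python
import hashlib
import re

def cache_key(prompt: str) -> str:
    # Hypothetical normalization: lowercase, drop punctuation, collapse
    # whitespace, so near-duplicate questions share one cache entry.
    norm = re.sub(r"[^a-z0-9 ]", "", prompt.lower())
    norm = " ".join(norm.split())
    return hashlib.sha256(norm.encode()).hexdigest()

cache = {}

def answer(prompt, model_fn):
    # Only call the (slow, paid) model on a cache miss.
    key = cache_key(prompt)
    if key not in cache:
        cache[key] = model_fn(prompt)
    return cache[key]
```

With this, "Is it an animal?" and "is it an ANIMAL" resolve to the same key, so the second player to ask a common question gets an instant, free answer.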

Tuesday, 14 April 2026

Show HN: StockFit API – structured SEC EDGAR data with a free tier https://bit.ly/3O7Ljx7

Show HN: StockFit API – structured SEC EDGAR data with a free tier https://bit.ly/4ct3e9A April 15, 2026 at 02:53AM

Show HN: Keynot – Kill PowerPoint with HTML https://bit.ly/4cm4on7

Show HN: Keynot – Kill PowerPoint with HTML https://bit.ly/4tPn0Db April 15, 2026 at 03:05AM

Show HN: OpenRig – agent harness that runs Claude Code and Codex as one system https://bit.ly/4812UgQ

Show HN: OpenRig – agent harness that runs Claude Code and Codex as one system I've been running Claude Code and Codex together every day. At some point I figured out you can use tmux to let them talk to each other, so I started doing that. Once they could coordinate, I kept adding more agents. Before long I had a whole team working together. But any time I rebooted my machine, the whole thing was gone. Not just the tabs. The way they were wired up, what each one was doing, all of it. Nothing I'd found treats your agent setup as a topology, as something with a shape you can save and bring back. So I built OpenRig, a multi-agent harness. A harness wraps a model. A "rig" wraps your harnesses. You describe your team in a YAML file, boot it with one command, and get a live topology you can see, click into, save, and bring back by name. Claude Code and Codex run together in the same rig. tmux is still doing the talking underneath. I didn't try to add a fancier messaging layer on top. The project is still early. My own setup uses the config layer extensively (YAML, Markdown, JSON) for prototyping functionality that outpaces what's shipped in the repo and npm package. But the core primitives are there and the happy path in the README works. It's built to be driven by your agent, not by you typing commands by hand. README: https://bit.ly/4sy2c1O Demo: https://youtu.be/vndsXRBPGio https://bit.ly/4sy2c1O April 15, 2026 at 12:46AM
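For readers wondering what "describe your team in a YAML file" might look like, here is a hypothetical sketch. The field names (`agents`, `harness`, `reports_to`) are my invention for illustration, not OpenRig's actual schema; see the README for the real format.

```yaml
# Hypothetical rig file -- field names are illustrative, not OpenRig's schema.
name: review-team
agents:
  - id: planner
    harness: claude-code        # which CLI harness runs this agent
  - id: implementer
    harness: codex
    reports_to: planner         # topology edge: who this agent talks to
```

The point of a file like this is that the topology (which agents exist and how they are wired) survives a reboot and can be brought back by name.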

Monday, 13 April 2026

Show HN: Mcptube – Karpathy's LLM Wiki idea applied to YouTube videos https://bit.ly/4cbiR6A

Show HN: Mcptube – Karpathy's LLM Wiki idea applied to YouTube videos I watch a lot of Stanford/Berkeley lectures and YouTube content on AI agents, MCP, and security. Got tired of scrubbing through hour-long videos to find one explanation. Built v1 of mcptube a few months ago. It performs transcript search and implements Q&A as an MCP server. It got traction (34 stars, my first open-source PR, some notable stargazers like the CEO of Trail of Bits). But v1 re-searched raw chunks from scratch on every query. So I rebuilt it. v2 (mcptube-vision) follows Karpathy's LLM Wiki pattern. At ingest time, it extracts transcripts, detects scene changes with ffmpeg, describes key frames via a vision model, and writes structured wiki pages. Knowledge compounds across videos rather than being re-discovered. FTS5 + a two-stage agent (narrow then reason) for retrieval. MCPTube works both as a CLI (BYOK) and as an MCP server. I tested MCPTube with Claude Code, Claude Desktop, VS Code Copilot, Cursor, and others. Zero API key needed server-side. Coming soon: I'm also building a SaaS platform that supports playlist ingestion, team wikis, and more. Early-access signup: https://bit.ly/4c9lC8r Happy to discuss architecture tradeoffs — FTS5 vs vectors, file-based wiki vs DB, scene-change vs fixed-interval sampling. Give it a try via `pip install mcptube`. Also, please star the repo if you enjoy it ( https://bit.ly/4vthsjo ) https://bit.ly/4vthsjo April 13, 2026 at 05:34PM
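The FTS5 retrieval layer can be sketched with Python's stdlib sqlite3, which ships with the FTS5 extension in most builds. The table and column names below are illustrative, not mcptube's actual schema.

```python
import sqlite3

# Toy wiki-page index: FTS5 gives tokenized full-text search over the pages.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE pages USING fts5(title, body)")
db.executemany("INSERT INTO pages VALUES (?, ?)", [
    ("MCP security", "threat model for model context protocol servers"),
    ("Agent memory", "how agents persist context across sessions"),
])
# Full-text query; FTS5's built-in rank orders results by BM25 relevance.
rows = db.execute(
    "SELECT title FROM pages WHERE pages MATCH ? ORDER BY rank",
    ("threat",),
).fetchall()
```

This is the "narrow" half of the two-stage agent: a cheap lexical pass shrinks the corpus to a few candidate pages before the reasoning step reads them.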

Show HN: Lint-AI by RooAGI, a Rust CLI for AI Doc Retrieval https://bit.ly/4tMnxpr

Show HN: Lint-AI by RooAGI, a Rust CLI for AI Doc Retrieval We’re RooAGI. We built Lint-AI, a Rust CLI for indexing and retrieving evidence from large AI-generated corpora. As AI systems create more task notes, traces, and reports, storing documents isn’t the only challenge. The real problem is finding the right evidence when the same idea appears in multiple places, often with different wording. Lint-AI is our current retrieval layer for that problem. What Lint-AI does currently: * Indexes large documentation corpora. * Extracts lightweight entities and important terms. * Supports hybrid retrieval using lexical, entity, term, and graph-aware scoring. * Returns chunk-level evidence with --llm-context for a downstream reviewer/LLM. * Exports doc, chunk, and entity graphs. Example: * ./lint-ai /path/to/docs --llm-context "where docs describe the same concept differently" --result-count 8 --simplified That command does not decide whether documents contradict each other. It retrieves the most relevant chunks so that a reviewer layer can compare them. Repo: https://bit.ly/48N8l3d We’d appreciate feedback on: * Retrieval/ranking design for documentation corpora. * How to evaluate evidence retrieval quality for alignment workflows. * What kinds of entity/relationship modeling would actually be useful here? Visit: https://bit.ly/3UklysB https://bit.ly/48N8l3d April 13, 2026 at 08:11PM
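Hybrid retrieval of the kind listed above can be sketched as a weighted blend of signals. The formula and weights here are my illustration using two of the listed signals (lexical term overlap and entity overlap), not Lint-AI's actual scoring.

```python
# Hypothetical blend of lexical and entity-overlap signals; the weights and
# the overlap formula are illustrative, not Lint-AI's implementation.
def hybrid_score(query_terms, query_entities, chunk, w_lex=0.7, w_ent=0.3):
    lex = len(query_terms & chunk["terms"]) / max(len(query_terms), 1)
    ent = len(query_entities & chunk["entities"]) / max(len(query_entities), 1)
    return w_lex * lex + w_ent * ent

def rank(query_terms, query_entities, chunks):
    # Order candidate chunks by the blended score, best first.
    return sorted(chunks,
                  key=lambda c: hybrid_score(query_terms, query_entities, c),
                  reverse=True)
```

Blending signals like this lets a chunk that shares no exact wording with the query still surface if it mentions the same entities, which is the "same idea, different wording" problem the post describes.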

Sunday, 12 April 2026

Show HN: Bad Apple (Oscilloscope-Like) – one stroke per frame https://bit.ly/4sstEOA

Show HN: Bad Apple (Oscilloscope-Like) – one stroke per frame https://bit.ly/4dDKBSx April 13, 2026 at 06:01AM

Show HN: Local LLM on a Pi 4 controlling hardware via tool calling https://bit.ly/4cn6vHx

Show HN: Local LLM on a Pi 4 controlling hardware via tool calling https://bit.ly/3NYmxPZ April 13, 2026 at 12:14AM