Sunday, 16 November 2025

Show HN: My side project, a free email template builder for CRM or any website https://bit.ly/4rb8dSX

Show HN: My side project, a free email template builder for CRM or any website Hi everyone, I built an embeddable email template builder plugin for CRMs, marketplaces, or any website. Free and paid plans are included. Add a complete email builder to any SaaS app using a single script. What's included:
- Easy integration
- AI content & template generation
- Add external image libraries
- Add merge tags
- Display conditions
- Custom blocks
- Choose your storage server
- Dedicated support during integration
Check it out, and please let me know if you have any feedback. TIA https://bit.ly/48gowpP November 16, 2025 at 11:26PM

Saturday, 15 November 2025

Show HN: SelenAI – Terminal AI pair-programmer with sandboxed Lua tools https://bit.ly/49Fxq1B

Show HN: SelenAI – Terminal AI pair-programmer with sandboxed Lua tools I’ve been building a terminal-first AI pair-programmer that tries to make every tool call transparent and auditable. It’s a Rust app with a Ratatui UI split into three panes (chat, tool activity, input). The agent loop streams LLM output, queues write-capable Lua scripts for manual approval, and records every run as JSONL logs under .selenai/logs. Key bits:
- Single tool, real guardrails – the LLM only gets a sandboxed Lua VM with explicit helpers (rust.read_file, rust.list_dir, rust.http_request, gated rust.write_file, etc.). Writes stay disabled unless you opt in and then approve each script via /tool run.
- Transparent workflow – the chat pane shows the conversation, the tool pane shows every invocation and its result, and streaming keeps everything responsive. CTRL shortcuts for scrolling, clearing logs, copy mode, etc., so it feels like a normal TUI app.
- Pluggable LLMs – there’s a stub client for offline hacking and an OpenAI streaming client behind a trait. Adding more providers should just be another module under src/llm/.
- Session history – every exit writes a timestamped log directory with the full transcript, tool log, and metadata about whether Lua writes were allowed. Makes demoing, debugging, and sharing repros way easier.
- Lua ergonomics – plain io.* APIs and a tiny require("rust") module, so the model can write idiomatic scripts without shelling out. There’s even a /lua command if you want to run a snippet manually.
Repo (MIT): https://bit.ly/47YwPp7
Would love feedback on:
- Other providers or local models you’d like to see behind the LLM trait.
- Additional sandbox helpers that feel safe but unlock useful workflows.
- Ideas for replaying those saved sessions (web viewer? CLI diff?).
If you try it: cargo run, type, and you’ll see the ASCII banner and chat panes. Hit me with issues or PRs; there’s a CONTRIBUTING.md in the works and plenty of roadmap items (log viewer, theming, Lua helper packs) if you’re interested. https://bit.ly/47YwPp7 November 16, 2025 at 12:58AM
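For readers curious what the approval gate looks like in practice, here is a minimal Python sketch of the pattern (the real implementation is Rust + Lua, and every name below is invented): write-capable scripts wait in a queue until the user approves them, and every run is appended to a JSONL log.

```python
# Hypothetical sketch of an approval-gated tool loop; not SelenAI's code.
import json
import time

LOG_PATH = "session.tool_log.jsonl"
pending: list[str] = []  # proposed write-capable scripts awaiting approval

def execute_sandboxed(script: str) -> str:
    """Stand-in for handing the script to a sandboxed Lua VM."""
    return f"ran {len(script)} bytes"

def log_run(script: str, approved: bool, result: str) -> None:
    # Append one JSON object per tool run, memcached-log style.
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps({"ts": time.time(), "script": script,
                              "approved": approved, "result": result}) + "\n")

def propose(script: str, writes: bool) -> None:
    """Read-only scripts run immediately; writes are queued for approval."""
    if writes:
        pending.append(script)
    else:
        log_run(script, False, execute_sandboxed(script))

def approve_next() -> None:
    """What a command like /tool run might do: run one approved script."""
    if pending:
        script = pending.pop(0)
        log_run(script, True, execute_sandboxed(script))

propose("return rust.read_file('README.md')", writes=False)
propose("rust.write_file('notes.txt', 'hello')", writes=True)
approve_next()
```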

Show HN: High-Performance .NET Bindings for the Vello Sparse Strips CPU Renderer https://bit.ly/49nSoBM

Show HN: High-Performance .NET Bindings for the Vello Sparse Strips CPU Renderer https://bit.ly/49iiiXF November 11, 2025 at 11:09AM

Friday, 14 November 2025

Thursday, 13 November 2025

Show HN: Treasury – The personal finance app built for you (public beta) https://bit.ly/3JCQrai

Show HN: Treasury – The personal finance app built for you (public beta) Hi HN! I'm excited to share Treasury ( https://bit.ly/4i04qDv ), a personal finance app I've been building. We just opened up our public beta and would love your feedback. Currently, Treasury has a set of core building blocks that let you create financial setups as simple or as complex as you want. You can track your net worth, analyze spending, spot recurring transactions, and build budgets that actually match how you think about money. The app is live at https://bit.ly/4i04qDv . Sign up and let me know what you think. https://bit.ly/4i04qDv November 14, 2025 at 04:57AM

Show HN: TranscribeAndSplit – AI that transcribes audio and splits it by meaning https://bit.ly/3K3UWe1

Show HN: TranscribeAndSplit – AI that transcribes audio and splits it by meaning Hi HN, I built a small tool to solve a recurring pain when editing podcasts: scrubbing back and forth just to find where a sentence or idea actually ends. How it works:
- Upload an audio file (MP3/WAV/M4A)
- AI transcribes the audio and suggests cut points at sentence or paragraph boundaries
- Automatically split and export segments, or adjust them manually if needed
Website: https://bit.ly/4nYxMDD
This came out of my own frustration with editing long recordings and manually hunting for the right cut points. I wanted something that actually understands the content before splitting it. I’d love feedback, especially on edge cases like interviews, lectures, or multi-speaker audio. What features would make this more useful? November 14, 2025 at 05:37AM
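For the curious, here is a rough sketch of the transcribe-then-split flow in Python, assuming openai-whisper for transcription and pydub for cutting. This is not the product's actual pipeline, just one way to cut at sentence boundaries recovered from a transcript.

```python
# Sketch: transcribe, find sentence-final segments, cut the audio there.
import whisper
from pydub import AudioSegment

model = whisper.load_model("base")
result = model.transcribe("episode.mp3")  # returns {"segments": [...], ...}

audio = AudioSegment.from_file("episode.mp3")
cut_points_ms = [0]
for seg in result["segments"]:
    # Treat segments ending in terminal punctuation as sentence boundaries.
    if seg["text"].rstrip().endswith((".", "?", "!")):
        cut_points_ms.append(int(seg["end"] * 1000))
cut_points_ms.append(len(audio))

for i, (start, end) in enumerate(zip(cut_points_ms, cut_points_ms[1:])):
    if end > start:
        audio[start:end].export(f"segment_{i:03d}.mp3", format="mp3")
```

Real content-aware splitting would need more than punctuation (speaker turns, topic shifts), which is presumably where the "by meaning" part of the tool comes in.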

Show HN: I'm a CEO Coding with AI – Here's the Air Quality iOS App I Built https://bit.ly/43pj0Pa

Show HN: I'm a CEO Coding with AI – Here's the Air Quality iOS App I Built I’m the CEO of AirGradient, where we build open-source air-quality monitors. Two months ago I decided to build our first native iOS app myself. I’ve been coding on the side for ~15 years, but had never touched Swift or SwiftUI. Still, I went from empty repo to App Store approval in exactly 60 days, working on it only on the side. The app itself is a global PM2.5 map with detail views, charts, and integration with our open-source sensors: straightforward, but fully native Swift, and now live on both iOS and Android (native Kotlin version).
The interesting part for me was actually not so much the result as the process I settled on. Agentic coding let me work in parallel with the AI: while it generated code, I could switch to CEO work, replying to emails, commenting on tickets, working on proposals, and thinking through strategic planning. The context switching wasn’t always easy, but having the coding agent on one virtual desktop and company work on another made the rhythm surprisingly smooth. It felt less like traditional "coding time" and more like supervising a very fast (junior) developer who never pauses. At times I felt superhuman when the AI got a complex feature implemented correctly on the first shot (and obviously there were a few times when it was extremely frustrating).
What helped tremendously was that I asked the AI to draft a full spec based on our existing web app and fed it screenshots and Figma mocks. Sometimes these specs were a few pages long for a simple feature, including API, data models, localisations, UI mockups, and error handling. It produced consistent SwiftUI code far faster than any normal design-to-dev cycle. I still had to debug, make architectural decisions, and understand the tricky parts, but the heavy lifting moved to the tools.
This experience changed my view on a long-standing question: should CEOs code? The historical answer was usually "not really." But with agentic coding, I believe the calculus shifts. Understanding what AI can and can’t do, how engineering workflows will change, and how non-engineers can now contribute directly is becoming strategically important. You only get that understanding by building something end-to-end, and I believe it's important that CEOs experience this themselves (the positives and the frustrations).
The bigger shift for me was realizing how this changes the entire software workflow. Designers can hand over mocks that agents turn directly into working components. PMs can produce detailed specs that generate real code instead of just guiding it. Even non-engineering teams can create small internal tools without blocking developers. Engineers don’t disappear; they move upward into architecture, debugging, constraints, and system-level reasoning. But for leadership to make good decisions about this future, it’s not enough to read about it. You have to feel the edges yourself: where the agents excel, where they fall apart, and what parts still demand deep human judgment.
So yes, I now think CEOs should code. Not permanently, only a few hours a week. Not to ship production code forever, but to understand the new reality their teams will be working in, and how to support them in this new work environment. I’m sharing this partly to hear how others on HN approach the question of whether CEOs or technical leaders should still code. Has getting hands-on with AI tools changed your perspective on leadership, team structure, or strategy? Happy to answer questions and compare notes.
Here are the apps:
Apple App Store: https://apple.co/3JYCGma
Android Play Store: https://bit.ly/3JTNYIo...
(Keep in mind this is version 1, so lots of improvements will come in the coming weeks and months) November 14, 2025 at 01:23AM

Show HN: V0 for Svelte (svelte0), a Svelte UI generator https://bit.ly/3JV4Jmv

Show HN: V0 for Svelte (svelte0), a Svelte UI generator https://bit.ly/3JV4JD1 November 13, 2025 at 11:44PM

Wednesday, 12 November 2025

Show HN: I built a platform where audiences fund debates between public thinkers https://bit.ly/4hXC4Kk

Show HN: I built a platform where audiences fund debates between public thinkers Hey HN, I built Logosive because I want to see certain debates between my favorite thinkers (especially in health/wellness, tech, and public policy), but there's no way for regular people to make these happen. With Logosive, you propose a debate topic and debaters. We then handle outreach, ticket sales, and logistics. After the debate, ticket revenue is split among everyone involved, including the person who proposed the debate, the debaters, and the host. Logosive is built with Django, htmx, and Alpine.js. Claude generates the debate launch pages, including suggesting debaters or debate topics, all from a single prompt (but the debates happen between real debaters). I’m now looking for help launching new debates, so if you have any topics or people you really want to see debate, please submit them at https://bit.ly/4hWPiHh . Thanks! https://bit.ly/4hWPiHh November 12, 2025 at 09:35PM

Show HN: Invisitris, a Tetris-like game where the placed pieces become invisible https://bit.ly/43n0gQk

Show HN: Invisitris, a Tetris-like game where the placed pieces become invisible Hi Hacker News, I built a little Tetris-like game called Invisitris, where every placed piece except the most recent one becomes invisible. The board becomes fully visible for a few seconds when you clear rows, and also when your stack gets dangerously high. Try it and let me know what you think :) https://bit.ly/4nQG2FG November 13, 2025 at 12:39AM
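A tiny Python sketch of how the visibility rule described above might work; the thresholds and function names are guesses, not the game's actual code.

```python
# A guess at the visibility rule: only the most recently placed piece is
# drawn; clearing rows or letting the stack get dangerously high reveals
# the whole board for a few seconds.
import time

REVEAL_SECONDS = 3.0   # "a few seconds" - exact value is a guess
DANGER_ROW = 4         # rows from the top that count as "dangerously high"

reveal_until = 0.0

def on_piece_locked(cleared_rows: int, stack_top_row: int) -> None:
    """Called after a piece lands; may trigger a temporary full reveal."""
    global reveal_until
    if cleared_rows > 0 or stack_top_row <= DANGER_ROW:
        reveal_until = time.monotonic() + REVEAL_SECONDS

def is_cell_visible(belongs_to_last_piece: bool) -> bool:
    """Everything except the last piece is hidden outside reveal windows."""
    return belongs_to_last_piece or time.monotonic() < reveal_until
```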

Show HN: KV Marketplace – share LLM attention caches across GPUs like memcached https://bit.ly/4i0oANS

Show HN: KV Marketplace – share LLM attention caches across GPUs like memcached https://bit.ly/47NqUVb November 12, 2025 at 10:22PM

Tuesday, 11 November 2025

Show HN: AI is a DJ https://bit.ly/47MWXTK

Show HN: AI is a DJ Set the mood and genre, sit back, and let the AI mix music for you! https://bit.ly/49aKVG8 November 12, 2025 at 06:01AM

Show HN: Project AELLA – Open LLMs for structuring 100M research papers https://bit.ly/43qBgHX

Show HN: Project AELLA – Open LLMs for structuring 100M research papers We're releasing Project AELLA, an open-science initiative to make scientific knowledge more accessible through AI-generated structured summaries of research papers.
Blog: https://bit.ly/3WMXjEW
Visualizer: https://bit.ly/3WQtL9y
Models: https://bit.ly/4p77d0d , https://bit.ly/3LDNoPy
Highlights:
- Released 100K research paper summaries in a standardized JSON format with an interactive visualization.
- Fine-tuned open models (Qwen 3 14B & Nemotron 12B) that match GPT-5/Claude 4.5 performance at 98% lower cost (~$100K vs $5M to process 100M papers).
- Built on distributed "idle compute" infrastructure - think SETI@Home for LLM workloads.
Goal: process ~100M papers total, then link to OpenAlex metadata and convert to copyright-respecting "Knowledge Units". The models are open, the evaluation framework is transparent, and we're making the summaries publicly available. This builds on Project Alexandria's legal/technical foundation for extracting factual knowledge while respecting copyright. The technical deep-dive in the post covers our training pipeline, dual evaluation methods (LLM-as-judge + QA dataset), and an economic comparison showing a 50x cost reduction vs closed models. Happy to answer questions about the training approach, evaluation methodology, or infrastructure! https://bit.ly/43qWIwj November 11, 2025 at 07:38PM
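The post doesn't spell out the JSON schema itself; purely as an illustration, a standardized summary record plus a minimal validity check might look like the following sketch, where every field name is an assumption rather than AELLA's actual schema.

```python
# Illustrative only: a structured paper-summary record and a basic check.
import json

example_summary = {
    "paper_id": "openalex:W0000000000",  # hypothetical OpenAlex link-up
    "title": "Example paper title",
    "research_questions": ["..."],
    "methods": ["..."],
    "key_findings": ["..."],
    "limitations": ["..."],
}

REQUIRED_FIELDS = {"paper_id", "title", "key_findings"}

def is_valid(record: dict) -> bool:
    """A summary is usable only if the core fields are present."""
    return REQUIRED_FIELDS <= record.keys()

print(is_valid(example_summary))          # True
print(json.dumps(example_summary)[:60])   # start of one JSONL-style line
```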

Show HN: mDNS name resolution for Docker container names https://bit.ly/43t5Xw6

Show HN: mDNS name resolution for Docker container names I always wanted this: an easy way to resolve Docker containers by name - e.g., to reach web servers running in Docker containers on my dev machine. Of course, I could export ports from all these containers, try to keep them out of each other's hair on the host, and then use http://localhost:PORT. But why go through all that trouble? These containers already expose their respective ports on their own IPs (e.g., 172.24.0.5:8123), so all I need is a convenient way to find them. mdns-docker lets you do, e.g., "ping my-container.docker.local", and it will find the IP of a running container whose name fuzzily matches the hostname. It works by running a local mDNS service that listens for *.docker.local requests, finds a running container whose name contains the requested label (here: "my-container"), gets that container's local IP address, and responds to the mDNS query with that IP. Example: start a ClickHouse service `docker run --rm --name myclicky clickhouse:25.7` and then open https://bit.ly/4pb7Tln to see the built-in dashboard - no port mapping required! If you haven't played with mDNS yet, you've been missing a lot of fun. It's easy to use, and the possibilities for making your life easier are endless. It's also what Spotify and Chromecast use for local device discovery. https://bit.ly/3WNyDMD November 11, 2025 at 11:27PM
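To make the mechanism concrete, here is a minimal Python sketch of the responder idea described above, using the docker SDK and dnslib; the actual mdns-docker implementation may well work differently (error handling is omitted).

```python
# Sketch: answer mDNS A queries for *.docker.local with the IP of a
# fuzzily matching container. Requires `pip install docker dnslib`.
import socket
import struct
import docker
from dnslib import DNSRecord, RR, QTYPE, A

MDNS_GRP, MDNS_PORT = "224.0.0.251", 5353

def find_container_ip(name_fragment: str):
    """Return the IP of the first running container whose name matches."""
    for c in docker.from_env().containers.list():
        if name_fragment in c.name:
            nets = c.attrs["NetworkSettings"]["Networks"]
            return next(iter(nets.values()))["IPAddress"]
    return None

# Join the mDNS multicast group and listen for queries.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MDNS_PORT))
mreq = struct.pack("4sl", socket.inet_aton(MDNS_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(4096)
    query = DNSRecord.parse(data)
    qname = str(query.q.qname).rstrip(".")
    if qname.endswith(".docker.local"):
        ip = find_container_ip(qname.removesuffix(".docker.local"))
        if ip:
            reply = query.reply()
            reply.add_answer(RR(query.q.qname, QTYPE.A, rdata=A(ip), ttl=60))
            sock.sendto(reply.pack(), (MDNS_GRP, MDNS_PORT))
```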

Monday, 10 November 2025

Show HN: Privacy Experiment – Rewriting HTTPS, TLS, and TCP/IP Packet Headers https://bit.ly/3JDb7Pn

Show HN: Privacy Experiment – Rewriting HTTPS, TLS, and TCP/IP Packet Headers The README: https://bit.ly/3JWsNpd Or the LP: https://404-nf/carrd.co Or read on...
In a small enough group of people, your TLS handshake can be enough to identify you as a unique client. Around six months ago, I began learning about client fingerprinting. I had understood that it was getting better and more precise, but did not realize the ease with which a server could fingerprint a user - after all, you're just giving up all the cookies! Fingerprinting, for the modern internet experience, has become almost a necessity. It was concerning to me that servers began using the very features that we rely on for security to identify and fingerprint clients:
- JS - collection of your JS property values
- Font - collection of your downloaded fonts
- JA3/4 - TLS cipher-suite FP
- JA4T - TCP packet header FP (TTL, MSS, window size/scale, TSval/ecr, etc.)
- HTTPS - HTTPS header FP (UA, sec-ch, etc.)
- Much more...
So, I built a tool to give me control of my fingerprint at multiple layers:
- A localhost mitmproxy handles HTTPS headers and TLS cipher-suite negotiation
- eBPF + Linux TC rewrites TCP packet headers (TTL, window size, etc.)
- Coordinated spoofing ensures all layers present a consistent, chosen fingerprint (not yet cohesive)
Current status: this is a proof of concept that successfully spoofs JA3/JA4 (TLS), JA4T (TCP), and HTTP fingerprints. It's rough around the edges and requires some Linux knowledge to set up.
When there are so many telemetry points collected from a single SYN/ACK interaction, the precision with which a server can identify a unique client becomes concerning. Certain individuals and organizations began to notice this and produced resources to help people better understand the amount of data they're leaving behind on the internet: amiunique.org, browserleaks.com, and coveryourtracks.eff.org, to name a few. This is the bare bones, but it's a fight against server-side passive surveillance. Tools like nmap and p0f have been exploiting this for the last two decades, and almost no tooling has been developed to fight it - with the viable options (Burp Suite) not being marketed for privacy.
Even beyond this, with all values comprehensively and cohesively spoofed, SSO tokens can still follow us around and reveal our identity. When the SDKs of the largest companies like Google are so deeply ingrained into development flows, this is a no-go. So, this project will evolve; I'm looking to add some sort of headless/headful swarm that pollutes your SSO history - legal hurdles be damned.
I haven't shared this in a substantial way, and really just finished polishing up a prerelease, barely working version about a week ago. I am not a computer science or cybersecurity engineer, just someone with a passion for privacy who is okay with computers. This is a proof of concept for a larger tool. Due to the nature of TCP/IP packet headers, if this software were to run on a distributed mesh network, privacy could be distributed on a mixnet like the one they're trying to achieve at Nym Technologies. All of the pieces are there, they just haven't been put together in the right way. I think I can almost see the whole puzzle... November 11, 2025 at 02:27AM
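For context on one of the fingerprints mentioned above: JA3 is essentially an MD5 hash over comma-separated ClientHello fields, which is why presenting a chosen, consistent set of values across layers matters. A small sketch follows; the ClientHello values in it are made up.

```python
# JA3 in a nutshell: MD5 over five comma-separated ClientHello fields,
# each a dash-joined list of decimal values. (Real JA3 implementations
# also strip GREASE values before hashing.)
import hashlib

def ja3_hash(version, ciphers, extensions, curves, point_formats):
    ja3_string = ",".join([
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ])
    return hashlib.md5(ja3_string.encode()).hexdigest()

# A fabricated ClientHello: TLS 1.2 (771) with a few ciphers/extensions.
print(ja3_hash(771, [4865, 4866, 49195], [0, 10, 11, 13], [29, 23, 24], [0]))
```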

Show HN: Nano Banana 2 – AI image generation SaaS powered by Google Gemini https://bit.ly/4icIIN3

Show HN: Nano Banana 2 – AI image generation SaaS powered by Google Gemini Hi HN! I built Nano Banana 2, a SaaS platform for AI-powered image generation using Google's Gemini 2.5 Flash. You can try it for free. Would love your feedback! https://bit.ly/4hWCfWf November 11, 2025 at 05:42AM

Show HN: A Free Instagram Story Viewer That Lets You Watch Anonymously https://bit.ly/47OebjL

Show HN: A Free Instagram Story Viewer That Lets You Watch Anonymously https://bit.ly/43TWLAU November 11, 2025 at 02:03AM

Sunday, 9 November 2025

Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer https://bit.ly/4oZgwiu

Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer SQL-first analytic IDE, similar to Redash/Metabase. It aims to solve reuse/composability at the code layer with a modified syntax, Trilogy, that includes a semantic layer directly in the SQL-like language. Status: experiment; feedback and contributions welcome! Built to solve 3 problems I have with SQL as my primary iterative analysis language:
1. Adjusting queries/analysis takes a lot of boilerplate. Solved with queries that operate on the semantic layer, not tables; this also eliminates the need for CTEs.
2. Sources of truth change all the time, and I hate updating reports to reference new tables. Also solved by the semantic layer, since data bindings can be updated without changing dashboards or queries.
3. Getting from SQL to visuals is too much work in many tools; make it as streamlined as possible. Surprise - solved with the semantic layer; add in more expressive typing to get better defaults; also use it to wire up automatic drilldowns/cross-filtering.
Supports: BigQuery, DuckDB, Snowflake.
Links:
[1] https://bit.ly/4oIcV8M (language info)
Git links:
[Frontend] https://bit.ly/4p34dSx
[Language] https://bit.ly/4ot4xdf
Previously: https://bit.ly/3He2JnN (significant UX/feature reworks since) https://bit.ly/3B9PkdE https://bit.ly/47NxSrY November 10, 2025 at 12:26AM
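A toy Python sketch of the "query the semantic layer, not tables" idea described above; this is not Trilogy's syntax or engine, and all names are invented. Dashboards reference semantic names, and a binding layer maps them to physical columns, so repointing a binding updates every query without touching it.

```python
# Invented names throughout; illustrates the semantic-layer idea only.
SEMANTIC_BINDINGS = {
    "revenue": ("analytics.orders_v2", "amount_usd"),
    "order_date": ("analytics.orders_v2", "created_at"),
}

def compile_query(metric: str, by: str) -> str:
    """Compile a semantic (metric, dimension) pair into plain SQL."""
    table, metric_col = SEMANTIC_BINDINGS[metric]
    _, dim_col = SEMANTIC_BINDINGS[by]
    return (f"SELECT {dim_col}, SUM({metric_col}) AS {metric} "
            f"FROM {table} GROUP BY {dim_col};")

print(compile_query("revenue", by="order_date"))
# Repointing SEMANTIC_BINDINGS at a new source table updates every report
# at once, without editing any dashboard or query.
```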

Show HN: Alignmenter – Measure brand voice and consistency across model versions https://bit.ly/4hMijoM

Show HN: Alignmenter – Measure brand voice and consistency across model versions I built a framework for measuring persona alignment in conversational AI systems.
*Problem:* When you ship an AI copilot, you need it to maintain a consistent brand voice across model versions. But "sounds right" is subjective. How do you make it measurable?
*Approach:* Alignmenter scores three dimensions:
1. *Authenticity*: style similarity (embeddings) + trait patterns (logistic regression) + lexicon compliance + optional LLM judge
2. *Safety*: keyword rules + offline classifier (distilroberta) + optional LLM judge
3. *Stability*: cosine variance across response distributions
The interesting part is calibration: you can train persona-specific models on labeled data. Grid search over component weights, estimate normalization bounds, and optimize for ROC-AUC.
*Validation:* We published a full case study using Wendy's Twitter voice:
- Dataset: 235 turns, 64 on-brand / 72 off-brand (balanced)
- Baseline (uncalibrated): 0.733 ROC-AUC
- Calibrated: 1.0 ROC-AUC, 1.0 F1
- Learned: style > traits > lexicon (0.5/0.4/0.1 weights)
Full methodology: https://bit.ly/3XiazS3
There's a full walkthrough so you can reproduce the results yourself.
*Practical use:*
pip install alignmenter[safety]
alignmenter run --model openai:gpt-4o --dataset my_data.jsonl
It's Apache 2.0, works offline, and designed for CI/CD integration.
GitHub: https://bit.ly/47vmphM
Interested in feedback on the calibration methodology and whether this problem resonates with others. https://bit.ly/4nQMSeC November 10, 2025 at 12:53AM
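The calibration step described above (grid search over component weights, optimized for ROC-AUC) can be sketched in a few lines of Python; this uses synthetic stand-in scores and is not Alignmenter's actual code.

```python
# Illustration of calibration-by-grid-search over convex component weights.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
style, traits, lexicon = rng.random((3, 200))  # per-turn scores in [0, 1]
# Synthetic "ground truth": labels generated from a hidden weighting.
labels = (0.5 * style + 0.4 * traits + 0.1 * lexicon > 0.5).astype(int)

best_auc, best_weights = 0.0, None
step = 0.05
for w_style in np.arange(0.0, 1.0 + step, step):
    for w_traits in np.arange(0.0, 1.0 - w_style + step, step):
        w_lex = max(1.0 - w_style - w_traits, 0.0)  # weights sum to 1
        scores = w_style * style + w_traits * traits + w_lex * lexicon
        auc = roc_auc_score(labels, scores)
        if auc > best_auc:
            best_auc = auc
            best_weights = (round(w_style, 2), round(w_traits, 2),
                            round(w_lex, 2))

print("best ROC-AUC:", best_auc)
print("weights (style, traits, lexicon):", best_weights)
```

On this synthetic data the search recovers the hidden weighting, mirroring the post's style > traits > lexicon finding in spirit only.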