Sunday, 30 November 2025
Show HN: Tracktrip, Travel Expense Tracker https://bit.ly/49NYwU5
Show HN: Tracktrip, Travel Expense Tracker Hello HN! I'm a European traveler, and during my last six months of travel I created an app to keep track of my expenses. I made it open-source and started building a website and documentation so other people can use it! It's a fairly simple PWA that you can install on mobile; it helps you set a budget, keep track of expenses, and analyse what you spend. Thanks for any feedback, and keep traveling! https://bit.ly/48bgKOf November 30, 2025 at 10:29PM
Show HN: Real-time system that tracks how news spreads across 200k websites https://bit.ly/4alHJrG
Show HN: Real-time system that tracks how news spreads across 200k websites I built a system that monitors ~200,000 news RSS feeds in near real-time and clusters related articles to show how stories spread across the web. It uses Snowflake’s Arctic model for embeddings and HNSW for fast similarity search. Each “story cluster” shows who published first, how fast it propagated, and how the narrative evolved as more outlets picked it up. Would love feedback on the architecture, scaling approach, and any ways to make the clusters more accurate or useful. Live demo: https://bit.ly/48riula https://bit.ly/48riula November 26, 2025 at 02:27AM
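The clustering step can be illustrated with a toy sketch: greedy single-pass grouping of article embeddings by cosine similarity. The real system uses Snowflake's Arctic model for embeddings and HNSW for fast nearest-neighbour search; the two-dimensional vectors and the 0.8 threshold below are made-up stand-ins, not the author's values.

```python
# Toy sketch of grouping related articles by embedding similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster(embeddings, threshold=0.8):
    """Greedy: join the first cluster whose representative is close enough."""
    clusters = []  # list of (representative_vector, member_indices)
    for i, vec in enumerate(embeddings):
        for rep, members in clusters:
            if cosine(vec, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]

# Two near-duplicate "articles" and one unrelated one.
vecs = [[1.0, 0.0], [0.98, 0.1], [0.0, 1.0]]
print(cluster(vecs))  # → [[0, 1], [2]]
```

A production system would replace the linear scan with an approximate index (the post mentions HNSW) so each new article only compares against its nearest neighbours.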
Show HN: Let Claude Code call other LLMs when it runs in circles https://bit.ly/4ruDH6m
Show HN: Let Claude Code call other LLMs when it runs in circles https://bit.ly/4aikiPZ November 30, 2025 at 10:35AM
Saturday, 29 November 2025
Show HN: ClearHearAI-The Essential App for Hearing Impaired and Deaf Communities https://bit.ly/4ivx0Np
Show HN: ClearHearAI-The Essential App for Hearing Impaired and Deaf Communities I built ClearHearAI to help hearing-impaired and deaf people. It is a transcription app that provides context indicators (e.g. when questions are asked, urgent keywords are detected, or a conversation is happening). All audio processing happens entirely on your device: your voice never leaves your computer. Conversation transcripts are stored locally on the device. Any feedback is welcome. https://bit.ly/3M98WDZ November 29, 2025 at 11:31PM
Show HN: I built Magiclip – an all-in-one AI studio https://bit.ly/48pQWN9
Show HN: I built Magiclip – an all-in-one AI studio Hi HN, I’ve been working on a tool I personally needed as someone who edits a lot of video content. The problem is simple: modern video editing requires 8+ different tools, all slow, all noisy, all repetitive. Subtitles here, audio cleanup there, silence removal elsewhere, upscaling in another tool, AI voice in a different one, a clip extractor somewhere else… So I built Magiclip.io, a single interface that automates the most boring parts of editing. What it does today: auto-subtitles (fast and accurate), silence removal, AI voice-over, audio enhancement, image upscaling, clip extraction from long videos, thumbnail generation, quick TikTok/Reels format conversion, and more coming. The idea isn’t to replace full editors; it’s to remove the friction of things we repeat 100 times. Upload → Magic → Download. No timeline, no project files, no complexity. Why I built it: I edit content frequently, and the workflow felt unnecessarily painful. Magiclip is my attempt to reduce editing from hours to seconds by batching the most common tasks behind simple endpoints. What I’d love feedback on: What other tasks should be automated? Anything in the UX that feels off or slow? Any feature you’d want exposed through an API? Live link: https://bit.ly/4pBy6cP Happy to answer anything about the architecture, the pipelines, or the reasoning behind the features. https://bit.ly/4pBy6cP November 29, 2025 at 01:04PM
Show HN: Explore what the browser exposes about you https://bit.ly/4oq8uib
Show HN: Explore what the browser exposes about you I built a tool that reveals the data your browser exposes automatically every time you visit a website. GitHub: https://bit.ly/4pErydE Demo: https://bit.ly/4p8EAjL Note: No data is sent anywhere. Everything runs in your browser. https://bit.ly/4p8EAjL November 24, 2025 at 07:05PM
Friday, 28 November 2025
Show HN: New VSCode extension: Objectify Params https://bit.ly/3LZdkpd
Show HN: New VSCode extension: Objectify Params Automatically refactor JavaScript or TypeScript functions to use object parameters instead of multiple positional parameters, improving readability and maintainability. https://bit.ly/4isyR5v November 29, 2025 at 05:47AM
Show HN: Mu – The Micro Network https://bit.ly/49HIxHa
Show HN: Mu – The Micro Network https://bit.ly/49LJ6zH November 24, 2025 at 03:06PM
Thursday, 27 November 2025
Show HN: Lissa Saver macOS Screen Saver https://bit.ly/3LWMPke
Show HN: Lissa Saver macOS Screen Saver Lissa Saver is a macOS screensaver hot off the demo scene, with a gravity simulation, Clifford Pickover fractals, and Lissajous animations. Install with Homebrew: brew tap johnrpenner/tap; brew install --cask lissasaver https://bit.ly/44gCt55 November 28, 2025 at 12:45AM
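A Lissajous figure is just the parametric curve x = sin(at + δ), y = sin(bt) sampled over time. A minimal sketch of generating one (frequencies and phase chosen arbitrarily for illustration, not taken from the screensaver's source):

```python
# Sample points on a Lissajous curve: x = sin(a*t + delta), y = sin(b*t).
import math

def lissajous(a=3, b=2, delta=math.pi / 2, steps=8):
    pts = []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        pts.append((math.sin(a * t + delta), math.sin(b * t)))
    return pts

for x, y in lissajous():
    print(f"{x:+.2f} {y:+.2f}")
```

With more steps and a plotting surface this traces the familiar interlocking loops; the a:b ratio controls the number of lobes.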
Show HN: I built a website for games that catch my eye https://bit.ly/3K9B49v
Show HN: I built a website for games that catch my eye I built a website for games that catch my eye or have something interesting going on. I made it for fun but then updating became a habit. Maybe you'll find your next "must-play" game here? GitHub repo: https://bit.ly/488TuR2 https://bit.ly/4ipGmtZ November 28, 2025 at 03:30AM
Show HN: FounderPace – A leaderboard for founders who run https://bit.ly/48gEAqA
Show HN: FounderPace – A leaderboard for founders who run https://bit.ly/4853hr9 November 28, 2025 at 12:48AM
Show HN: I built a free astro and tailwind static site for GitHub pages https://bit.ly/485q3PB
Show HN: I built a free astro and tailwind static site for GitHub pages Using my GitHub Pro+ and VS Code setup, this is a demonstration of how good a site I can build essentially 100% for free, with free hosting (if coded manually, without a $50 subscription). I went completely overboard on purpose; I'm sure it's 99% useless for a real production deployment, but it might be useful for mini blogs. I don't even use the new GitHub Spark or the like; it's too slow compared to 1k+ line edits every couple of minutes. I'm obviously working on a ton of other things I won't make public yet, but will in the future. https://bit.ly/4rGSpaT November 27, 2025 at 11:17PM
Wednesday, 26 November 2025
Show HN: White-Box-Coder – AI that self-reviews and fixes its own code https://bit.ly/3KjSsbt
Show HN: White-Box-Coder – AI that self-reviews and fixes its own code Single-shot architecture: optimized for speed and cost-efficiency, using a single API call to handle the entire generation-review-fix cycle. https://bit.ly/48m265q November 26, 2025 at 10:57PM
Show HN: Ghostty-Web – Ghostty in the Browser https://bit.ly/4rll0C6
Show HN: Ghostty-Web – Ghostty in the Browser https://bit.ly/3K8cLZz November 26, 2025 at 06:36PM
Show HN: Infinite scroll AI logo generator built with Nano Banana https://bit.ly/483OrRL
Show HN: Infinite scroll AI logo generator built with Nano Banana https://bit.ly/3Kid8AE November 26, 2025 at 08:34PM
Show HN: Yolodex – real-time customer enrichment API https://bit.ly/4p4Ydt1
Show HN: Yolodex – real-time customer enrichment API Hey HN, I’ve been working on an API that makes it easy to know who your customers are; I would love your feedback. What it does: send an email address, and the API returns a JSON profile built from public data: name, country, age, occupation, company, social handles, and interests. It’s a single endpoint (you can hit it without auth to get a demo of what it looks like): curl https://bit.ly/49HZ3qI --request POST --header 'Content-Type: application/json' --data '{"email": "john.smith@example.com"}' Everyone gets 100 free; pricing is per enriched profile: one email ~ $0.03, but if I don’t find anything I won’t charge you. Why I built it / what’s different: I once built open-source intelligence tooling to investigate financial crime, but for a recent project I needed to find out more about some customers. I tried Apollo, Clearbit, Lusha, Clay, etc., and found: 1. outdated data (the data was out of date and misleading, emails didn’t work, etc.); 2. dubious data (lots of data, like personal mobile numbers, that I’m pretty sure no one shared publicly or knowingly opted into being sold on); 3. aggressive pricing (monthly/annual commitments, large gaps between plans, paying the same for empty profiles); 4. painful setup (hard to find the right API, set it up, test it out, etc.). I used knowledge from criminal investigations to build an API that uses some of the same research patterns and entity resolution to find standardized information about people that is: 1. real-time; 2. public info only (OSINT); 3. transparently and simply priced; 4. one minute to set up. What I’d love feedback on: speed (are responses fast enough? would you trade off speed for better data coverage?); coverage (which fields will you use, and what others do you need?); pricing (is the pricing model sane?); use cases (what do you need this type of data for?); accuracy (any examples where I got it badly wrong?). Happy to answer technical questions in the thread and give more free credits to help anyone test. https://bit.ly/3LVvGYd November 24, 2025 at 03:02PM
Tuesday, 25 November 2025
Show HN: Parm – Install GitHub releases just like your favorite package manager https://bit.ly/44wKTVI
Show HN: Parm – Install GitHub releases just like your favorite package manager Hi all, I built a CLI tool that lets you seamlessly install software from GitHub release assets, similar to how your system's package manager installs software. It works by exploiting common patterns among GitHub releases across different open-source projects, such as naming conventions and file layouts, to identify the proper release asset for your system and download it onto your machine via the GitHub API. Parm then extracts the files, finds the proper binaries, and adds them to your PATH. Parm can also check for updates and uninstall software, and otherwise manages the entire lifecycle of all software it installs. Parm is not meant to replace your system's package manager; it is instead an alternative way to install prebuilt software from GitHub in a more centralized and simpler fashion. It's currently in a pre-release stage, and there are a lot of features I want to add. I'm currently working (very slowly) on some new features, so if this sounds interesting to you, check it out! It's completely free and open-source and is currently released for Linux/macOS. I would appreciate any feedback. Link: https://bit.ly/44y0Ue5 https://bit.ly/44y0Ue5 November 26, 2025 at 01:14AM
Show HN: KiDoom – Running DOOM on PCB Traces https://bit.ly/483aQ1B
Show HN: KiDoom – Running DOOM on PCB Traces I got DOOM running in KiCad by rendering it with PCB traces and footprints instead of pixels. Walls are rendered as PCB_TRACK traces, and entities (enemies, items, player) are actual component footprints - SOT-23 for small items, SOIC-8 for decorations, QFP-64 for enemies and the player. How I did it: Started by patching DOOM's source code to extract vector data directly from the engine. Instead of trying to render 64,000 pixels (which would be impossibly slow), I grab the geometry DOOM already calculates internally - the drawsegs[] array for walls and vissprites[] for entities. Added a field to the vissprite_t structure to capture entity types (MT_SHOTGUY, MT_PLAYER, etc.) during R_ProjectSprite(). This lets me map 150+ entity types to appropriate footprint categories. The DOOM engine sends this vector data over a Unix socket to a Python plugin running in KiCad. The plugin pre-allocates pools of traces and footprints at startup, then just updates their positions each frame instead of creating/destroying objects. Calls pcbnew.Refresh() to update the display. Runs at 10-25 FPS depending on hardware. The bottleneck is KiCad's refresh, not DOOM or the data transfer. Also renders to an SDL window (for actual gameplay) and a Python wireframe window (for debugging), so you get three views running simultaneously. Follow-up: ScopeDoom After getting the wireframe renderer working, I wanted to push it somewhere more physical. Oscilloscopes in X-Y mode are vector displays - feed X coordinates to one channel, Y to the other. I didn't have a function generator, so I used my MacBook's headphone jack instead. The sound card is just a dual-channel DAC at 44.1kHz. Wired 3.5mm jack → 1kΩ resistors → scope CH1 (X) and CH2 (Y). Reused the same vector extraction from KiDoom, but the Python script converts coordinates to ±1V range and streams them as audio samples. Each wall becomes a wireframe box, the scope traces along each line. 
With ~7,000 points per frame at 44.1kHz, refresh rate is about 6 Hz - slow enough to be a slideshow, but level geometry is clearly recognizable. A 96kHz audio interface or analog scope would improve it significantly (digital scopes do sample-and-hold instead of continuous beam tracing). Links: KiDoom GitHub: https://bit.ly/4pRRvqj , writeup: https://bit.ly/4im0fC8 ScopeDoom GitHub: https://bit.ly/44wKQt0 , writeup: https://bit.ly/4ipKla4 https://bit.ly/4im0fC8 November 25, 2025 at 11:13PM
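The ScopeDoom trick (driving an X-Y oscilloscope from a sound card) boils down to scaling each frame's (x, y) points into the ±1 range and writing them as interleaved stereo samples, X on the left channel and Y on the right. A hedged sketch of that conversion; the function and parameter names are invented, not the author's code:

```python
# Convert a frame of 2D points into scope-ready stereo sample pairs.
def frame_to_samples(points, points_per_vertex=4):
    """Scale points into [-1, 1] and emit (left=x, right=y) sample pairs."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    samples = []
    for x, y in points:
        # Linger on each vertex so the scope beam has time to trace it.
        for _ in range(points_per_vertex):
            samples.append(((x - min(xs)) / span * 2 - 1,
                            (y - min(ys)) / span * 2 - 1))
    return samples

# A unit-square "wall" outline. At 44.1 kHz, ~7,000 points per frame
# gives the post's ~6 Hz refresh (44100 / 7000 ≈ 6.3).
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
samples = frame_to_samples(square)
print(len(samples))  # → 20
```

In the real pipeline these pairs would be streamed to the audio device continuously, one frame of geometry after another.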
Show HN: Rs-Utcp, a Rust Implementation of the Universal Tool Calling Protocol https://bit.ly/4iwrk5R
Show HN: Rs-Utcp, a Rust Implementation of the Universal Tool Calling Protocol I’ve been working on a Rust implementation of UTCP, a vendor-neutral protocol for LLM tool calling. The goal is to avoid every model/vendor defining its own schema and instead make tool ↔ model interoperability predictable and boring. What works: - Full UTCP message parse/serialize - Strongly typed request/response model - Transport-agnostic (stdin/stdout, HTTP, WS, anything) - Minimal dependencies, straightforward API Still to do: - Validation helpers - Higher-level client/server wrappers - More real-world examples Repo: https://bit.ly/3Xf7ORE Feedback + contributions welcome! https://bit.ly/3Xf7ORE November 25, 2025 at 10:35PM
Show HN: I Figured It Out https://bit.ly/4pz4Ovv
Show HN: I Figured It Out https://bit.ly/4prHi3q November 26, 2025 at 12:56AM
Monday, 24 November 2025
Show HN: Hypercamera – a browser-based 4D camera simulator https://bit.ly/3M2rsxJ
Show HN: Hypercamera – a browser-based 4D camera simulator https://bit.ly/4rhsWEq November 19, 2025 at 02:54PM
Show HN: My first published app – track contraception ring cycle https://bit.ly/4p54AfK
Show HN: My first published app – track contraception ring cycle My wife said she wished there was a big widget on her phone that told her when to take her Nuvaring out. So I vibe coded one. What other problems can it solve? https://apple.co/4p0tflL November 25, 2025 at 12:43AM
Sunday, 23 November 2025
Show HN: SitStand – Control your standing desk from the command line https://bit.ly/3K6Ot28
Show HN: SitStand – Control your standing desk from the command line https://bit.ly/4iqLRsE November 24, 2025 at 05:17AM
Show HN: Wanted to Give Dang Appreciation https://bit.ly/49UiRaz
Show HN: Wanted to Give Dang Appreciation Reddit has drifted over time but HN has remained a source of high signal to noise. Just wanted to thank dang and the moderation team for making this community what it is. November 24, 2025 at 12:30AM
Show HN: I wrote a minimal memory allocator in C https://bit.ly/4p12eP4
Show HN: I wrote a minimal memory allocator in C A fun toy memory allocator (not thread-safe; that's a future TODO). I also wanted to explain how I approached it, so I wrote a tutorial blog post (~20 minute read) covering the code; you can find the link in the README. https://bit.ly/4o9Gr6B November 23, 2025 at 11:25PM
Saturday, 22 November 2025
Show HN: Dank-AI – Ship production AI agents 10x faster https://bit.ly/3LTNmTX
Show HN: Dank-AI – Ship production AI agents 10x faster https://bit.ly/48clhyr November 23, 2025 at 06:54AM
Show HN: Eidos – AI IDE that generates and edits game prototypes instantly https://bit.ly/4p26BJC
Show HN: Eidos – AI IDE that generates and edits game prototypes instantly I built EIDOS because I wanted a faster and more flexible way to prototype game ideas without repeatedly writing the same boilerplate code. EIDOS is an AI-powered IDE that can: • Generate gameplay code from natural language descriptions • Edit existing code through an integrated AI assistant • Open the correct editor automatically (text editor, code editor, or video editor) depending on the file • Run prototypes instantly for quick iteration As a solo developer, I spent years building my own tools to speed up experimentation. This project is the result of trying to remove the setup time, folder structures, and repetitive coding that often slow down early-stage game development. I’d appreciate any feedback on usability, performance, or what features would be most helpful for indie developers or small teams. https://bit.ly/4ij0tdl November 23, 2025 at 02:15AM
Show HN: HN Buffer – A read-it-later site for your HN favorites https://bit.ly/49AHV65
Show HN: HN Buffer – A read-it-later site for your HN favorites Hello! I’ve been reading Hacker News for years and have a bad habit of favoriting articles but never actually going back to read them. I finally built hnbuffer to scratch my own itch and help me work through that backlog. I also mainly built this as an excuse to learn/use some of the languages/frameworks/tools since I have another backlog of things I wanted to try too. It syncs your favorites into a queue with a reader mode and swipe gestures. The streak tracking is something I'm still experimenting with. Note: It requires an HN login to scrape the favorites list (credentials are client-side only, not stored). Let me know what you think! https://bit.ly/48d3jfc November 22, 2025 at 10:00PM
Show HN: Host a Website from an Old Phone https://bit.ly/49BB9gC
Show HN: Host a Website from an Old Phone https://bit.ly/4ppH5Ob November 22, 2025 at 07:16PM
Show HN: I made an app to keep track of your sailboat maintenance https://bit.ly/4oZeWOi
Show HN: I made an app to keep track of your sailboat maintenance https://bit.ly/43HUUiO November 22, 2025 at 12:02PM
Friday, 21 November 2025
Show HN: Skedular, a Smart Booking and Workspace Management Platform https://bit.ly/48aNWUR
Show HN: Skedular, a Smart Booking and Workspace Management Platform Hi HN, I have been working on Skedular, a platform that helps organizations, councils, co-working spaces, and local businesses manage bookings, shared spaces, and multi-location operations in a simple, modern way. What Skedular does: manage rooms, desks, studios, sports facilities, meeting spaces, and any kind of bookable asset; handle multi-location, multi-team scenarios; provide public booking pages for venues; offer a clean dashboard for operators to manage availability, payments, customers, and schedules; API-first design for easy integration with existing systems; built with modern tooling, including Next.js, a .NET backend, PostGIS, and Kafka events. Why I built it: most booking platforms are either too simple or too enterprise-heavy. Skedular is meant to sit in the middle: powerful enough for councils or large organisations, but simple enough for a local venue owner to use without training. I am currently onboarding early users and would love feedback from this community, especially around UX, data modelling, and scaling patterns. Links: public website https://bit.ly/4gj86Oi ; app website https://bit.ly/3EbLSAC . Looking for feedback: I would appreciate thoughts on the overall concept, any edge cases I might be missing, suggestions for UI and UX improvements, and pain points you have experienced in managing bookings or shared resources. Thanks for taking a look. Morteza https://bit.ly/3EbLSAC November 22, 2025 at 06:04AM
Show HN: I made a Rust Terminal UI for OpenSnitch, a Linux application firewall https://bit.ly/4po23gn
Show HN: I made a Rust Terminal UI for OpenSnitch, a Linux application firewall I made a Terminal UI for OpenSnitch[1], an interactive application firewall for Linux inspired by Little Snitch. I’ve always wanted to create a TUI and found the perfect excuse to make this for usage on one of my headless servers. I wrote this in Rust to force myself to learn more, viz. async features. Super open to feedback and contributions! [1] https://bit.ly/4ppWONg https://bit.ly/3XG76No November 22, 2025 at 12:48AM
Show HN: Get Fat Slowly https://bit.ly/44bQLUk
Show HN: Get Fat Slowly I've been enjoying building calculators with ChatGPT to help me model various life decisions. When my friend shared that he drinks 1-2 Starbucks Mochas per day, it made me wonder how that impacts his health over the course of a year (or several years). Drinking 2 mochas per day adds 45.9 lb (20.8 kg) of body fat per year that your body either needs to burn or store. https://bit.ly/49yjUN0 November 21, 2025 at 01:18PM
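A quick back-of-the-envelope check of the post's 45.9 lb figure, assuming the common (approximate) rule of thumb of ~3,500 kcal per pound of body fat; working backwards, the number implies roughly 220 kcal per mocha:

```python
# Back-solve the calorie assumptions behind "2 mochas/day = 45.9 lb/year".
KCAL_PER_LB_FAT = 3500  # common approximation, not an exact constant
lbs_per_year = 45.9

kcal_per_day = lbs_per_year * KCAL_PER_LB_FAT / 365
kcal_per_mocha = kcal_per_day / 2
print(round(kcal_per_day), round(kcal_per_mocha))  # → 440 220
```

So the calculator is treating each mocha as about a 220 kcal surplus, on the low end of published Starbucks mocha calorie counts depending on size and toppings.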
Show HN: 32V TENS device from built from scratch under $100 https://bit.ly/3JNoJI1
Show HN: 32V TENS device from built from scratch under $100 https://bit.ly/48auiYS November 17, 2025 at 04:06PM
Thursday, 20 November 2025
Show HN: UsageFlow – API usage metering, rate-limits and usage reporting https://bit.ly/4oa6Dhl
Show HN: UsageFlow – API usage metering, rate-limits and usage reporting I’m launching UsageFlow, a simple tool for API owners who want automatic API usage metering and full control over their endpoints, all without any hassle. With just a few lines of code, the UsageFlow SDK gives you: automatic discovery of API endpoints, user identification, usage metering, rate limits and automatic blocking, and reporting of usage events to your existing billing or metering system. Supports Go (Gin), Python (FastAPI, Flask), and Node.js (Express, Fastify, NestJS). It's aimed at AI APIs or SaaS platforms that want to scale fast: focus on building your product while UsageFlow handles usage tracking automatically. No developer skills are required to update usage rules, apply limits, or report usage; everything works with a few clicks, and your entire usage platform is in your hands instantly. I’m opening this up for first testers. If you run an API and want to try UsageFlow, comment below or DM me and I will create your account and get you started in minutes. Learn more at: https://bit.ly/48ecKLG November 20, 2025 at 11:49PM
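UsageFlow's actual SDK API isn't shown in the post, so the following is only a generic sketch of the two primitives it describes, per-user usage metering and rate limiting (here a simple token bucket); every name is illustrative:

```python
# Generic metering + token-bucket rate limiting, NOT the UsageFlow SDK.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

usage = {}  # user_id -> request count (the "metering" side)
bucket = TokenBucket(rate=5, capacity=10)  # one global bucket for brevity

def handle(user_id):
    if not bucket.allow():
        return 429  # rate-limited
    usage[user_id] = usage.get(user_id, 0) + 1
    return 200

codes = [handle("alice") for _ in range(12)]
print(codes.count(200), codes.count(429))  # → 10 2
```

A real middleware would keep one bucket per user or API key and emit the `usage` counters to a billing backend.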
Wednesday, 19 November 2025
Show HN: An A2A-compatible, open-source framework for multi-agent networks https://bit.ly/47OuChc
Show HN: An A2A-compatible, open-source framework for multi-agent networks https://bit.ly/4idCmgf November 20, 2025 at 06:52AM
Show HN: F32 – An Extremely Small ESP32 Board https://bit.ly/3LMHo7k
Show HN: F32 – An Extremely Small ESP32 Board As part of a little research project, and also for fun, I decided to try my hand at seeing how small an ESP32 board I can make with functioning WiFi. https://bit.ly/3JXj6XI November 19, 2025 at 09:09PM
Tuesday, 18 November 2025
Show HN: Lumical – scan any meeting invite into your calendar in seconds https://bit.ly/4i9ni3g
Show HN: Lumical – scan any meeting invite into your calendar in seconds I built an iOS app that lets you point your phone at a paper invite or screenshot, review the parsed details, and drop the event straight into your calendar, so you can capture meetings in seconds instead of typing. https://bit.ly/4a1qV9f November 19, 2025 at 07:55AM
Show HN: Kk – A tiny Bash CLI that makes kubectl faster https://bit.ly/4o2QLNC
Show HN: Kk – A tiny Bash CLI that makes kubectl faster I built "kk", a small Bash wrapper around kubectl that makes common Kubernetes workflows faster. It's not a plugin or a compiled binary. It's just a single script you can drop into ~/bin. The goal is to reduce repetitive kubectl patterns without replacing kubectl itself. Some things it helps with: - pod selection by substring (auto-fzf if available) - multi-pod logs with prefixing and grep support - quick exec into pods - checking the actual images running in pods - restarting deployments with pattern matching - port-forwarding with pod auto-selection - quick describe/top/events - context switching shortcuts Examples: kk pods api kk sh api kk logs api -f -g ERROR kk images api kk restart api kk pf api 8080:80 kk desc api kk top api kk events kk ctx kk deploys Installation: curl -o ~/bin/kk https://bit.ly/4o1TaI8 chmod +x ~/bin/kk Repo: https://bit.ly/48ims0F Happy to hear feedback, suggestions, or ideas for small helpers to improve the kubectl experience. https://bit.ly/48ims0F November 19, 2025 at 07:22AM
Show HN: Startup Simulator https://bit.ly/4pD2Pqb
Show HN: Startup Simulator Vibe coded this startup simulator. It's not much, but I just wanted to know the potential. Also, I used Google's Antigravity IDE for this. Feel free to leave comments on here or at https://bit.ly/4pD2PXd https://bit.ly/4oneDvH November 19, 2025 at 04:43AM
Show HN: Browser-based interactive 3D Three-Body problem simulator https://bit.ly/4o3MeKL
Show HN: Browser-based interactive 3D Three-Body problem simulator Features include: - Several preset periodic orbits: the classic Figure-8, plus newly discovered 3D solutions from Li and Liao's recent database of 10,000+ orbits (https://bit.ly/3X4Af4K) - Full 3D camera controls (rotate/pan/zoom) with body-following mode - Force and velocity vector visualization - Timeline scrubbing to explore the full orbital period The 3D presets are particularly interesting. Try "O₂(1.2)" or "Piano O₆(0.6)" from the Load Presets menu to see configurations where bodies weave in and out of the orbital plane. Most browser simulators I've seen have been 2D. Built with Three.js. Open to suggestions for additional presets or features! https://bit.ly/48mDOJJ November 18, 2025 at 04:00PM
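For readers curious what such a simulator computes under the hood, here is a minimal planar three-body integrator (velocity Verlet, G = m = 1) seeded with the classic Chenciner–Montgomery figure-8 initial conditions that most simulators, including this one, ship as a preset. This is a generic sketch, not the site's code:

```python
# Minimal planar three-body integrator with the figure-8 preset.
def accelerations(pos):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += dx / r3  # G = m = 1
            acc[i][1] += dy / r3
    return acc

def step(pos, vel, dt):
    a0 = accelerations(pos)
    for i in range(3):
        pos[i][0] += vel[i][0] * dt + 0.5 * a0[i][0] * dt * dt
        pos[i][1] += vel[i][1] * dt + 0.5 * a0[i][1] * dt * dt
    a1 = accelerations(pos)
    for i in range(3):
        vel[i][0] += 0.5 * (a0[i][0] + a1[i][0]) * dt
        vel[i][1] += 0.5 * (a0[i][1] + a1[i][1]) * dt

# Figure-8 initial conditions (Chenciner & Montgomery, 2000).
pos = [[-0.97000436, 0.24308753], [0.97000436, -0.24308753], [0.0, 0.0]]
v3 = [-0.93240737, -0.86473146]
vel = [[-v3[0] / 2, -v3[1] / 2], [-v3[0] / 2, -v3[1] / 2], [v3[0], v3[1]]]

for _ in range(1000):
    step(pos, vel, dt=0.001)
# After t = 1.0 the bodies are still bounded near the origin.
print(all(abs(c) < 2 for p in pos for c in p))  # → True
```

A browser version does essentially this each animation frame, plus the 3D rendering and camera work on top.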
Show HN: Strawk – I implemented Rob Pike's forgotten Awk https://bit.ly/4pnad8Z
Show HN: Strawk – I implemented Rob Pike's forgotten Awk Rob Pike wrote a paper, Structural Regular Expressions ( https://bit.ly/4nYg7fD ), that criticized the Unix toolset for being excessively line-oriented. Tools like awk and grep assume a regular record structure, usually denoted by newlines. Unix pipes just stream the file from one command to another, and imposing the newline structure limits the power of the Unix shell. In the paper, Mr. Pike proposed an awk of the future that used structural regular expressions to parse input instead of line-by-line processing. As far as I know, it was never implemented. So I implemented it. I attempted to imitate AWK and its standard library as much as possible, but some things are different because I used Go under the hood. Live Demo: https://bit.ly/44eS0ly Github: https://bit.ly/4rbywbC November 18, 2025 at 02:55PM
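The core idea of structural regular expressions is that a pattern, not the newline, defines the unit of text a command operates on. A rough Python analogue of Pike's x/pattern/ extraction (Strawk itself is written in Go; this is just a concept demo):

```python
# Structural extraction: the record is "whatever matches the pattern",
# not "a line". Here the unit of work is a whole function body.
import re

text = "func add() {\n  return 1\n}\nfunc sub() {\n  return 2\n}\n"

# A line-oriented tool sees six lines; a structural pattern sees two functions.
funcs = re.findall(r"func \w+\(\) \{.*?\n\}", text, flags=re.S)
for f in funcs:
    print(f.split("(")[0])  # act on each structural match
# → func add
#   func sub
```

In Pike's notation this would be something like `x/func .*?\n}/ p`, with further structural commands nesting inside each extracted region.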
Monday, 17 November 2025
Show HN: Discussion of ICT Model – Linking Information, Consciousness and Time https://bit.ly/3XxEEx5
Show HN: Discussion of ICT Model – Linking Information, Consciousness and Time Hi HN, I’ve been working on a conceptual framework that tries to formalize the relationship between: – informational states, – their minimal temporal stability (I_fixed), – the rate of informational change (dI/dT), – and the emergence of time, processes, and consciousness-like dynamics. This is not a final theory, and it’s not metaphysics. It’s an attempt to define a minimal, falsifiable vocabulary for describing how stable patterns persist and evolve in time. Core ideas: – I_fixed = any pattern that remains sufficiently stable across time to allow interaction/measurement. – dI/dT = the rate at which such patterns change. Time is defined as a relational metric of informational change (dI/dT), but the arrow of time does not arise from within the system — it emerges from an external temporal level, a basic temporal background. The model stays strictly physicalist: it doesn’t require spatial localization of information and doesn’t assume any “Platonic realm.” It simply reformulates what it means for a process to persist long enough to be part of reality. Why I’m posting here I’m looking for rigorous critique from physicists, computer scientists, mathematicians, and anyone interested in foundational models. If you see flaws, ambiguities, or missing connections — I’d really appreciate honest feedback. A full preprint (with equations, phenomenology, and testable criteria) and discussion is here: https://bit.ly/49pemED DOI: 10.5281/zenodo.17584782 Thanks in advance to anyone willing to take a look. https://bit.ly/49pemED November 18, 2025 at 03:25AM
Show HN: Agfs – Aggregated File System, a modern tribute to the spirit of Plan9 https://bit.ly/4oIWFEx
Show HN: Agfs – Aggregated File System, a modern tribute to the spirit of Plan9 https://bit.ly/3LLxW3Z November 18, 2025 at 01:05AM
Show HN: Parqeye – A CLI tool to visualize and inspect Parquet files https://bit.ly/4pfa5rM
Show HN: Parqeye – A CLI tool to visualize and inspect Parquet files I built a Rust-based CLI/terminal UI for inspecting Parquet files—data, metadata, and row-group-level structure—right from the terminal. If someone sent me a Parquet file, I used to open DuckDB or Polars just to see what was inside. Now I can do it with one command. Repo: https://bit.ly/4p8t3Ag https://bit.ly/4p8t3Ag November 18, 2025 at 12:45AM
Sunday, 16 November 2025
Show HN: Hirelens – AI Resume Analyzer for ESL and Global Job Seekers https://bit.ly/49nUOAl
Show HN: Hirelens – AI Resume Analyzer for ESL and Global Job Seekers I built Hirelens ( https://bit.ly/3LNcJGQ ) after seeing many ESL and international job seekers struggle with resumes that don’t match job descriptions or parse cleanly in ATS systems, even when they have strong experience. What it does: Extracts skills/experience from a resume Compares it to a target job description Flags unclear or “non-native” phrasing Suggests clearer rewrites Identifies ATS parsing issues Deletes files after processing (no storage) Tech: Next.js + FastAPI, lightweight CV parsing → embeddings → scoring logic, LLM-based suggestions, no data retention. I’d love feedback on: parsing edge cases rewriting clarity what features matter most for job seekers or hiring managers Try it here: https://bit.ly/3LNcJGQ https://bit.ly/47LgYLO November 17, 2025 at 12:37AM
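The post describes a parsing → embeddings → scoring pipeline. As a deliberately crude stand-in for the scoring step, here is the simplest possible scorer, keyword overlap between résumé and job description (illustrative only, not Hirelens's method):

```python
# Toy résumé-vs-job-description scorer: fraction of JD terms covered.
import re

def keywords(text):
    stop = {"and", "with", "the", "of", "in", "a", "for"}
    return {w for w in re.findall(r"[a-z+#]+", text.lower()) if w not in stop}

def match_score(resume, job):
    r, j = keywords(resume), keywords(job)
    return len(r & j) / len(j) if j else 0.0

resume = "Built REST APIs in Python and Go; deployed with Docker."
job = "Python developer with Docker experience"
print(round(match_score(resume, job), 2))  # → 0.5
```

An embedding-based scorer replaces the set intersection with vector similarity, which also credits paraphrases ("containerized" vs "Docker") that literal overlap misses.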
Show HN: CUDA, Shmuda: Fold Proteins on a MacBook https://bit.ly/4oUV0Mc
Show HN: CUDA, Shmuda: Fold Proteins on a MacBook Alphafold3 used to be fodder for HPC clusters; now I've got a port running smoothly on Apple Silicon. If you have an M-series Mac (~2023-present), you can generate protein structures from sequences in minutes. Give it a try! GitHub repo: https://bit.ly/4r57KRU https://bit.ly/3LJvxqv November 17, 2025 at 01:08AM
Show HN: My side project – a free email template builder for CRM or any website https://bit.ly/4rb8dSX
Show HN: My side project – a free email template builder for CRM or any website Hi everyone, I built an embeddable email template builder plugin for CRMs, marketplaces, or any website. Free and paid plans are included. Add a complete email builder to any SaaS app using a single script. What's included: easy integration, AI content and template generation, external image libraries, merge tags, display conditions, custom blocks, your choice of storage server, and dedicated support during integration. Check it out, and please let me know if you have any feedback. TIA https://bit.ly/48gowpP November 16, 2025 at 11:26PM
Saturday, 15 November 2025
Show HN: SelenAI – Terminal AI pair-programmer with sandboxed Lua tools https://bit.ly/49Fxq1B
Show HN: SelenAI – Terminal AI pair-programmer with sandboxed Lua tools I’ve been building a terminal-first AI pair-programmer that tries to make every tool call transparent and auditable. It’s a Rust app with a Ratatui UI split into three panes (chat, tool activity, input). The agent loop streams LLM output, queues write-capable Lua scripts for manual approval, and records every run as JSONL logs under .selenai/logs. Key bits: Single tool, real guardrails – the LLM only gets a sandboxed Lua VM with explicit helpers (rust.read_file, rust.list_dir, rust.http_request, gated rust.write_file, etc.). Writes stay disabled unless you opt in and then approve each script via /tool run. Transparent workflow – the chat pane shows the conversation, tool pane shows every invocation + result, and streaming keeps everything responsive. CTRL shortcuts for scrolling, clearing logs, copy mode, etc., so it feels like a normal TUI app. Pluggable LLMs – there’s a stub client for offline hacking and an OpenAI streaming client behind a trait. Adding more providers should just be another module under src/llm/. Session history – every exit writes a timestamped log directory with full transcript, tool log, and metadata about whether Lua writes were allowed. Makes demoing, debugging, and sharing repros way easier. Lua ergonomics – plain io.* APIs and a tiny require("rust") module, so the model can write idiomatic scripts without shelling out. There’s even a /lua command if you want to run a snippet manually. Repo (MIT): https://bit.ly/47YwPp7 Would love feedback on: Other providers or local models you’d like to see behind the LLM trait. Additional sandbox helpers that feel safe but unlock useful workflows. Ideas for replaying those saved sessions (web viewer? CLI diff?). If you try it, cargo run, type, and you’ll see the ASCII banner + chat panes. 
Hit me with issues or PRs—there’s a CONTRIBUTING.md in the works and plenty of roadmap items (log viewer, theming, Lua helper packs) if you’re interested. https://bit.ly/47YwPp7 November 16, 2025 at 12:58AM
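The queue-writes-until-approved guardrail described above can be reduced to a small broker. This is an illustrative Python sketch, not SelenAI's actual Rust code; names like ToolBroker and approve_all are hypothetical:

```python
# Hypothetical sketch of a "reads run, writes wait for approval" tool broker.
class ToolBroker:
    def __init__(self, writes_enabled=False):
        self.writes_enabled = writes_enabled
        self.pending = []   # write scripts awaiting /tool run-style approval
        self.log = []       # JSONL-style audit trail

    def request(self, name, fn, writes=False):
        if writes and not self.writes_enabled:
            raise PermissionError(f"{name}: writes are disabled for this session")
        if writes:
            self.pending.append((name, fn))   # queued, not executed
            return None
        result = fn()                          # read-only tools run immediately
        self.log.append({"tool": name, "result": result})
        return result

    def approve_all(self):
        """Operator approved the queued scripts; run and audit them."""
        results = []
        for name, fn in self.pending:
            result = fn()
            self.log.append({"tool": name, "result": result, "approved": True})
            results.append(result)
        self.pending.clear()
        return results

broker = ToolBroker(writes_enabled=True)
broker.request("read_file", lambda: "contents")               # runs immediately
broker.request("write_file", lambda: "wrote!", writes=True)   # queued
approved = broker.approve_all()                               # ["wrote!"]
```

The point of the shape: the model never gets a direct handle to the write, only a request that lands in a queue the operator drains.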
Show HN: High-Performance .NET Bindings for the Vello Sparse Strips CPU Renderer https://bit.ly/49nSoBM
Show HN: High-Performance .NET Bindings for the Vello Sparse Strips CPU Renderer https://bit.ly/49iiiXF November 11, 2025 at 11:09AM
Friday, 14 November 2025
Show HN: Wikidive – AI guided deep diving into Wikipedia https://bit.ly/4p5FZqG
Show HN: Wikidive – AI guided deep diving into Wikipedia https://bit.ly/3LMbLdU November 15, 2025 at 02:17AM
Show HN: OpEx, an agentic LLM toolkit for Elixir https://bit.ly/4oGfGHM
Show HN: OpEx, an agentic LLM toolkit for Elixir https://bit.ly/4p9NE7g November 14, 2025 at 11:43PM
Thursday, 13 November 2025
Show HN: Treasury – The personal finance app built for you (public beta) https://bit.ly/3JCQrai
Show HN: Treasury – The personal finance app built for you (public beta) Hi HN! I'm excited to share Treasury ( https://bit.ly/4i04qDv ), a personal finance app I've been building. We just opened up our public beta and would love your feedback. Currently, Treasury has a set of core building blocks that let you create financial setups as simple or as complex as you want. You can track your net worth, analyze spending, spot recurring transactions, and build budgets that actually match how you think about money. The app is live at https://bit.ly/4i04qDv . Sign up and let me know what you think. https://bit.ly/4i04qDv November 14, 2025 at 04:57AM
Show HN: TranscribeAndSplit – AI that transcribes audio and splits it by meaning https://bit.ly/3K3UWe1
Show HN: TranscribeAndSplit – AI that transcribes audio and splits it by meaning Hi HN, I built a small tool to solve a recurring pain when editing podcasts: scrubbing back and forth just to find where a sentence or idea actually ends. How it works: - Upload an audio file (MP3/WAV/M4A) - AI transcribes the audio and suggests cut points at sentence or paragraph boundaries - Automatically split and export segments, or adjust them manually if needed Website: https://bit.ly/4nYxMDD This came out of my own frustration with editing long recordings and manually hunting for the right cut points. I wanted something that actually understands the content before splitting it. I’d love feedback — especially on edge cases like interviews, lectures, or multi-speaker audio. What features would make this more useful? November 14, 2025 at 05:37AM
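The "suggest cut points at sentence boundaries" step can be sketched simply, assuming the transcription stage yields (word, start_sec, end_sec) tuples. The service's real API and segment format aren't published, so this is purely illustrative:

```python
# Group timed words into segments that end at sentence-final punctuation.
def split_at_sentences(words):
    segments, current = [], []
    for text, start, end in words:
        current.append((text, start, end))
        if text.endswith((".", "!", "?")):
            segments.append({
                "text": " ".join(w for w, _, _ in current),
                "start": current[0][1],
                "end": current[-1][2],
            })
            current = []
    if current:  # trailing words without closing punctuation
        segments.append({
            "text": " ".join(w for w, _, _ in current),
            "start": current[0][1],
            "end": current[-1][2],
        })
    return segments

words = [("Hello", 0.0, 0.4), ("world.", 0.5, 0.9),
         ("Next", 1.2, 1.5), ("idea.", 1.6, 2.0)]
segs = split_at_sentences(words)
# two segments: "Hello world." (0.0-0.9) and "Next idea." (1.2-2.0)
```

The cut point inherits the end timestamp of the last word in the sentence, which is exactly what an editor wants to scrub to.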
Show HN: I'm a CEO Coding with AI – Here's the Air Quality iOS App I Built https://bit.ly/43pj0Pa
Show HN: I'm a CEO Coding with AI – Here's the Air Quality iOS App I Built I’m the CEO of AirGradient, where we build open-source air-quality monitors. Two months ago I decided to build our first native iOS app myself. I’ve been coding on the side for ~15 years, but had never touched Swift or SwiftUI. Still, I went from empty repo to App Store approval in exactly 60 days, working on it only on the side. The app itself is a global PM2.5 map with detail views, charts, and integration with our open-source sensors - straightforward, but fully native with Swift and now live on both iOS and Android (native Kotlin version). The interesting part for me was actually not so much the result, but the process that I settled on. Agentic coding let me work in parallel with the AI: while it generated code, I could switch to CEO work - replying to emails, commenting on tickets, working on proposals, and thinking through strategic planning. The context switching wasn’t always easy, but having the coding agent on one virtual desktop and company work on another made the rhythm surprisingly smooth. It felt less like traditional "coding time" and more like supervising a very fast (junior) developer who never pauses. At times I felt superhuman when the AI got a complex feature implemented correctly on the first shot (and obviously there were a few times when it was extremely frustrating). What helped tremendously was asking the AI to draft a full spec based on our existing web app, feeding it screenshots and Figma mocks. Sometimes these specs were a few pages long for a simple feature, covering the API, data models, localisations, UI mockups, and error handling. It produced consistent SwiftUI code far faster than any normal design-to-dev cycle. I still had to debug, make architectural decisions, and understand the tricky parts, but the heavy lifting moved to the tools. This experience changed my view on a long-standing question: Should CEOs code? 
The historical answer was usually "not really." But with agentic coding, I believe the calculus shifts. Understanding what AI can and can’t do, how engineering workflows will change, and how non-engineers can now contribute directly is becoming strategically important. You only get that understanding by building something end-to-end, and I believe it's important that CEOs experience this themselves (the positives & the frustrations). The bigger shift for me was realizing how this changes the entire software workflow. Designers can hand over mocks that agents turn directly into working components. PMs can produce detailed specs that generate real code instead of just guiding it. Even non-engineering teams can create small internal tools without blocking developers. Engineers don’t disappear—they move upward into architecture, debugging, constraints, and system-level reasoning. But for leadership to make good decisions about this future, it’s not enough to read about it. You have to feel the edges yourself: where the agents excel, where they fall apart, and what parts still demand deep human judgment. So yes, I now think CEOs should code. Not permanently - only a few hours a week. Not to ship production code forever, but to understand the new reality their teams will be working in, and how to support them in this new work environment. I’m sharing this partly to hear how others on HN approach the question of whether CEOs or technical leaders should still code. Has getting hands-on with AI tools changed your perspective on leadership, team structure, or strategy? Happy to answer questions and compare notes. Here are the apps: Apple App Store: https://apple.co/3JYCGma Android Play Store: https://bit.ly/3JTNYIo... (Keep in mind this is version 1, so lots of improvements will come in the coming weeks and months) November 14, 2025 at 01:23AM
Show HN: V0 for Svelte (svelte0), a Svelte UI generator https://bit.ly/3JV4Jmv
Show HN: V0 for Svelte (svelte0), a Svelte UI generator https://bit.ly/3JV4JD1 November 13, 2025 at 11:44PM
Wednesday, 12 November 2025
Show HN: I built a platform where audiences fund debates between public thinkers https://bit.ly/4hXC4Kk
Show HN: I built a platform where audiences fund debates between public thinkers Hey HN, I built Logosive because I want to see certain debates between my favorite thinkers (especially in health/wellness, tech, and public policy), but there's no way for regular people to make these happen. With Logosive, you propose a debate topic and debaters. We then handle outreach, ticket sales, and logistics. After the debate, ticket revenue is split between everyone involved, including the person who proposed the debate, the debaters, and the host. Logosive is built with Django, htmx, and Alpine.js. Claude generates the debate launch pages, including suggesting debaters or debate topics, all from a single prompt (but the debates happen between real debaters). I’m now looking for help launching new debates, so if you have any topics or people you really want to see debate, please submit them at https://bit.ly/4hWPiHh . Thanks! https://bit.ly/4hWPiHh November 12, 2025 at 09:35PM
Show HN: Invisitris, a Tetris-like game where the placed pieces become invisible https://bit.ly/43n0gQk
Show HN: Invisitris, a Tetris-like game where the placed pieces become invisible Hi Hackernews, I built a little Tetris-like game called Invisitris, where all but the last placed piece become invisible. The board becomes fully visible for a few seconds when you clear rows. It also becomes visible when your stack grows dangerously high. Try it and let me know what you think :) https://bit.ly/4nQG2FG November 13, 2025 at 12:39AM
Show HN: KV Marketplace – share LLM attention caches across GPUs like memcached https://bit.ly/4i0oANS
Show HN: KV Marketplace – share LLM attention caches across GPUs like memcached https://bit.ly/47NqUVb November 12, 2025 at 10:22PM
Tuesday, 11 November 2025
Show HN: AI is a DJ https://bit.ly/47MWXTK
Show HN: AI is a DJ Set the mood/genre and sit back, let the AI mix music for you! https://bit.ly/49aKVG8 November 12, 2025 at 06:01AM
Show HN: Project AELLA – Open LLMs for structuring 100M research papers https://bit.ly/43qBgHX
Show HN: Project AELLA – Open LLMs for structuring 100M research papers We're releasing Project AELLA - an open-science initiative to make scientific knowledge more accessible through AI-generated structured summaries of research papers. Blog: https://bit.ly/3WMXjEW Visualizer: https://bit.ly/3WQtL9y Models: https://bit.ly/4p77d0d , https://bit.ly/3LDNoPy Highlights: - Released 100K research paper summaries in standardized JSON format with interactive visualization. - Fine-tuned open models (Qwen 3 14B & Nemotron 12B) that match GPT-5/Claude 4.5 performance at 98% lower cost (~$100K vs $5M to process 100M papers) - Built on distributed "idle compute" infrastructure - think SETI@Home for LLM workloads Goal: Process ~100M papers total, then link to OpenAlex metadata and convert to copyright-respecting "Knowledge Units" The models are open, evaluation framework is transparent, and we're making the summaries publicly available. This builds on Project Alexandria's legal/technical foundation for extracting factual knowledge while respecting copyright. Technical deep-dive in the post covers our training pipeline, dual evaluation methods (LLM-as-judge + QA dataset), and economic comparison showing 50x cost reduction vs closed models. Happy to answer questions about the training approach, evaluation methodology, or infrastructure! https://bit.ly/43qWIwj November 11, 2025 at 07:38PM
Show HN: mDNS name resolution for Docker container names https://bit.ly/43t5Xw6
Show HN: mDNS name resolution for Docker container names I always wanted this: an easy way to resolve a Docker container by name -- e.g., to reach web servers running in Docker containers on my dev machine. Of course, I could export ports from all these containers, try to keep them out of each other's hair on the host, and then use http://localhost:PORT. But why go through all that trouble? These containers already expose their respective ports on their own IP (e.g., 172.24.0.5:8123), so all I need is a convenient way to find them. mdns-docker allows you to do, e.g., "ping my-container.docker.local", where it will find the IP of a running container whose name fuzzily matches the host. The way it does this is by running a local mDNS service that listens for `*.docker.local` requests, finds a running container whose name contains the request (here: "my-container"), gets that container's local IP address, and responds to the mDNS query with that IP. Example: start a ClickHouse service `docker run --rm --name myclicky clickhouse:25.7` and then open https://bit.ly/4pb7Tln to open the built-in dashboard -- no port mapping required! If you haven't played with mDNS yet, you've been missing a lot of fun. It's easy to use and the possibilities to make your life easier are endless. It's also what Spotify and Chromecast use for local device discovery. https://bit.ly/3WNyDMD November 11, 2025 at 11:27PM
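The matching rule described above is easy to picture in isolation: take the mDNS query, strip the .docker.local suffix, and pick a running container whose name contains the label. A minimal sketch (container listing via the Docker API is stubbed out as a dict; only the matching logic is shown):

```python
# Match an mDNS query like "my-container.docker.local" against running containers.
def match_container(query, containers):
    """containers: dict of name -> IP. Returns (name, ip) or None."""
    label = query.removesuffix(".docker.local").lower()
    if label in containers:              # exact name wins
        return label, containers[label]
    for name, ip in sorted(containers.items()):
        if label in name.lower():        # otherwise: first fuzzy substring match
            return name, ip
    return None

running = {"myclicky": "172.24.0.5", "web-frontend": "172.24.0.7"}
match_container("myclicky.docker.local", running)    # ("myclicky", "172.24.0.5")
match_container("frontend.docker.local", running)    # ("web-frontend", "172.24.0.7")
```

The real service would then answer the mDNS query with the returned IP; that transport layer is omitted here.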
Monday, 10 November 2025
Show HN: Privacy Experiment – Rewriting HTTPS, TLS, and TCP/IP Packet Headers https://bit.ly/3JDb7Pn
Show HN: Privacy Experiment – Rewriting HTTPS, TLS, and TCP/IP Packet Headers The README: https://bit.ly/3JWsNpd Or the LP: https://404-nf/carrd.co Or read on... In a small enough group of people, your TLS handshake can be enough to identify you as a unique client. Around six months ago, I began learning about client fingerprinting. I had understood that it was getting better and more precise, but did not realize the ease with which a server could fingerprint a user - after all, you're just giving up all the cookies! Fingerprinting, for the modern internet experience, has become almost a necessity. It was concerning to me that servers began using the very features that we rely on for security to identify and fingerprint clients:
- JS - collection of your JS property values
- Font - collection of your downloaded fonts
- JA3/4 - TLS cipher-suite FP
- JA4T - TCP packet header FP (TTL, MSS, window size/scale, TSval/ecr, etc.)
- HTTPS - HTTP header FP (UA, sec-ch, etc.)
- Much more...
So, I built a tool to give me control of my fingerprint at multiple layers:
- A localhost mitmproxy handles HTTPS headers and TLS cipher-suite negotiation
- eBPF + Linux TC rewrites TCP packet headers (TTL, window size, etc.)
- Coordinated spoofing ensures all layers present a consistent, chosen fingerprint (not yet cohesive)
Current status: this is a proof of concept that successfully spoofs JA3/JA4 (TLS), JA4T (TCP), and HTTP fingerprints. It's rough around the edges and requires some Linux knowledge to set up. When there are so many telemetry points collected from a single SYN/ACK interaction, the precision with which a server can identify a unique client becomes concerning. Certain individuals and organizations began to notice this and produced resources to help people better understand the amount of data they're leaving behind on the internet: amiunique.org, browserleaks.com, and coveryourtracks.eff.org to name a few. 
This is the bare bones, but it's a fight against server-side passive surveillance. Tools like nmap and p0f have been exploiting this for the last two decades, and almost no tooling has been developed to fight it - with the viable options (Burp Suite) not being marketed for privacy. Even beyond this, with all values comprehensively and cohesively spoofed, SSO tokens can still follow us around and reveal our identity. When the SDKs of the largest companies like Google are so deeply ingrained into development flows, this is a no-go. So, this project will evolve; I'm looking to add some sort of headless/headful swarm that pollutes your SSO history - legal hurdles be damned. I haven't shared this in a substantial way, and really just finished polishing up a prerelease, barely working version about a week ago. I am not a computer science or cybersecurity engineer, just someone with a passion for privacy who is okay with computers. This is a proof of concept for a larger tool. Due to the nature of TCP/IP packet headers, if this software were to run on a distributed mesh network, privacy could be distributed on a mixnet like they're trying to achieve at Nym Technologies. All of the pieces are there, they just haven't been put together in the right way. I think I can almost see the whole puzzle... November 11, 2025 at 02:27AM
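For readers who haven't met JA3: the fingerprint the post talks about spoofing is just an MD5 over five ClientHello fields (TLS version, cipher list, extension list, elliptic curves, point formats), with values dash-joined and fields comma-joined. A small sketch shows why controlling the order of offered ciphers changes your fingerprint (the numeric values below are illustrative, not from the project):

```python
import hashlib

def ja3(version, ciphers, extensions, curves, point_formats):
    """Build a JA3 string and its MD5 digest from ClientHello field values."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    ja3_string = ",".join(fields)
    return ja3_string, hashlib.md5(ja3_string.encode()).hexdigest()

# 771 is the legacy_version 0x0303 most ClientHellos carry on the wire
s, digest = ja3(771, [4865, 4866], [0, 11, 10], [29, 23], [0])
# s == "771,4865-4866,0-11-10,29-23,0"
```

Reordering or adding a single cipher yields a completely different digest, which is why a spoofing proxy must control TLS negotiation end-to-end rather than patching individual bytes.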
Show HN: Nano Banana 2 – AI image generation SaaS powered by Google Gemini https://bit.ly/4icIIN3
Show HN: Nano Banana 2 – AI image generation SaaS powered by Google Gemini Hi HN! I built Nano Banana 2, a SaaS platform for AI-powered image generation using Google's Gemini 2.5 Flash. You can try it for free. Would love your feedback! https://bit.ly/4hWCfWf November 11, 2025 at 05:42AM
Show HN: A Free Instagram Story Viewer That Lets You Watch Anonymously https://bit.ly/47OebjL
Show HN: A Free Instagram Story Viewer That Lets You Watch Anonymously https://bit.ly/43TWLAU November 11, 2025 at 02:03AM
Sunday, 9 November 2025
Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer https://bit.ly/4oZgwiu
Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer SQL-first analytic IDE; similar to Redash/Metabase. Aims to solve reuse/composability at the code layer with a modified syntax, Trilogy, that includes a semantic layer directly in the SQL-like language. Status: experiment; feedback and contributions welcome! Built to solve 3 problems I have with SQL as my primary iterative analysis language: 1. Adjusting queries/analysis takes a lot of boilerplate. Solve with queries that operate on the semantic layer, not tables. Also eliminates the need for CTEs. 2. Sources of truth change all the time. I hate updating reports to reference new tables. Also solved by the semantic layer, since data bindings can be updated without changing dashboards or queries. 3. Getting from SQL to visuals is too much work in many tools; make it as streamlined as possible. Surprise - solve with the semantic layer; add in more expressive typing to get better defaults; also use it to wire up automatic drilldowns/cross filtering. Supports: BigQuery, DuckDB, Snowflake. Links [1] https://bit.ly/4oIcV8M (language info) Git links: [Frontend] https://bit.ly/4p34dSx [Language] https://bit.ly/4ot4xdf Previously: https://bit.ly/3He2JnN (significant UX/feature reworks since) https://bit.ly/3B9PkdE https://bit.ly/47NxSrY November 10, 2025 at 12:26AM
Show HN: Alignmenter – Measure brand voice and consistency across model versions https://bit.ly/4hMijoM
Show HN: Alignmenter – Measure brand voice and consistency across model versions I built a framework for measuring persona alignment in conversational AI systems. *Problem:* When you ship an AI copilot, you need it to maintain a consistent brand voice across model versions. But "sounds right" is subjective. How do you make it measurable? *Approach:* Alignmenter scores three dimensions: 1. *Authenticity*: Style similarity (embeddings) + trait patterns (logistic regression) + lexicon compliance + optional LLM judge 2. *Safety*: Keyword rules + offline classifier (distilroberta) + optional LLM judge 3. *Stability*: Cosine variance across response distributions The interesting part is calibration: you can train persona-specific models on labeled data. Grid search over component weights, estimate normalization bounds, and optimize for ROC-AUC. *Validation:* We published a full case study using Wendy's Twitter voice: - Dataset: 235 turns, 64 on-brand / 72 off-brand (balanced) - Baseline (uncalibrated): 0.733 ROC-AUC - Calibrated: 1.0 ROC-AUC, 1.0 F1 - Learned: style > traits > lexicon (0.5/0.4/0.1 weights) Full methodology: https://bit.ly/3XiazS3 There's a full walkthrough so you can reproduce the results yourself. *Practical use:*
pip install alignmenter[safety]
alignmenter run --model openai:gpt-4o --dataset my_data.jsonl
It's Apache 2.0, works offline, and designed for CI/CD integration. GitHub: https://bit.ly/47vmphM Interested in feedback on the calibration methodology and whether this problem resonates with others. https://bit.ly/4nQMSeC November 10, 2025 at 12:53AM
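The weighted blend behind the authenticity score can be sketched in a few lines: an embedding-based style similarity (cosine here) combined with trait and lexicon sub-scores using the post's calibrated 0.5/0.4/0.1 weights. This is an illustrative reduction, not Alignmenter's actual code; the sub-scores are stubbed:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def authenticity(style_vec, persona_vec, trait_score, lexicon_score,
                 weights=(0.5, 0.4, 0.1)):
    """Blend style / trait / lexicon components with learned weights."""
    w_style, w_trait, w_lex = weights
    style = cosine(style_vec, persona_vec)
    return w_style * style + w_trait * trait_score + w_lex * lexicon_score

score = authenticity([1.0, 0.0], [1.0, 0.0], trait_score=0.8, lexicon_score=1.0)
# cosine is 1.0 here, so score = 0.5*1.0 + 0.4*0.8 + 0.1*1.0 = 0.92
```

Calibration, as described in the post, is then a grid search over the weights tuple against labeled on-brand/off-brand data.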
Saturday, 8 November 2025
Show HN: Serve 100 Large AI models on a single GPU with low impact to TTFT https://bit.ly/4qNmEME
Show HN: Serve 100 Large AI models on a single GPU with low impact to TTFT I wanted to build an inference provider for proprietary AI models, but I did not have a huge GPU farm. I started experimenting with serverless AI inference, but found that cold starts were huge. I went deep into the research and put together an engine that loads large models from SSD to VRAM up to ten times faster than alternatives. It works with vLLM and transformers, with more coming soon. With this project you can hot-swap entire large models (32B) on demand. It's great for: serverless AI inference, robotics, on-prem deployments, and local agents. And it's open source. Let me know if anyone wants to contribute :) https://bit.ly/4nKufsu November 9, 2025 at 12:48AM
Show HN: DeepShot – NBA game predictor with 70% accuracy using ML and stats https://bit.ly/4nMRgLB
Show HN: DeepShot – NBA game predictor with 70% accuracy using ML and stats I built DeepShot, a machine learning model that predicts NBA games using rolling statistics, historical performance, and recent momentum — all visualized in a clean, interactive web app. Unlike simple averages or betting odds, DeepShot uses Exponentially Weighted Moving Averages (EWMA) to capture recent form and momentum, highlighting the key statistical differences between teams so you can see why the model favors one side. It’s powered by Python, XGBoost, Pandas, Scikit-learn, and NiceGUI, runs locally on any OS, and relies only on free, public data from Basketball Reference. If you’re into sports analytics, machine learning, or just curious whether an algorithm can outsmart Vegas, check it out and let me know what you think: https://bit.ly/4j2dU0S https://bit.ly/4j2dU0S November 9, 2025 at 01:19AM
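The EWMA feature the post leans on is a one-line recurrence: each new observation is blended with the running value so recent games weigh more. A plain-Python sketch (the project uses pandas; `ewm(alpha=..., adjust=False).mean()` implements the same recurrence):

```python
def ewma(values, alpha=0.3):
    """s_t = alpha * x_t + (1 - alpha) * s_{t-1}, seeded with the first value."""
    out = []
    s = values[0]            # seed so the first output equals the first input
    for x in values:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

points = [110, 95, 120, 130]
smoothed = ewma(points)
# smoothed[-1] leans toward the recent 120/130 games rather than the plain mean
```

A higher alpha makes the feature track momentum more aggressively; tuning it is part of the feature-engineering the post describes.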
Show HN: Livestream of a coding agent controlled by public chat https://bit.ly/4hPRQH3
Show HN: Livestream of a coding agent controlled by public chat https://bit.ly/4qQPL1D November 8, 2025 at 06:10PM
Friday, 7 November 2025
Show HN: A DevTools-Level JavaScript API for DOM and CSS Style Rules https://bit.ly/3JObXsq
Show HN: A DevTools-Level JavaScript API for DOM and CSS Style Rules It is a wrapper around the Chrome DevTools Protocol (CDP), the same API that DevTools uses, to inspect elements programmatically and intuitively, like accessing the DOM. Why this? I have seen too many tools pretending they can get matched CSS style rules but actually returning only computed styles. The real DevTools data — CSS rules, selectors, and cascading order — is what we want to retrieve programmatically, yet CDP is hard to use and full of undocumented quirks. One has to observe DevTools' behavior and check the huge DevTools frontend codebase to know how to use it. Having worked on a Chromium fork before, I feel it is time to solve this once and for all. What can we build around this? That's what I'd love to ask you all. Probably like many, MCP was what came to my mind first, but then I wondered that given this simple API, maybe agents could just write scripts directly? Need opinions. My own use case was CSS inlining. This library was actually split from my UI cloner project: https://bit.ly/4osrEEK I was porting a WordPress + Elementor site and wanted to automate the CSS translation from unreadable stylesheets. So, what do you think? Any ideas, suggestions, or projects to build upon? Would love to hear your thoughts — and feel free to share your own projects in the comments! https://bit.ly/47JOS21 November 8, 2025 at 05:13AM
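The distinction the post draws (matched rules vs. computed styles) maps to a specific CDP call, CSS.getMatchedStylesForNode. The sketch below only builds the JSON command payloads a client would send over the DevTools websocket; the transport and the concrete selector/nodeId are illustrative assumptions:

```python
import json
from itertools import count

_ids = count(1)   # CDP requires a unique integer id per command

def cdp(method, **params):
    """Build one Chrome DevTools Protocol command message."""
    return {"id": next(_ids), "method": method, "params": params}

# Typical sequence: enable domains, locate the node, ask for matched rules.
# ".hero" and nodeId=42 are placeholder values; real ids come from replies.
messages = [
    cdp("DOM.enable"),
    cdp("CSS.enable"),
    cdp("DOM.getDocument", depth=1),
    cdp("DOM.querySelector", nodeId=1, selector=".hero"),
    cdp("CSS.getMatchedStylesForNode", nodeId=42),
]
wire = [json.dumps(m) for m in messages]
```

The reply to CSS.getMatchedStylesForNode carries the actual rule list with selectors and origin, which is the data computed-style APIs cannot give you.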
Show HN: Find matching acrylic paints for any HEX color https://bit.ly/47QvAby
Show HN: Find matching acrylic paints for any HEX color https://bit.ly/4hUvuV2 November 3, 2025 at 04:20PM
Show HN: Rankly – The only AEO platform to track AI visibility and conversions https://bit.ly/3LvyNph
Show HN: Rankly – The only AEO platform to track AI visibility and conversions Most GEO/AEO tools stop at AI visibility. Rankly goes further: we track the entire AI-visibility funnel, from mentions to conversions. As brands start showing up in LLM results, the next question isn’t visibility - it’s traffic quality and conversions. Rankly builds dynamic, data-driven journeys for high-intent LLM traffic. https://bit.ly/3LueNU7 November 7, 2025 at 11:49PM
Thursday, 6 November 2025
Show HN: VT Code – Rust TUI coding agent with Tree-sitter and AST-grep https://bit.ly/3WFJbgL
Show HN: VT Code – Rust TUI coding agent with Tree-sitter and AST-grep I’ve been building VT Code, a Rust-based terminal coding agent that combines semantic code intelligence (Tree-sitter + ast-grep) with multi-provider LLMs and a defense-in-depth execution model. It runs in your terminal with a streaming TUI, and also integrates with editors via ACP and a VS Code extension.
* Semantic understanding: parses your code with Tree-sitter and does structural queries with ast-grep.
* Multi-LLM with failover: OpenAI, Anthropic, xAI, DeepSeek, Gemini, Z.AI, Moonshot, OpenRouter, MiniMax, and Ollama for local - swap by env var.
* Security first: tool allowlist + per-arg validation, workspace isolation, optional Anthropic sandbox, HITL approvals, audit trail.
* Editor bridges: Agent Context Protocol support (Zed); VS Code extension (also works in Open VSX-compatible editors like Cursor/Windsurf).
* Configurable: vtcode.toml with tool policies, lifecycle hooks, context budgets, and timeouts.
GitHub: https://bit.ly/4nZmJev https://bit.ly/4nZmJev November 7, 2025 at 02:14AM
Show HN: Auto-Adjust Keyboard and LCD Brightness via Ambient Light Sensor[Linux] https://bit.ly/3Xesw3R
Show HN: Auto-Adjust Keyboard and LCD Brightness via Ambient Light Sensor [Linux] I have always wanted cool features in Linux because I use it day to day as my OS, and this is one I finally implemented properly: a program that automatically adjusts the keyboard and LCD backlights using data from the ambient light sensor. I enjoy low-level programming a lot, so I wrote the program in C. It came out well and works seamlessly on my device. Currently it only works for keyboard lights, but I designed it so that LCD support can come in seamlessly in the future. In the real world, though, people have different kinds of devices, so I made sure to follow the kernel's IIO implementation exposed through sysfs. I would like feedback. :) https://bit.ly/497NMzA November 2, 2025 at 12:03PM
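The core of such a tool is reading a lux value from the IIO sysfs tree and mapping it to a backlight level. A Python sketch of that mapping (the original is in C; the sysfs path varies per device, and the log-shaped curve here is an illustrative choice, not the project's exact one):

```python
import math

def lux_to_level(lux, max_level=255):
    """Map ambient lux (~0 dark to ~10000 direct light) to a backlight level."""
    if lux <= 0:
        return 0
    # log curve: doubling the ambient light adds a roughly fixed brightness step
    frac = min(math.log10(lux + 1) / 4.0, 1.0)   # span ~4 decades of lux
    return round(frac * max_level)

def read_lux(path="/sys/bus/iio/devices/iio:device0/in_illuminance_raw"):
    """Read the raw illuminance value from sysfs (device-specific path)."""
    with open(path) as f:
        return int(f.read().strip())

lux_to_level(0)       # 0: keep the backlight off in darkness
lux_to_level(10000)   # 255: full brightness in direct light
```

A daemon would poll read_lux on an interval (or use the IIO buffer interface) and write the mapped level to the keyboard or LCD backlight node.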
Wednesday, 5 November 2025
Show HN: Data Formulator 0.5 – Vibe with your data (Microsoft Research) https://bit.ly/4qLWYA1
Show HN: Data Formulator 0.5 – Vibe with your data (Microsoft Research) Data Formulator 0.5 is released! It's a new research prototype from the Data Formulator team @ Microsoft Research. It's quite a leap since our first prototype last year -- we bring agent mode to interacting with data, together with an online demo that you can play with ( https://bit.ly/3JwfE6b ). "Vibe with your data, in control" -- featuring agent mode + interactive control to play with data, it should be more fun than when you last tried it!
- Load whatever data - structured data, database connections, or extract from screenshots/messy text
- Flexible AI exploration - full agent mode OR hybrid UI+NL control for precision
- Data threads - branch, backtrack, and manage multiple exploration paths
- Interpretable results - inspect charts, formulas, explanations, and generated code
- Report generation - AI creates shareable insights grounded in your data
* Online demo at: https://bit.ly/3JwfE6b * Github: https://bit.ly/48iS4l3 * new video: https://www.youtube.com/watch?v=GfTE2FLyMrs * take a look at our product hunt page: https://bit.ly/47XR6Mx https://bit.ly/3JwfE6b November 6, 2025 at 08:09AM
Show HN: SSH terminal multiplayer written in Golang https://bit.ly/4ojEMMu
Show HN: SSH terminal multiplayer written in Golang To play, go here: ssh web2u.org -p6996
The rules: the goal is to claim the most space; the secondary goal is to kill as many other Ouroboroses as you can.
How: to claim space, you need to either eat your own tail or reach tiles you've already claimed; tiles that are enclosed when you do so become yours! To kill other snakes, hit their tails.
Watch out: other players can kill you by hitting your exposed tail, and other players can take your tiles. https://bit.ly/4ouicRx November 6, 2025 at 03:57AM
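The "tiles that are enclosed become yours" rule is a classic flood-fill: anything still reachable from the board's edge without crossing your tiles stays neutral, and whatever is left is enclosed. A grid sketch of that idea (not the game's actual Go code; the grid/set representation is an assumption):

```python
def border(grid):
    """Cells on the outer edge of the board."""
    xs = [x for x, _ in grid]
    ys = [y for _, y in grid]
    return {(x, y) for x, y in grid
            if x in (min(xs), max(xs)) or y in (min(ys), max(ys))}

def claim_enclosed(grid, mine):
    """grid: set of all cells; mine: cells I already own. Returns enclosed cells."""
    outside = set()
    stack = [c for c in border(grid) if c not in mine]
    while stack:                       # flood-fill the reachable exterior
        cell = stack.pop()
        if cell in outside or cell in mine:
            continue
        outside.add(cell)
        x, y = cell
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n in grid:
                stack.append(n)
    return grid - mine - outside       # whatever the flood never reached

grid = {(x, y) for x in range(3) for y in range(3)}
mine = grid - {(1, 1)}                 # a ring of my tiles around the center
claimed = claim_enclosed(grid, mine)   # {(1, 1)}
```

Running the fill once per loop-closing move keeps the per-move cost linear in the board size.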
Show HN: Dynamic Code Execution with MCP: A More Efficient Approach https://bit.ly/3JErL11
Show HN: Dynamic Code Execution with MCP: A More Efficient Approach I've been working on a more efficient approach to code execution with MCP servers that eliminates the filesystem overhead described in Anthropic's recent blog post. The Anthropic post (https://bit.ly/4hOsF7N) showed how agents can avoid token bloat by writing code to call MCP tools instead of using direct tool calls. Their approach generates TypeScript files for each tool to enable progressive discovery. It works well but introduces complexity: you need to generate files for every tool, manage complex type schemas, rebuild when tools update, and handle version conflicts. At scale, 1000 MCP tools means maintaining 1000 generated files. I built codex-mcp using pure dynamic execution. Instead of generating files, we expose just two lightweight tools: list_mcp_tools() returns available tool names, and get_mcp_tool_details(name) loads definitions on demand. The agent explores tools as if navigating a filesystem, but nothing actually exists on disk. Code snippets are stored in-memory as strings in the chat session data. When you execute a snippet, we inject a callMCPTool function directly into the execution environment using AsyncFunction constructor. No imports, no filesystem dependencies, just runtime injection. The function calls mcpManager.tools directly, so you're always hitting the live MCP connection. This means tools are perpetually in sync. When a tool's schema changes on the server, you're already calling the updated version. No regeneration, no build step, no version mismatches. The agent gets all the same benefits of the filesystem approach (progressive discovery, context efficiency, complex control flow, privacy preservation) without any of the maintenance overhead. One caveat: the MCP protocol doesn't enforce output schemas, so chaining tool calls requires defensive parsing since the model can't predict output structure. 
This affects all MCP implementations though, not specific to our approach. The dynamic execution is made possible by Vercel AI SDK's MCP support, which provides the runtime infrastructure to call MCP tools directly from code. Project: https://bit.ly/47rUOOz Would love feedback from folks working with MCP at scale. Has anyone else explored similar patterns? https://bit.ly/47rUOOz November 6, 2025 at 02:23AM
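The runtime-injection idea translates directly: compile the agent's snippet and hand it a tool-calling function as a global, instead of generating any files. The project does this in JavaScript with the AsyncFunction constructor; here is a Python analogue where the registry dict and call_mcp_tool name are hypothetical stand-ins for mcpManager.tools:

```python
def run_snippet(source, tools):
    """Execute an agent-written snippet with a live tool-call function injected."""
    def call_mcp_tool(name, **args):
        return tools[name](**args)   # always hits the live registry, never a copy

    env = {"call_mcp_tool": call_mcp_tool}
    exec(compile(source, "<snippet>", "exec"), env)
    return env.get("result")         # convention: snippet stores its output here

live_tools = {"add": lambda a, b: a + b}     # stands in for the MCP connection
snippet = "result = call_mcp_tool('add', a=2, b=3)"
run_snippet(snippet, live_tools)             # 5 -- and no files were generated
```

Because the injected function closes over the live registry, a schema change on the server is picked up by the very next snippet execution, which is the "perpetually in sync" property the post describes.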
Tuesday, 4 November 2025
Show HN: Free Quantum-Resistant Timestamping API (Dual-Signature and Bitcoin) https://bit.ly/47YJoBW
Show HN: Free Quantum-Resistant Timestamping API (Dual-Signature and Bitcoin) SasaSavic.ca recently launched a public cryptographic timestamping service designed to remain verifiable even in a post-quantum world. The platform uses SasaSavic Quantum Shield™, a dual-signature protocol combining classical and post-quantum security. Each submitted SHA-256 hash is:
• Dual-signed with ECDSA P-256 and ML-DSA-65 (per NIST FIPS 204)
• Anchored to the Bitcoin blockchain via OpenTimestamps
• Recorded in a public, verifiable daily ledger
API (beta, no auth required): https://bit.ly/4nCbVSo
Example curl request:
curl -X POST https://bit.ly/47oAJZl -H "Content-Type: application/json" -d '{"hash":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}'
Verification and ledgers: https://bit.ly/4oWW0iQ https://bit.ly/3WHMUdA
The goal is to make cryptographic proofs quantum-resistant and accessible, while preserving user privacy — only the hash ever leaves the client side. Feedback from developers, auditors, and researchers on PQC integration and verification speed is welcome. More details and documentation: https://bit.ly/4qQb0AP – The SasaSavic.ca Team November 5, 2025 at 05:51AM
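On the client side, "only the hash ever leaves" means computing SHA-256 locally and posting just the digest. This sketch builds the request body from the curl example above (the HTTP call itself is left out); note the hash in the post's example is the well-known SHA-256 of empty input:

```python
import hashlib
import json

def timestamp_payload(data: bytes) -> str:
    """Hash the document locally and build the JSON body for the API."""
    digest = hashlib.sha256(data).hexdigest()
    return json.dumps({"hash": digest})

timestamp_payload(b"")
# '{"hash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}'
```

Verification later means recomputing the hash from your copy of the document and checking it against the signed, Bitcoin-anchored ledger entry.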
Show HN: ReadMyMRI DICOM native preprocessor with multi model consensus/ML pipes https://bit.ly/47T7Kgp
Show HN: ReadMyMRI DICOM native preprocessor with multi model consensus/ML pipes I'm building ReadMyMRI to solve a problem I kept running into: getting medical imaging data (DICOM files) ready for machine learning without violating HIPAA or losing critical context. What it does: ReadMyMRI is a preprocessing pipeline that takes raw DICOM medical images (MRIs, CTs, etc.) and:
- Strips all Protected Health Information (PHI) automatically while preserving DICOM metadata integrity
- Compresses images to manageable sizes without destroying diagnostic quality
- Links deidentified scans to user-provided clinical context (symptoms, demographics, outcomes)
- Uses multi-model AI consensus analysis, both for consumer-facing second opinions and for clinical decision-making support at the bedside
- Outputs everything into a single dataframe ready for ML training using Daft (Eventual's distributed dataframe library)
Technical approach:
- Built on pydicom for DICOM manipulation
- Uses Pillow/OpenCV for quality-preserving compression
- Daft integration for distributed processing of large medical imaging datasets
- Frontier models for multi-model analysis (still debating this)
What I'm looking for:
- Feedback from anyone working with medical imaging ML
- Edge cases I haven't thought about
- Whether the Daft integration actually makes sense for your use case, or if plain pandas would be better
- HIPAA/privacy concerns I am not thinking about
Happy to answer questions about the architecture, HIPAA considerations, or why medical imaging data is such a pain to work with. https://bit.ly/4qPXdKz November 4, 2025 at 11:47PM
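The "strip PHI, keep the technical metadata" step can be pictured over a plain dict rather than a real pydicom Dataset, so nothing below is ReadMyMRI's actual code: drop the identifying tags on a blocklist, keep imaging parameters intact. (Real deidentification is harder: burned-in pixel annotations, private tags, and dates all need handling too.)

```python
# Illustrative PHI blocklist; a production list follows DICOM PS3.15 profiles.
PHI_TAGS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName", "InstitutionName",
}

def deidentify(dataset):
    """Return a copy with PHI tags removed and imaging metadata preserved."""
    return {tag: value for tag, value in dataset.items() if tag not in PHI_TAGS}

scan = {
    "PatientName": "DOE^JANE",
    "PatientID": "12345",
    "Modality": "MR",
    "SliceThickness": 1.2,
}
clean = deidentify(scan)
# PatientName/PatientID are gone; Modality and SliceThickness survive for ML
```

With pydicom the same idea applies per data element, but the blocklist-vs-allowlist choice and handling of private tags are where the HIPAA edge cases live.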
Show HN: Barcable – We Built Agents That Automatically Load Test Your Back End https://bit.ly/4qPSzfB
Show HN: Barcable – We Built Agents That Automatically Load Test Your Back End Hey HN, we’re Iyan and Datta, founders of Barcable. Barcable connects to your backend (HTTP, gRPC, GraphQL) and uses autonomous agents to generate and run load tests directly inside your CI/CD. No configs, no scripts. It scans your repo, understands your API routes, and builds real test scenarios that hit your endpoints with realistic payloads. Docs: https://bit.ly/4qH6699
We built this out of frustration. Every team we’ve worked with ran into the same issue: reliability testing never kept up with development speed. Pipelines deploy faster than anyone can validate performance. Most “load tests” are brittle JMeter relics or one-off scripts that rot after the first refactor. Barcable is our attempt to automate that. It:
- Parses your OpenAPI spec or code to discover endpoints automatically
- Generates realistic load tests from PR diffs (no manual scripting)
- Spins up isolated Cloud Run jobs to execute at scale
- Reports latency, throughput, and error breakdowns directly in your dashboard
- Hooks into your CI so tests run autonomously before deploys
Each agent handles a part of the process—discovery, generation, execution, analysis—so testing evolves with your codebase rather than fighting against it. Right now it works best with Dockerized repos. You can onboard from GitHub, explore endpoints, generate tests, run them, and see metrics in a unified dashboard. It’s still a work in progress. We’ll create accounts manually and share credentials with anyone interested in trying it out. We’re keeping access limited for now because of Cloud Run costs. We’re not trying to replace performance engineers, just make it easier for teams to catch regressions and incidents before production without the setup tax. Would love feedback from anyone who’s been burned by flaky load testing pipelines or has solved reliability differently. We’re especially curious about gRPC edge cases and complex auth setups.
HN has always been a huge source of inspiration for us, and we’d love to hear how you’d test it, break it, or make it better. — Iyan & Datta https://bit.ly/4hE8Qji November 5, 2025 at 12:25AM
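The first step the post lists, discovering endpoints from an OpenAPI spec, can be sketched in a few lines. This is a hypothetical illustration, not Barcable's implementation; their agents additionally synthesize payloads, handle auth, and run the tests in Cloud Run.

```python
# Walk an OpenAPI document's Paths Object and emit (method, path) pairs
# that a load-test generator could target.

def discover_endpoints(spec: dict) -> list[tuple[str, str]]:
    """Extract (HTTP method, path) pairs from an OpenAPI document."""
    methods = {"get", "post", "put", "patch", "delete"}
    endpoints = []
    for path, operations in spec.get("paths", {}).items():
        for method in operations:
            if method in methods:
                endpoints.append((method.upper(), path))
    return endpoints

# A tiny spec fragment for illustration.
spec = {
    "paths": {
        "/users": {"get": {}, "post": {}},
        "/users/{id}": {"get": {}},
    }
}
endpoints = discover_endpoints(spec)
```

Everything downstream (realistic payloads, concurrency ramps, latency reporting) builds on a discovery pass of roughly this shape.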
Show HN: Yourshoesmells.com – Find the most smelly boulder gym https://bit.ly/4nAhCAk
Show HN: Yourshoesmells.com – Find the most smelly boulder gym A crowdsourced map for ranking bouldering gym stinkiness and difficulty. Get a detailed view of each gym: “Is there top rope in the gym?” “Any training boards?” https://bit.ly/4qFZhEN November 4, 2025 at 10:11AM
Show HN: Glitch Text Generator – Create stunning unicode text effects https://bit.ly/4hMVzFu
Show HN: Glitch Text Generator – Create stunning unicode text effects https://bit.ly/4hDWF6a November 4, 2025 at 02:57AM
Monday, 3 November 2025
Show HN: MyTimers.app offline-first PWA with no build step and zero dependencies https://bit.ly/4oMgmuN
Show HN: MyTimers.app offline-first PWA with no build step and zero dependencies Hello, For quite some time, I've been unsatisfied with the built-in timers on both Android and iOS, especially for workouts, where I needed a configurable number of sets with rest periods in between. That's when I started thinking about building something myself. It was just a timer and I said to myself "how hard could it be?"; I had no idea.
The first iteration of the project worked "just fine", but the UI was an eyesore (even more than it is now), and the UX was quite awful as well. As you can probably guess, I'm not versed in design or front-end development. In fact, my last real experience with front-end work was back when jQuery was still a thing. However, I knew what I wanted to build, and over the last few days (and with the help of the infamous AI) I was able to wrap up the project for my needs. It required quite a lot of "hand holding" and "back and forth", but it helped me smooth out the rough edges and provided great suggestions about the latest ES6 features.
The project is, as the title states, an offline-first PWA with zero dependencies; no build step, no cookies, no links, no analytics, nothing other than timers. It uses `Web Components` (a really nice feature, in my opinion, though I still don't get why we can't easily inherit styles from the global scope) and `localStorage` to save timers between uses. I'd appreciate any comments or suggestions, since I just want to keep learning new things. https://bit.ly/43QqyKG November 4, 2025 at 05:46AM
Show HN: Chess960v2 – Stockfish tournament with different starting positions https://bit.ly/47CFqNR
Show HN: Chess960v2 – Stockfish tournament with different starting positions https://bit.ly/47PpSaW November 4, 2025 at 05:27AM
Sunday, 2 November 2025
Show HN: Chatolia – create, train and deploy your own AI agents https://bit.ly/4nAUHF4
Show HN: Chatolia – create, train and deploy your own AI agents Hi everyone, I've built Chatolia, a platform that lets you create your own AI chatbots, train them with your own data, and deploy them to your website. It is super simple to get started:
- Create your agent
- Train it with your data
- Deploy it anywhere
You can start for free; the free plan includes 1 agent and 500 message credits per month. Would love to hear your thoughts. https://bit.ly/4ntJ56B November 2, 2025 at 10:08PM
Show HN: I built a Raspberry Pi webcam to train my dog (using Claude) https://bit.ly/4nyemp6
Show HN: I built a Raspberry Pi webcam to train my dog (using Claude) Hey HN! I’m a Product Manager and made a DIY doggy cam (using Claude and a Raspberry Pi) to help train my dog with separation anxiety. I wrote up a blog post sharing my experience building this project with AI. https://bit.ly/4oQDubQ November 3, 2025 at 01:04AM
Saturday, 1 November 2025
Show HN: A simple drag and drop tool to document and label fuse boxes https://bit.ly/3JgHT8R
Show HN: A simple drag and drop tool to document and label fuse boxes https://bit.ly/4qETijw October 31, 2025 at 02:40PM
Show HN: Duper – The Format That's Super https://bit.ly/49yF8dv
Show HN: Duper – The Format That's Super An MIT-licensed human-friendly extension of JSON with quality-of-life improvements (comments, trailing commas, unquoted keys), extra types (tuples, bytes, raw strings), and semantic identifiers (think type annotations). Built in Rust, with bindings for Python and WebAssembly, as well as syntax highlighting in VSCode. I made it for those like me who hand-edit JSONs and want a breath of fresh air. It's at a good enough point that I felt like sharing it, but there's still plenty I wanna work on! Namely, I want to add (real) Node support, make a proper LSP with auto-formatting, and get it out there before I start thinking about stabilization. https://bit.ly/47n30iU November 2, 2025 at 12:41AM
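A toy sketch (not Duper's actual Rust parser, and ignoring edge cases like `//` inside string literals) of two of the quality-of-life extensions mentioned, comments and trailing commas, normalized down to plain JSON:

```python
import json
import re

def relaxed_loads(text: str):
    """Parse JSON extended with // line comments and trailing commas."""
    text = re.sub(r"//[^\n]*", "", text)          # drop // line comments
    text = re.sub(r",\s*([}\]])", r"\1", text)    # drop trailing commas
    return json.loads(text)

doc = """
{
  "name": "duper",  // a human-friendly superset of JSON
  "tags": ["config", "data",],
}
"""
parsed = relaxed_loads(doc)
```

Duper's extra types (tuples, bytes, raw strings) and semantic identifiers need a real grammar rather than regex preprocessing, which is presumably why it ships a proper Rust parser with Python and WebAssembly bindings.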
Show HN: KeyLeak Detector – Scan websites for exposed API keys and secrets https://bit.ly/4nviWEr
Show HN: KeyLeak Detector – Scan websites for exposed API keys and secrets I built this after seeing multiple teams accidentally ship API keys in their frontend code. The problem: Modern web development moves fast. You're vibe-coding, shipping features, and suddenly your AWS keys are sitting in a
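A minimal sketch of the kind of scanning such a tool performs, assuming simple regexes over fetched page or bundle text for well-known key formats (real scanners use many more patterns plus entropy heuristics; the example key below is a documentation placeholder, not a live credential):

```python
import re

# Pattern set chosen for illustration only.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, matched string) for every hit in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

bundle = 'const cfg = { key: "AKIAIOSFODNN7EXAMPLE" };'
findings = scan_for_secrets(bundle)
```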
Show HN: Find and download fonts from any website (weekend project) https://bit.ly/3X43GUm
Show HN: Find and download fonts from any website (weekend project) https://bit.ly/3X9cc4o November 1, 2025 at 11:40PM