Monday, 10 November 2025

Show HN: Privacy Experiment – Rewriting HTTPS, TLS, and TCP/IP Packet Headers https://bit.ly/3JDb7Pn

Show HN: Privacy Experiment – Rewriting HTTPS, TLS, and TCP/IP Packet Headers

The README: https://bit.ly/3JWsNpd Or the LP: https://404-nf/carrd.co Or read on...

In a small enough group of people, your TLS handshake can be enough to identify you as a unique client. Around six months ago, I began learning about client fingerprinting. I had understood that it was getting better and more precise, but did not realize the ease with which a server could fingerprint a user - after all, you're just giving up all the cookies! Fingerprinting, for the modern internet experience, has become almost a necessity. It was concerning to me that servers began using the very features that we rely on for security to identify and fingerprint clients:

- JS - collection of your JS property values
- Font - collection of your downloaded fonts
- JA3/4 - TLS cipher-suite fingerprint
- JA4/T - TCP packet header fingerprint (TTL, MSS, window size/scale, TSval/ecr, etc.)
- HTTPS - HTTPS header fingerprint (UA, sec-ch, etc.)
- Much more...

So, I built a tool to give me control of my fingerprint at multiple layers:

- Localhost mitmproxy handles HTTPS headers and TLS cipher-suite negotiation
- eBPF + Linux TC rewrites TCP packet headers (TTL, window size, etc.)
- Coordinated spoofing ensures all layers present a consistent, chosen fingerprint
- (not yet cohesive)

Current status: this is a proof of concept that successfully spoofs JA3/JA4 (TLS), JA4T (TCP), and HTTP fingerprints. It's rough around the edges and requires some Linux knowledge to set up.

When so many telemetry points are collected from a single SYN/ACK interaction, the precision with which a server can identify a unique client becomes concerning. Certain individuals and organizations noticed this and produced resources to help people better understand the amount of data they leave behind on the internet: amiunique.org, browserleaks.com, and coveryourtracks.eff.org, to name a few. This is the bare bones, but it's a fight against server-side passive surveillance. Tools like nmap and p0f have been exploiting this for the last two decades, and almost no tooling has been developed to fight it - the viable options (Burp Suite) are not marketed for privacy.

Even beyond this, with all values comprehensively and cohesively spoofed, SSO tokens can still follow us around and reveal our identity. When the SDKs of the largest companies like Google are so deeply ingrained into development flows, this is a no-go. So this project will evolve: I'm looking to add some sort of headless/headful swarm that pollutes your SSO history - legal hurdles be damned.

I haven't shared this in a substantial way, and really just finished polishing up a prerelease, barely working version about a week ago. I am not a computer science or cybersecurity engineer, just someone with a passion for privacy who is okay with computers. This is a proof of concept for a larger tool. Due to the nature of TCP/IP packet headers, if this software were to run on a distributed mesh network, privacy could be distributed on a mixnet like the one Nym Technologies is trying to achieve. All of the pieces are there, they just haven't been put together in the right way. I think I can almost see the whole puzzle... November 11, 2025 at 02:27AM
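For the HTTP-layer piece, a mitmproxy addon is the usual mechanism. Below is a minimal sketch of that idea only - the profile values and the addon itself are illustrative, not the project's code - assuming mitmproxy is running as the localhost proxy:

# spoof_headers.py - minimal mitmproxy addon sketch (hypothetical profile values).
# It normalizes a few HTTP header fields that servers commonly fingerprint.
# Run with: mitmproxy -s spoof_headers.py
from mitmproxy import http

# Hypothetical target profile; a full tool would keep this consistent
# with the TLS and TCP layers as well.
PROFILE = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0",
    "Accept-Language": "en-US,en;q=0.5",
}

def request(flow: http.HTTPFlow) -> None:
    # Overwrite fingerprint-relevant request headers with the chosen profile.
    for name, value in PROFILE.items():
        flow.request.headers[name] = value
    # Drop client-hint headers that leak browser/OS details.
    for hint in [h for h in flow.request.headers if h.lower().startswith("sec-ch-")]:
        del flow.request.headers[hint]

The eBPF/TC piece rewrites the TCP-level values (TTL, window size, etc.) below the proxy, so both layers need to agree on a single target profile.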

Show HN: Nano Banana 2 – AI image generation SaaS powered by Google Gemini https://bit.ly/4icIIN3

Show HN: Nano Banana 2 – AI image generation SaaS powered by Google Gemini Hi HN! I built Nano Banana 2, a SaaS platform for AI-powered image generation using Google's Gemini 2.5 Flash. You can try it for free. Would love your feedback! https://bit.ly/4hWCfWf November 11, 2025 at 05:42AM

Show HN: A Free Instagram Story Viewer That Lets You Watch Anonymously https://bit.ly/47OebjL

Show HN: A Free Instagram Story Viewer That Lets You Watch Anonymously https://bit.ly/43TWLAU November 11, 2025 at 02:03AM

Sunday, 9 November 2025

Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer https://bit.ly/4oZgwiu

Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer

A SQL-first analytics IDE, similar to Redash/Metabase. It aims to solve reuse/composability at the code layer with a modified syntax, Trilogy, which includes a semantic layer directly in the SQL-like language. Status: experiment; feedback and contributions welcome!

Built to solve three problems I have with SQL as my primary iterative analysis language:

1. Adjusting queries/analysis takes a lot of boilerplate. Solved with queries that operate on the semantic layer, not tables. This also eliminates the need for CTEs.
2. Sources of truth change all the time, and I hate updating reports to reference new tables. Also solved by the semantic layer, since data bindings can be updated without changing dashboards or queries.
3. Getting from SQL to visuals is too much work in many tools; make it as streamlined as possible. Surprise - solved with the semantic layer; add more expressive typing to get better defaults, and also use it to wire up automatic drilldowns/cross-filtering.

Supports: BigQuery, DuckDB, Snowflake.

Links: https://bit.ly/4oIcV8M (language info)
Git links: [Frontend] https://bit.ly/4p34dSx [Language] https://bit.ly/4ot4xdf
Previously: https://bit.ly/3He2JnN (significant UX/feature reworks since) https://bit.ly/3B9PkdE https://bit.ly/47NxSrY November 10, 2025 at 12:26AM

Show HN: Alignmenter – Measure brand voice and consistency across model versions https://bit.ly/4hMijoM

Show HN: Alignmenter – Measure brand voice and consistency across model versions

I built a framework for measuring persona alignment in conversational AI systems.

*Problem:* When you ship an AI copilot, you need it to maintain a consistent brand voice across model versions. But "sounds right" is subjective. How do you make it measurable?

*Approach:* Alignmenter scores three dimensions:

1. *Authenticity*: style similarity (embeddings) + trait patterns (logistic regression) + lexicon compliance + optional LLM judge
2. *Safety*: keyword rules + offline classifier (distilroberta) + optional LLM judge
3. *Stability*: cosine variance across response distributions

The interesting part is calibration: you can train persona-specific models on labeled data. Grid search over component weights, estimate normalization bounds, and optimize for ROC-AUC.

*Validation:* We published a full case study using Wendy's Twitter voice:

- Dataset: 235 turns, 64 on-brand / 72 off-brand (balanced)
- Baseline (uncalibrated): 0.733 ROC-AUC
- Calibrated: 1.0 ROC-AUC, 1.0 F1
- Learned: style > traits > lexicon (0.5/0.4/0.1 weights)

Full methodology: https://bit.ly/3XiazS3 There's a full walkthrough so you can reproduce the results yourself.

*Practical use:*

pip install alignmenter[safety]
alignmenter run --model openai:gpt-4o --dataset my_data.jsonl

It's Apache 2.0, works offline, and is designed for CI/CD integration. GitHub: https://bit.ly/47vmphM

Interested in feedback on the calibration methodology and whether this problem resonates with others. https://bit.ly/4nQMSeC November 10, 2025 at 12:53AM
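The stability dimension is easy to picture with a small sketch: embed each response, compare every pair, and see how much the pairwise cosine similarity varies. This is only my reading of "cosine variance across response distributions", not Alignmenter's actual implementation, and the embeddings are assumed to come from whatever model you already use:

import numpy as np

def stability_score(embeddings: np.ndarray) -> float:
    """embeddings: one row per model response, from any embedding model."""
    vecs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)  # unit-normalize
    sims = vecs @ vecs.T                                                   # pairwise cosine similarity
    iu = np.triu_indices(len(vecs), k=1)                                   # unique pairs only
    # Low variance in pairwise similarity = a consistent voice across responses.
    return float(1.0 - np.var(sims[iu]))

# Toy check with random vectors; real use would embed actual model responses.
rng = np.random.default_rng(0)
print(stability_score(rng.normal(size=(8, 384))))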

Saturday, 8 November 2025

Show HN: Serve 100 Large AI models on a single GPU with low impact to TTFT https://bit.ly/4qNmEME

Show HN: Serve 100 Large AI models on a single GPU with low impact to TTFT

I wanted to build an inference provider for proprietary AI models, but I did not have a huge GPU farm. I started experimenting with serverless AI inference, but found that cold starts were huge. I went deep into the research and put together an engine that loads large models from SSD to VRAM up to ten times faster than alternatives. It works with vLLM and transformers, with more coming soon. With this project you can hot-swap entire large models (32B) on demand. It's great for:

- Serverless AI inference
- Robotics
- On-prem deployments
- Local agents

And it's open source. Let me know if anyone wants to contribute :) https://bit.ly/4nKufsu November 9, 2025 at 12:48AM

Show HN: DeepShot – NBA game predictor with 70% accuracy using ML and stats https://bit.ly/4nMRgLB

Show HN: DeepShot – NBA game predictor with 70% accuracy using ML and stats I built DeepShot, a machine learning model that predicts NBA games using rolling statistics, historical performance, and recent momentum — all visualized in a clean, interactive web app. Unlike simple averages or betting odds, DeepShot uses Exponentially Weighted Moving Averages (EWMA) to capture recent form and momentum, highlighting the key statistical differences between teams so you can see why the model favors one side. It’s powered by Python, XGBoost, Pandas, Scikit-learn, and NiceGUI, runs locally on any OS, and relies only on free, public data from Basketball Reference. If you’re into sports analytics, machine learning, or just curious whether an algorithm can outsmart Vegas, check it out and let me know what you think: https://bit.ly/4j2dU0S https://bit.ly/4j2dU0S November 9, 2025 at 01:19AM
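The EWMA idea maps directly onto pandas' ewm(). Here is a rough sketch of that feature step; it assumes nothing about DeepShot's actual schema (team, pts, opp_pts, date are illustrative column names):

import pandas as pd

# Toy game log; the real data comes from Basketball Reference and will differ.
games = pd.DataFrame({
    "date": pd.to_datetime(["2025-01-01", "2025-01-03", "2025-01-05", "2025-01-02", "2025-01-04"]),
    "team": ["BOS", "BOS", "BOS", "LAL", "LAL"],
    "pts":  [112, 98, 120, 105, 131],
    "opp_pts": [101, 110, 99, 117, 128],
}).sort_values("date")

def ewma_features(df: pd.DataFrame, span: int = 10) -> pd.DataFrame:
    # shift(1) so each row only sees games played *before* it (no target leakage),
    # then take an exponentially weighted mean over that history, per team.
    shifted = df.groupby("team")[["pts", "opp_pts"]].shift(1)
    ewma = shifted.groupby(df["team"]).transform(lambda s: s.ewm(span=span).mean())
    return df.assign(pts_ewma=ewma["pts"], opp_pts_ewma=ewma["opp_pts"])

print(ewma_features(games))
# A matchup feature for the model could then be home_pts_ewma - away_pts_ewma.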

Show HN: Livestream of a coding agent controlled by public chat https://bit.ly/4hPRQH3

Show HN: Livestream of a coding agent controlled by public chat https://bit.ly/4qQPL1D November 8, 2025 at 06:10PM

Friday, 7 November 2025

Show HN: A DevTools-Level JavaScript API for DOM and CSS Style Rules https://bit.ly/3JObXsq

Show HN: A DevTools-Level JavaScript API for DOM and CSS Style Rules

It is a wrapper around the Chrome DevTools Protocol (CDP), the same API that DevTools uses, to inspect elements programmatically and as intuitively as accessing the DOM.

Why this? I have seen too many tools claiming they can get matched CSS style rules when they actually only return computed styles. The real DevTools data - CSS rules, selectors, and cascade order - is what we want to retrieve programmatically, yet CDP is hard to use and full of undocumented quirks. One has to observe DevTools' behavior and dig through the huge DevTools frontend codebase to learn how to use it. Having worked on a Chromium fork before, I felt it was time to solve this once and for all.

What can we build around this? That's what I'd love to ask you all. Probably like many, MCP was what came to my mind first, but then I wondered: given this simple API, maybe agents could just write scripts directly? I need opinions.

My own use case was CSS inlining. This library was actually split from my UI cloner project: https://bit.ly/4osrEEK I was porting a WordPress + Elementor site and wanted to automate the CSS translation from unreadable stylesheets.

So, what do you think? Any ideas, suggestions, or projects to build upon? Would love to hear your thoughts - and feel free to share your own projects in the comments! https://bit.ly/47JOS21 November 8, 2025 at 05:13AM
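To make concrete what such a wrapper hides, here is a hedged sketch of the raw CDP calls involved; CSS.getMatchedStylesForNode is the method behind the DevTools Styles pane. It assumes Chrome was launched with --remote-debugging-port=9222 and uses the Python websockets package directly rather than this library's own API:

import asyncio, itertools, json, urllib.request
import websockets

async def matched_styles(selector: str) -> dict:
    # Grab the debugger websocket URL of the first open page.
    targets = json.load(urllib.request.urlopen("http://localhost:9222/json"))
    ws_url = next(t["webSocketDebuggerUrl"] for t in targets if t["type"] == "page")
    ids = itertools.count(1)

    async with websockets.connect(ws_url, max_size=None) as ws:
        async def call(method: str, params: dict | None = None) -> dict:
            msg_id = next(ids)
            await ws.send(json.dumps({"id": msg_id, "method": method, "params": params or {}}))
            while True:  # skip protocol events until our reply arrives
                reply = json.loads(await ws.recv())
                if reply.get("id") == msg_id:
                    return reply.get("result", {})

        await call("DOM.enable")
        await call("CSS.enable")
        doc = await call("DOM.getDocument")
        node = await call("DOM.querySelector",
                          {"nodeId": doc["root"]["nodeId"], "selector": selector})
        # This is the DevTools "Styles" pane data: matched rules, selectors, cascade order.
        return await call("CSS.getMatchedStylesForNode", {"nodeId": node["nodeId"]})

print(json.dumps(asyncio.run(matched_styles("h1")), indent=2)[:2000])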

Show HN: Find matching acrylic paints for any HEX color https://bit.ly/47QvAby

Show HN: Find matching acrylic paints for any HEX color https://bit.ly/4hUvuV2 November 3, 2025 at 04:20PM

Show HN: Rankly – The only AEO platform to track AI visibility and conversions https://bit.ly/3LvyNph

Show HN: Rankly – The only AEO platform to track AI visibility and conversions Most GEO/AEO tools stop at AI visibility. Rankly goes further: we track the entire AI visibility funnel, from mentions to conversions. As brands start showing up in LLM results, the next question isn't visibility, it's traffic quality and conversions. Rankly builds dynamic, data-driven journeys for high-intent LLM traffic. https://bit.ly/3LueNU7 November 7, 2025 at 11:49PM

Thursday, 6 November 2025

Show HN: VT Code – Rust TUI coding agent with Tree-sitter and AST-grep https://bit.ly/3WFJbgL

Show HN: VT Code – Rust TUI coding agent with Tree-sitter and AST-grep

I've been building VT Code, a Rust-based terminal coding agent that combines semantic code intelligence (Tree-sitter + ast-grep) with multi-provider LLMs and a defense-in-depth execution model. It runs in your terminal with a streaming TUI, and also integrates with editors via ACP and a VS Code extension.

* Semantic understanding: parses your code with Tree-sitter and does structural queries with ast-grep.
* Multi-LLM with failover: OpenAI, Anthropic, xAI, DeepSeek, Gemini, Z.AI, Moonshot, OpenRouter, MiniMax, and Ollama for local; swap by env var.
* Security first: tool allowlist + per-arg validation, workspace isolation, optional Anthropic sandbox, HITL approvals, audit trail.
* Editor bridges: Agent Context Protocol support (Zed); VS Code extension (also works in Open VSX-compatible editors like Cursor/Windsurf).
* Configurable: vtcode.toml with tool policies, lifecycle hooks, context budgets, and timeouts.

GitHub: https://bit.ly/4nZmJev https://bit.ly/4nZmJev November 7, 2025 at 02:14AM

Show HN: Auto-Adjust Keyboard and LCD Brightness via Ambient Light Sensor [Linux] https://bit.ly/3Xesw3R

Show HN: Auto-Adjust Keyboard and LCD Brightness via Ambient Light Sensor [Linux]

I have always wanted cool features like this in Linux, which I use day to day as my OS, and I wanted to implement this one properly: a program that automatically adjusts the keyboard and LCD backlights using data from the ambient light sensor. I enjoy low-level programming a lot, so I delved into writing this program in C. It came out well and works seamlessly on my device. Currently it only supports keyboard lights, but I designed it so that LCD support can be added seamlessly in the future. Since in the real world people have different kinds of devices, I made sure to follow the kernel's IIO interface through sysfs. I would like feedback. :) https://bit.ly/497NMzA November 2, 2025 at 12:03PM
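For anyone curious what the sysfs plumbing looks like, here is a rough Python sketch of the same read-sensor/write-LED loop. The project itself is written in C, and the device paths below vary from machine to machine, so treat them as examples rather than guaranteed locations:

import time
from pathlib import Path

# Example paths; on your device the IIO sensor and LED names will likely differ.
ALS = Path("/sys/bus/iio/devices/iio:device0/in_illuminance_raw")  # ambient light sensor
KBD = Path("/sys/class/leds/tpacpi::kbd_backlight")                # keyboard backlight LED

def lux_to_level(lux: int, max_level: int) -> int:
    # Simple thresholding: dark room -> full backlight, bright room -> off.
    if lux < 50:
        return max_level
    if lux < 400:
        return max_level // 2
    return 0

max_level = int((KBD / "max_brightness").read_text())
while True:
    lux = int(ALS.read_text())
    (KBD / "brightness").write_text(str(lux_to_level(lux, max_level)))
    time.sleep(2)  # poll the sensor every couple of seconds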

Wednesday, 5 November 2025

Show HN: Data Formulator 0.5 – Vibe with your data (Microsoft Research) https://bit.ly/4qLWYA1

Show HN: Data Formulator 0.5 – Vibe with your data (Microsoft Research)

Data Formulator 0.5 released! It's a new research prototype from the Data Formulator team @ Microsoft Research. It's quite a leap since our first prototype last year -- we bring agent mode to interacting with data, together with an online demo that you can play with ( https://bit.ly/3JwfE6b ).

"Vibe with your data, in control" -- featuring agent mode + interactive control to play with data, it should be more fun than the last time you tried it!

- Load whatever data - structured data, database connections, or extract from screenshots/messy text
- Flexible AI exploration - full agent mode OR hybrid UI+NL control for precision
- Data threads - branch, backtrack, and manage multiple exploration paths
- Interpretable results - inspect charts, formulas, explanations, and generated code
- Report generation - AI creates shareable insights grounded in your data

* Online demo: https://bit.ly/3JwfE6b
* GitHub: https://bit.ly/48iS4l3
* New video: https://www.youtube.com/watch?v=GfTE2FLyMrs
* Product Hunt page: https://bit.ly/47XR6Mx

https://bit.ly/3JwfE6b November 6, 2025 at 08:09AM

Show HN: SSH terminal multiplayer written in Golang https://bit.ly/4ojEMMu

Show HN: SSH terminal multiplayer written in Golang

To play, go here: ssh web2u.org -p6996

The rules:
- The goal is to claim the most space.
- The secondary goal is to kill as many other Ouroboroses as you can.

How:
- To claim space, you need to either eat your own tail or reach tiles you've already claimed; tiles that are enclosed when you do so become yours!
- To kill other snakes, hit their tails.

Watch out:
- Other players can kill you by hitting your exposed tail.
- Other players can take your tiles.

https://bit.ly/4ouicRx November 6, 2025 at 03:57AM

Show HN: Dynamic Code Execution with MCP: A More Efficient Approach https://bit.ly/3JErL11

Show HN: Dynamic Code Execution with MCP: A More Efficient Approach I've been working on a more efficient approach to code execution with MCP servers that eliminates the filesystem overhead described in Anthropic's recent blog post. The Anthropic post (https://bit.ly/4hOsF7N) showed how agents can avoid token bloat by writing code to call MCP tools instead of using direct tool calls. Their approach generates TypeScript files for each tool to enable progressive discovery. It works well but introduces complexity: you need to generate files for every tool, manage complex type schemas, rebuild when tools update, and handle version conflicts. At scale, 1000 MCP tools means maintaining 1000 generated files. I built codex-mcp using pure dynamic execution. Instead of generating files, we expose just two lightweight tools: list_mcp_tools() returns available tool names, and get_mcp_tool_details(name) loads definitions on demand. The agent explores tools as if navigating a filesystem, but nothing actually exists on disk. Code snippets are stored in-memory as strings in the chat session data. When you execute a snippet, we inject a callMCPTool function directly into the execution environment using AsyncFunction constructor. No imports, no filesystem dependencies, just runtime injection. The function calls mcpManager.tools directly, so you're always hitting the live MCP connection. This means tools are perpetually in sync. When a tool's schema changes on the server, you're already calling the updated version. No regeneration, no build step, no version mismatches. The agent gets all the same benefits of the filesystem approach (progressive discovery, context efficiency, complex control flow, privacy preservation) without any of the maintenance overhead. One caveat: the MCP protocol doesn't enforce output schemas, so chaining tool calls requires defensive parsing since the model can't predict output structure. This affects all MCP implementations though, not specific to our approach. The dynamic execution is made possible by Vercel AI SDK's MCP support, which provides the runtime infrastructure to call MCP tools directly from code. Project: https://bit.ly/47rUOOz Would love feedback from folks working with MCP at scale. Has anyone else explored similar patterns? https://bit.ly/47rUOOz November 6, 2025 at 02:23AM
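The injection trick is easier to see in code. The project does it with the JS AsyncFunction constructor on top of the Vercel AI SDK; the sketch below is a Python analog of the same pattern, just to show how a tool-caller can be handed to a stored snippet at runtime with nothing generated on disk. Names like call_mcp_tool and the FakeMCPManager stand-in are illustrative, not the project's API:

import asyncio

class FakeMCPManager:
    """Stand-in for a live MCP connection; calls always hit the live server,
    so there are no generated wrapper files to fall out of sync."""
    async def call(self, tool: str, args: dict) -> dict:
        return {"tool": tool, "echo": args}

async def run_snippet(snippet: str, mcp_manager: FakeMCPManager):
    async def call_mcp_tool(name: str, args: dict) -> dict:
        # Injected at runtime: the snippet never imports anything from disk.
        return await mcp_manager.call(name, args)

    namespace = {"call_mcp_tool": call_mcp_tool}
    # Compile the stored snippet string into an async function body and run it.
    code = "async def __snippet__():\n" + "".join(f"    {line}\n" for line in snippet.splitlines())
    exec(code, namespace)
    return await namespace["__snippet__"]()

snippet = """
result = await call_mcp_tool("list_issues", {"repo": "example/repo"})
return result
"""
print(asyncio.run(run_snippet(snippet, FakeMCPManager())))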

Tuesday, 4 November 2025

Show HN: Free Quantum-Resistant Timestamping API (Dual-Signature and Bitcoin) https://bit.ly/47YJoBW

Show HN: Free Quantum-Resistant Timestamping API (Dual-Signature and Bitcoin)

SasaSavic.ca recently launched a public cryptographic timestamping service designed to remain verifiable even in a post-quantum world. The platform uses SasaSavic Quantum Shield™, a dual-signature protocol combining classical and post-quantum security. Each submitted SHA-256 hash is:

• Dual-signed with ECDSA P-256 and ML-DSA-65 (per NIST FIPS 204)
• Anchored to the Bitcoin blockchain via OpenTimestamps
• Recorded in a public, verifiable daily ledger

API (beta, no auth required): https://bit.ly/4nCbVSo

Example curl request:

curl -X POST https://bit.ly/47oAJZl -H "Content-Type: application/json" -d '{"hash":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}'

Verification and ledgers: https://bit.ly/4oWW0iQ https://bit.ly/3WHMUdA

The goal is to make cryptographic proofs quantum-resistant and accessible, while preserving user privacy — only the hash ever leaves the client side. Feedback from developers, auditors, and researchers on PQC integration and verification speed is welcome.

More details and documentation: https://bit.ly/4qQb0AP – The SasaSavic.ca Team November 5, 2025 at 05:51AM
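Because only the digest needs to leave the machine, a client can hash the file locally and then submit the hex digest exactly as in the curl example. A small sketch follows; the endpoint and the JSON body shape are taken from that example, and the response fields should be adjusted to whatever the documented API returns:

import hashlib, json, urllib.request

def timestamp_file(path: str, endpoint: str) -> dict:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({"hash": h.hexdigest()}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (endpoint from the curl request above):
# print(timestamp_file("contract.pdf", "https://bit.ly/47oAJZl"))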

Show HN: ReadMyMRI DICOM native preprocessor with multi model consensus/ML pipes https://bit.ly/47T7Kgp

Show HN: ReadMyMRI DICOM native preprocessor with multi model consensus/ML pipes

I'm building ReadMyMRI to solve a problem I kept running into: getting medical imaging data (DICOM files) ready for machine learning without violating HIPAA or losing critical context.

What it does: ReadMyMRI is a preprocessing pipeline that takes raw DICOM medical images (MRIs, CTs, etc.) and:
- Strips all Protected Health Information (PHI) automatically while preserving DICOM metadata integrity
- Compresses images to manageable sizes without destroying diagnostic quality
- Links deidentified scans to user-provided clinical context (symptoms, demographics, outcomes)
- Uses multi-model AI consensus analysis for both consumer facing 2nd opinions and clinical decision making support at bedside
- Outputs everything into a single dataframe ready for ML training using Daft (Eventual's distributed dataframe library)

Technical approach:
- Built on pydicom for DICOM manipulation
- Uses Pillow/OpenCV for quality-preserving compression
- Daft integration for distributed processing of large medical imaging datasets
- Frontier models for multi model analysis (still debating this)

What I'm looking for:
- Feedback from anyone working with medical imaging ML
- Edge cases I haven't thought about
- Whether the Daft integration actually makes sense for your use case or if plain pandas would be better
- HIPAA/privacy concerns I am not thinking about

Happy to answer questions about the architecture, HIPAA considerations, or why medical imaging data is such a pain to work with. https://bit.ly/4qPXdKz November 4, 2025 at 11:47PM
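Since PHI stripping is the first step, here is a minimal pydicom sketch of that idea. It uses a small illustrative tag list rather than a full HIPAA de-identification profile, and it is not ReadMyMRI's actual pipeline:

import pydicom

# Illustrative subset; a real de-identification profile covers many more tags.
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
            "ReferringPhysicianName", "InstitutionName", "AccessionNumber"]

def deidentify(src: str, dst: str) -> None:
    ds = pydicom.dcmread(src)
    for tag in PHI_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""   # blank rather than delete, keeps structure
    ds.remove_private_tags()                  # vendor private tags often hide PHI too
    ds.save_as(dst)

# deidentify("scan.dcm", "scan_deid.dcm")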

Show HN: Barcable – We Built Agents That Automatically Load Test Your Back End https://bit.ly/4qPSzfB

Show HN: Barcable – We Built Agents That Automatically Load Test Your Back End

Hey HN, we're Iyan and Datta, founders of Barcable. Barcable connects to your backend (HTTP, gRPC, GraphQL) and uses autonomous agents to generate and run load tests directly inside your CI/CD. No configs, no scripts. It scans your repo, understands your API routes, and builds real test scenarios that hit your endpoints with realistic payloads. Docs: https://bit.ly/4qH6699

We built this out of frustration. Every team we've worked with ran into the same issue: reliability testing never kept up with development speed. Pipelines deploy faster than anyone can validate performance. Most "load tests" are brittle JMeter relics or one-off scripts that rot after the first refactor. Barcable is our attempt to automate that. It:

- Parses your OpenAPI spec or code to discover endpoints automatically
- Generates realistic load tests from PR diffs (no manual scripting)
- Spins up isolated Cloud Run jobs to execute at scale
- Reports latency, throughput, and error breakdowns directly in your dashboard
- Hooks into your CI so tests run autonomously before deploys

Each agent handles a part of the process (discovery, generation, execution, analysis), so testing evolves with your codebase rather than fighting against it.

Right now it works best with Dockerized repos. You can onboard from GitHub, explore endpoints, generate tests, run them, and see metrics in a unified dashboard. It's still a work in progress. We'll create accounts manually and share credentials with anyone interested in trying it out. We're keeping access limited for now because of Cloud Run costs.

We're not trying to replace performance engineers, just make it easier for teams to catch regressions and incidents before production without the setup tax. Would love feedback from anyone who's been burned by flaky load testing pipelines or has solved reliability differently. We're especially curious about gRPC edge cases and complex auth setups. HN has always been a huge source of inspiration for us, and we'd love to hear how you'd test it, break it, or make it better.

- Iyan & Datta https://bit.ly/4hE8Qji https://bit.ly/4hE8Qji November 5, 2025 at 12:25AM
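The discovery step is easy to illustrate: walk an OpenAPI document's paths and collect the operations an agent could target. A quick sketch follows; the field names come from the OpenAPI 3 structure, and this is illustrative rather than Barcable's implementation:

import json

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def discover_endpoints(spec_path: str) -> list[dict]:
    with open(spec_path) as f:
        spec = json.load(f)
    endpoints = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method.lower() in HTTP_METHODS:
                endpoints.append({
                    "method": method.upper(),
                    "path": path,
                    "summary": op.get("summary", ""),
                    # Parameters/requestBody are what a generator would use
                    # to synthesize realistic payloads for each route.
                    "has_body": "requestBody" in op,
                })
    return endpoints

# Example usage against a local spec file:
for ep in discover_endpoints("openapi.json"):
    print(ep["method"], ep["path"], "-", ep["summary"])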

Show HN: Yourshoesmells.com – Find the most smelly boulder gym https://bit.ly/4nAhCAk

Show HN: Yourshoesmells.com – Find the most smelly boulder gym A crowdsourced map for ranking Boulder gym stinkiness and difficulty. Get a detailed view of the gym. “Is there toprope in the gym?” “Any training boards?” https://bit.ly/4qFZhEN November 4, 2025 at 10:11AM