Wednesday, 12 November 2025
Show HN: KV Marketplace – share LLM attention caches across GPUs like memcached https://bit.ly/4i0oANS
Show HN: KV Marketplace – share LLM attention caches across GPUs like memcached https://bit.ly/47NqUVb November 12, 2025 at 10:22PM
Tuesday, 11 November 2025
Show HN: AI is a DJ https://bit.ly/47MWXTK
Show HN: AI is a DJ Set the mood/genre and sit back, let the AI mix music for you! https://bit.ly/49aKVG8 November 12, 2025 at 06:01AM
Show HN: Project AELLA – Open LLMs for structuring 100M research papers https://bit.ly/43qBgHX
Show HN: Project AELLA – Open LLMs for structuring 100M research papers We're releasing Project AELLA - an open-science initiative to make scientific knowledge more accessible through AI-generated structured summaries of research papers. Blog: https://bit.ly/3WMXjEW Visualizer: https://bit.ly/3WQtL9y Models: https://bit.ly/4p77d0d , https://bit.ly/3LDNoPy Highlights: - Released 100K research paper summaries in standardized JSON format with interactive visualization. - Fine-tuned open models (Qwen 3 14B & Nemotron 12B) that match GPT-5/Claude 4.5 performance at 98% lower cost (~$100K vs $5M to process 100M papers) - Built on distributed "idle compute" infrastructure - think SETI@Home for LLM workloads Goal: Process ~100M papers total, then link to OpenAlex metadata and convert to copyright-respecting "Knowledge Units" The models are open, evaluation framework is transparent, and we're making the summaries publicly available. This builds on Project Alexandria's legal/technical foundation for extracting factual knowledge while respecting copyright. Technical deep-dive in the post covers our training pipeline, dual evaluation methods (LLM-as-judge + QA dataset), and economic comparison showing 50x cost reduction vs closed models. Happy to answer questions about the training approach, evaluation methodology, or infrastructure! https://bit.ly/43qWIwj November 11, 2025 at 07:38PM
Show HN: mDNS name resolution for Docker container names https://bit.ly/43t5Xw6
Show HN: mDNS name resolution for Docker container names I always wanted this: an easy way to resolve Docker containers by name -- e.g., to reach web servers running in Docker containers on my dev machine. Of course, I could export ports from all these containers, try to keep them out of each other's hair on the host, and then use http://localhost:PORT. But why go through all that trouble? These containers already expose their respective ports on their own IP (e.g., 172.24.0.5:8123), so all I need is a convenient way to find them. mdns-docker allows you to, e.g., "ping my-container.docker.local", where it will find the IP of a running container whose name fuzzily matches the host. The way it does it is by running a local mDNS service that listens for `*.docker.local` requests, finding a running container whose name contains the requested host (here: "my-container"), getting that container's local IP address, and responding to the mDNS query with that IP. Example: Start a ClickHouse service (as an example) with `docker run --rm --name myclicky clickhouse:25.7` and then open https://bit.ly/4pb7Tln to open the built-in dashboard -- no port mapping required! If you haven't played with mDNS yet, you've been missing out on a lot of fun. It's easy to use and the possibilities for making your life easier are endless. It's also what Spotify and Chromecast use for local device discovery. https://bit.ly/3WNyDMD November 11, 2025 at 11:27PM
Monday, 10 November 2025
Show HN: Privacy Experiment – Rewriting HTTPS, TLS, and TCP/IP Packet Headers https://bit.ly/3JDb7Pn
Show HN: Privacy Experiment – Rewriting HTTPS, TLS, and TCP/IP Packet Headers The README: https://bit.ly/3JWsNpd Or the LP: https://404-nf/carrd.co Or read on... In a small enough group of people, your TLS handshake can be enough to identify you as a unique client. Around six months ago, I began learning about client fingerprinting. I had understood that it was getting better and more precise, but did not realize the ease with which a server could fingerprint a user - after all, you're just giving up all the cookies! Fingerprinting, for the modern internet experience, has become almost a necessity. It was concerning to me that servers began using the very features that we rely on for security to identify and fingerprint clients. - JS - Collection of your JS property values - Font - Collection of your downloaded fonts - JA3/4 - TLS cipher-suite FP - JA4/T - TCP packet header FP (TTL, MSS, Window Size/Scale, TSval/ecr, etc.) - HTTPS - HTTPS header FP (UA, sec-ch, etc.) - Much more... So, I built a tool to give me control of my fingerprint at multiple layers: - Localhost mitmproxy handles HTTPS headers and TLS cipher-suite negotiation - eBPF + Linux TC rewrites TCP packet headers (TTL, window size, etc.) - Coordinated spoofing ensures all layers present a consistent, chosen fingerprint - (not yet cohesive) Current Status: This is a proof-of-concept that successfully spoofs JA3/JA4 (TLS), JA4T (TCP), and HTTP fingerprints. It's rough around the edges and requires some Linux knowledge to set up. When there are so many telemetry points collected from a single SYN/ACK interaction, the precision with which a server can identify a unique client becomes concerning. Certain individuals and organizations began to notice this and produced resources to help people better understand the amount of data they're leaving behind on the internet: amiunique.org, browserleaks.com, and coveryourtracks.eff.org to name a few. 
This is the bare bones, but it's a fight against server-side passive surveillance. Tools like nmap and p0f have been exploiting this for the last two decades, and almost no tooling has been developed to fight it - with the viable options (Burp Suite) not being marketed for privacy. Even beyond this, with all values comprehensively and cohesively spoofed, SSO tokens can still follow us around and reveal our identity. When the SDKs of the largest companies like Google are so deeply ingrained into development flows, this is a no-go. So, this project will evolve; I'm looking to add some sort of headless/headful swarm that pollutes your SSO history - legal hurdles be damned. I haven't shared this in a substantial way, and really just finished polishing up a prerelease, barely working version about a week ago. I am not a computer science or cybersecurity engineer, just someone with a passion for privacy who is okay with computers. This is a proof of concept for a larger tool. Due to the nature of TCP/IP packet headers, if this software were to run on a distributed mesh network, privacy could be distributed on a mixnet like they're trying to achieve at Nym Technologies. All of the pieces are there, they just haven't been put together in the right way. I think I can almost see the whole puzzle... November 11, 2025 at 02:27AM
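To make the JA3/JA4 mentions concrete: a JA3 fingerprint is just an MD5 over five TLS ClientHello fields, joined with commas (list fields dash-joined). A minimal sketch, with made-up field values for illustration:

```python
import hashlib

def ja3_digest(version, ciphers, extensions, curves, point_formats):
    """Compute a JA3-style fingerprint: MD5 of
    'Version,Ciphers,Extensions,Curves,PointFormats', lists dash-joined.
    Field values here are illustrative, not from a real handshake capture.
    """
    parts = [str(version)] + [
        "-".join(map(str, field))
        for field in (ciphers, extensions, curves, point_formats)
    ]
    return hashlib.md5(",".join(parts).encode()).hexdigest()

# Changing any single field (e.g., dropping one cipher) yields a different
# digest -- which is exactly why a spoofer must keep every layer consistent.
fp = ja3_digest(771, [4865, 4866], [0, 23, 65281], [29, 23], [0])
```

This is why the "coordinated spoofing" point above matters: spoofing the TLS layer while leaving TCP header values (JA4T) untouched still leaves a stable composite fingerprint.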
Show HN: Nano Banana 2 – AI image generation SaaS powered by Google Gemini https://bit.ly/4icIIN3
Show HN: Nano Banana 2 – AI image generation SaaS powered by Google Gemini Hi HN! I built Nano Banana 2, a SaaS platform for AI-powered image generation using Google's Gemini 2.5 Flash. You can try it for free. Would love your feedback! https://bit.ly/4hWCfWf November 11, 2025 at 05:42AM
Show HN: A Free Instagram Story Viewer That Lets You Watch Anonymously https://bit.ly/47OebjL
Show HN: A Free Instagram Story Viewer That Lets You Watch Anonymously https://bit.ly/43TWLAU November 11, 2025 at 02:03AM
Sunday, 9 November 2025
Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer https://bit.ly/4oZgwiu
Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer SQL-first analytic IDE; similar to Redash/Metabase. Aims to solve reuse/composability at the code layer with a modified syntax, Trilogy, that includes a semantic layer directly in the SQL-like language. Status: experiment; feedback and contributions welcome! Built to solve 3 problems I have with SQL as my primary iterative analysis language: 1. Adjusting queries/analysis takes a lot of boilerplate. Solve with queries that operate on the semantic layer, not tables. Also eliminates the need for CTEs. 2. Sources of truth change all the time. I hate updating reports to reference new tables. Also solved by the semantic layer, since data bindings can be updated without changing dashboards or queries. 3. Getting from SQL to visuals is too much work in many tools; make it as streamlined as possible. Surprise - solve with the semantic layer; add in more expressive typing to get better defaults; also use it to wire up automatic drilldowns/cross-filtering. Supports: BigQuery, DuckDB, Snowflake. Links [1] https://bit.ly/4oIcV8M (language info) Git links: [Frontend] https://bit.ly/4p34dSx [Language] https://bit.ly/4ot4xdf Previously: https://bit.ly/3He2JnN (significant UX/feature reworks since) https://bit.ly/3B9PkdE https://bit.ly/47NxSrY November 10, 2025 at 12:26AM
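The semantic-layer idea behind points 1 and 2 can be illustrated with a toy sketch (invented names, not Trilogy's actual model): queries reference concepts, a binding maps concepts to physical columns, so repointing a report at a new table changes only the binding, never the query.

```python
# Hypothetical binding: semantic concept -> physical column.
BINDING = {"revenue": "fact_orders.amount", "region": "dim_geo.region"}

def compile_query(concepts: list[str], binding: dict[str, str]) -> str:
    """Resolve concept names to columns at compile time. The FROM clause is
    elided here; a real semantic layer would also derive joins from bindings."""
    cols = ", ".join(binding[c] for c in concepts)
    return f"SELECT {cols} FROM ..."
```

When the source of truth moves, you swap the binding (`"revenue": "new_orders.total"`) and every dashboard compiled from it follows.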
Show HN: Alignmenter – Measure brand voice and consistency across model versions https://bit.ly/4hMijoM
Show HN: Alignmenter – Measure brand voice and consistency across model versions I built a framework for measuring persona alignment in conversational AI systems. *Problem:* When you ship an AI copilot, you need it to maintain a consistent brand voice across model versions. But "sounds right" is subjective. How do you make it measurable? *Approach:* Alignmenter scores three dimensions: 1. *Authenticity*: Style similarity (embeddings) + trait patterns (logistic regression) + lexicon compliance + optional LLM Judge 2. *Safety*: Keyword rules + offline classifier (distilroberta) + optional LLM judge 3. *Stability*: Cosine variance across response distributions The interesting part is calibration: you can train persona-specific models on labeled data. Grid search over component weights, estimate normalization bounds, and optimize for ROC-AUC. *Validation:* We published a full case study using Wendy's Twitter voice: - Dataset: 235 turns, 64 on-brand / 72 off-brand (balanced) - Baseline (uncalibrated): 0.733 ROC-AUC - Calibrated: 1.0 ROC-AUC - 1.0 f1 - Learned: Style > traits > lexicon (0.5/0.4/0.1 weights) Full methodology: https://bit.ly/3XiazS3 There's a full walkthrough so you can reproduce the results yourself. *Practical use:* pip install alignmenter[safety] alignmenter run --model openai:gpt-4o --dataset my_data.jsonl It's Apache 2.0, works offline, and designed for CI/CD integration. GitHub: https://bit.ly/47vmphM Interested in feedback on the calibration methodology and whether this problem resonates with others. https://bit.ly/4nQMSeC November 10, 2025 at 12:53AM
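One reading of the "cosine variance" stability metric above can be sketched in a few lines -- this is a guess at the idea, not Alignmenter's exact formula: embed each response, take all pairwise cosine similarities, and report their variance (lower variance, more stable voice).

```python
import numpy as np

def stability_score(embeddings: np.ndarray) -> float:
    """Variance of pairwise cosine similarities across response embeddings.
    Rows are response vectors; 0.0 means perfectly consistent responses."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ e.T                       # cosine similarity matrix
    iu = np.triu_indices(len(e), k=1)    # upper triangle: unique pairs
    return float(np.var(sims[iu]))
```

Tracking this score across model versions gives a regression signal: a jump in variance after a model swap suggests the persona drifted.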
Saturday, 8 November 2025
Show HN: Serve 100 Large AI models on a single GPU with low impact to TTFT https://bit.ly/4qNmEME
Show HN: Serve 100 Large AI models on a single GPU with low impact to TTFT I wanted to build an inference provider for proprietary AI models, but I did not have a huge GPU farm. I started experimenting with serverless AI inference, but found out that cold starts were huge. I went deep into the research and put together an engine that loads large models from SSD to VRAM up to ten times faster than alternatives. It works with vLLM and transformers, with more coming soon. With this project you can hot-swap entire large models (32B) on demand. It's great for: serverless AI inference, robotics, on-prem deployments, and local agents. And it's open source. Let me know if anyone wants to contribute :) https://bit.ly/4nKufsu November 9, 2025 at 12:48AM
Show HN: DeepShot – NBA game predictor with 70% accuracy using ML and stats https://bit.ly/4nMRgLB
Show HN: DeepShot – NBA game predictor with 70% accuracy using ML and stats I built DeepShot, a machine learning model that predicts NBA games using rolling statistics, historical performance, and recent momentum — all visualized in a clean, interactive web app. Unlike simple averages or betting odds, DeepShot uses Exponentially Weighted Moving Averages (EWMA) to capture recent form and momentum, highlighting the key statistical differences between teams so you can see why the model favors one side. It’s powered by Python, XGBoost, Pandas, Scikit-learn, and NiceGUI, runs locally on any OS, and relies only on free, public data from Basketball Reference. If you’re into sports analytics, machine learning, or just curious whether an algorithm can outsmart Vegas, check it out and let me know what you think: https://bit.ly/4j2dU0S November 9, 2025 at 01:19AM
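The EWMA feature step mentioned above looks roughly like this in pandas (column names are hypothetical, not DeepShot's actual schema). The `shift(1)` matters: each game's feature must only use games played before it, or the model leaks the label.

```python
import pandas as pd

# Toy game log for one team; a real pipeline would have many teams and stats.
games = pd.DataFrame({"team": ["BOS"] * 4, "pts": [100, 110, 90, 120]})

# Exponentially weighted mean of *prior* games: recent form weighs more.
games["pts_ewma"] = (
    games.groupby("team")["pts"]
         .transform(lambda s: s.shift(1).ewm(span=3, adjust=False).mean())
)
```

With `span=3` (alpha = 0.5) the running value after games of 100 and 110 points is 105, so the feature entering game 4 is 0.5*90 + 0.5*105 = 97.5.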
Show HN: Livestream of a coding agent controlled by public chat https://bit.ly/4hPRQH3
Show HN: Livestream of a coding agent controlled by public chat https://bit.ly/4qQPL1D November 8, 2025 at 06:10PM
Friday, 7 November 2025
Show HN: A DevTools-Level JavaScript API for DOM and CSS Style Rules https://bit.ly/3JObXsq
Show HN: A DevTools-Level JavaScript API for DOM and CSS Style Rules It is a wrapper around the Chrome DevTools Protocol (CDP), the same API that DevTools uses, to inspect elements programmatically and intuitively, like accessing the DOM. Why this? I have seen too many tools pretending they can get matched CSS style rules when they actually only get computed styles. The real DevTools data - CSS rules, selectors, and cascading order - is what we want to retrieve programmatically, yet CDP is hard to use and full of undocumented quirks. One has to observe DevTools' behavior and check the huge DevTools frontend codebase to know how to use it. Having worked on a Chromium fork before, I felt it was time to solve this once and for all. What can we build around this? That's what I'd love to ask you all. Probably like many, MCP was what came to my mind first, but then I wondered that given this simple API, maybe agents could just write scripts directly? Need opinions. My own use case was CSS inlining. This library was actually split from my UI cloner project: https://bit.ly/4osrEEK I was porting a WordPress + Elementor site and wanted to automate the CSS translation from unreadable stylesheets. So, what do you think? Any ideas, suggestions, or projects to build upon? Would love to hear your thoughts - and feel free to share your own projects in the comments! https://bit.ly/47JOS21 November 8, 2025 at 05:13AM
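For readers unfamiliar with raw CDP: every call is a JSON message with an id, a method, and params over a websocket. The method names below are real CDP methods (including the matched-rules call the post is about); the websocket plumbing is omitted, so this is only a sketch of what such a wrapper hides.

```python
import itertools
import json

_ids = itertools.count(1)

def cdp_msg(method: str, **params) -> str:
    """Serialize one CDP command; the caller would send this over the
    browser's devtools websocket and await the reply with the same id."""
    return json.dumps({"id": next(_ids), "method": method, "params": params})

# Fetch the DOM tree, then ask for *matched* rules (selectors, specificity,
# cascade order) for a node -- not just computed styles.
get_doc = cdp_msg("DOM.getDocument", depth=-1)
matched = cdp_msg("CSS.getMatchedStylesForNode", nodeId=42)
```

`CSS.getMatchedStylesForNode` is the call most "get the styles" tools skip, which is why they can only return computed styles.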
Show HN: Find matching acrylic paints for any HEX color https://bit.ly/47QvAby
Show HN: Find matching acrylic paints for any HEX color https://bit.ly/4hUvuV2 November 3, 2025 at 04:20PM
Show HN: Rankly – The only AEO platform to track AI visibility and conversions https://bit.ly/3LvyNph
Show HN: Rankly – The only AEO platform to track AI visibility and conversions Most GEO/AEO tools stop at AI visibility. Rankly goes further: we track the entire AI visibility funnel, from mentions to conversions. As brands start showing up in LLM results, the next question isn't visibility; it's traffic quality and conversions. Rankly builds dynamic, data-driven journeys for high-intent LLM traffic. https://bit.ly/3LueNU7 November 7, 2025 at 11:49PM
Thursday, 6 November 2025
Show HN: VT Code – Rust TUI coding agent with Tree-sitter and AST-grep https://bit.ly/3WFJbgL
Show HN: VT Code – Rust TUI coding agent with Tree-sitter and AST-grep I’ve been building VT Code, a Rust-based terminal coding agent that combines semantic code intelligence (Tree-sitter + ast-grep) with multi-provider LLMs and a defense-in-depth execution model. It runs in your terminal with a streaming TUI, and also integrates with editors via ACP and a VS Code extension. * Semantic understanding: parses your code with Tree-sitter and does structural queries with ast-grep. * Multi-LLM with failover: OpenAI, Anthropic, xAI, DeepSeek, Gemini, Z.AI, Moonshot, OpenRouter, MiniMax, and Ollama for local—swap by env var. * Security first: tool allowlist + per-arg validation, workspace isolation, optional Anthropic sandbox, HITL approvals, audit trail. * Editor bridges: Agent Context Protocol support (Zed); VS Code extension (also works in Open VSX-compatible editors like Cursor/Windsurf). * Configurable: vtcode.toml with tool policies, lifecycle hooks, context budgets, and timeouts. GitHub: https://bit.ly/4nZmJev November 7, 2025 at 02:14AM
Show HN: Auto-Adjust Keyboard and LCD Brightness via Ambient Light Sensor[Linux] https://bit.ly/3Xesw3R
Show HN: Auto-Adjust Keyboard and LCD Brightness via Ambient Light Sensor[Linux] I have always wanted cool features like this in Linux, which I use day to day as my OS, and I wanted to implement this one properly: a program that automatically adjusts keyboard and LCD backlights using data from the ambient light sensor. I enjoy low-level programming a lot, so I delved into writing this program in C. It came out well and works seamlessly on my device. Currently, it only works for keyboard lights, but I designed it so that LCD support can be added seamlessly in the future. In the real world, people have different kinds of devices, so I made sure to follow the kernel's IIO implementation through sysfs. I would like feedback. :) https://bit.ly/497NMzA November 2, 2025 at 12:03PM
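The userspace loop described above reduces to two steps: read an illuminance value from the IIO sysfs node, then map it into the backlight's range. A minimal sketch (the real tool is in C; the sysfs path and 400-lux ceiling are assumptions that vary per device):

```python
def lux_to_level(lux: float, max_level: int, max_lux: float = 400.0) -> int:
    """Clamp ambient lux into [0, max_lux] and scale linearly to the
    backlight range [0, max_level]. Real sensors may need a log curve."""
    frac = min(max(lux, 0.0), max_lux) / max_lux
    return round(frac * max_level)

def read_lux(path: str = "/sys/bus/iio/devices/iio:device0/in_illuminance_raw") -> float:
    """Read one sample from the IIO ambient light sensor via sysfs.
    The device index and channel name differ across hardware."""
    with open(path) as f:
        return float(f.read().strip())
```

The daemon would poll `read_lux`, feed it through `lux_to_level`, and write the result to the backlight's `brightness` sysfs node.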
Wednesday, 5 November 2025
Show HN: Data Formulator 0.5 – Vibe with your data (Microsoft Research) https://bit.ly/4qLWYA1
Show HN: Data Formulator 0.5 – Vibe with your data (Microsoft Research) Data Formulator 0.5 released! It's a new research prototype from Data Formulator team @ Microsoft Research. It's quite a leap since our first prototype last year -- we bring agent mode to interact with data, together with an online demo that you can play with ( https://bit.ly/3JwfE6b ). "Vibe with your data, in control" -- featuring agent mode + interactive control to play with data, it should be more fun than last time you discovered it! - Load whatever data - structured data, database connections, or extract from screenshots/messy text - Flexible AI exploration - full agent mode OR hybrid UI+NL control for precision - Data threads - branch, backtrack, and manage multiple exploration paths - Interpretable results - inspect charts, formulas, explanations, and generated code - Report generation - AI creates shareable insights grounded in your data * Online demo at: https://bit.ly/3JwfE6b * Github: https://bit.ly/48iS4l3 * new video: https://www.youtube.com/watch?v=GfTE2FLyMrs * take a look at our product hunt page: https://bit.ly/47XR6Mx https://bit.ly/3JwfE6b November 6, 2025 at 08:09AM
Show HN: SSH terminal multiplayer written in Golang https://bit.ly/4ojEMMu
Show HN: SSH terminal multiplayer written in Golang To play, go here: ssh web2u.org -p6996 The rules: The goal is to claim the most space. The secondary goal is to kill as many other Ouroboroses as you can. The how: To claim space you need to either eat your own tail or reach tiles you've already claimed; tiles that are enclosed when you do so become yours! To kill other snakes, you hit their tails. To watch out for: Other players can kill you by hitting your exposed tail, and other players can take your tiles. https://bit.ly/4ouicRx November 6, 2025 at 03:57AM
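The "enclosed tiles become yours" rule is the classic flood-fill-from-the-border trick. A sketch of one plausible implementation (the game is in Go; this is a guess at the rule, not its actual code): any cell that cannot reach the board edge without crossing a claimed cell is enclosed.

```python
from collections import deque

def enclosed_tiles(width: int, height: int, claimed: set) -> set:
    """Return cells captured on closing a loop: everything unreachable from
    the border by 4-directional moves through unclaimed cells."""
    outside, q = set(), deque()
    # Seed the flood fill with every unclaimed border cell.
    for x in range(width):
        for y in range(height):
            if (x in (0, width - 1) or y in (0, height - 1)) and (x, y) not in claimed:
                if (x, y) not in outside:
                    outside.add((x, y)); q.append((x, y))
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height \
                    and (nx, ny) not in claimed and (nx, ny) not in outside:
                outside.add((nx, ny)); q.append((nx, ny))
    every = {(x, y) for x in range(width) for y in range(height)}
    return every - claimed - outside
```

So a ring of claimed tiles captures its interior, while a lone claimed tile captures nothing.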
Show HN: Dynamic Code Execution with MCP: A More Efficient Approach https://bit.ly/3JErL11
Show HN: Dynamic Code Execution with MCP: A More Efficient Approach I've been working on a more efficient approach to code execution with MCP servers that eliminates the filesystem overhead described in Anthropic's recent blog post. The Anthropic post (https://bit.ly/4hOsF7N) showed how agents can avoid token bloat by writing code to call MCP tools instead of using direct tool calls. Their approach generates TypeScript files for each tool to enable progressive discovery. It works well but introduces complexity: you need to generate files for every tool, manage complex type schemas, rebuild when tools update, and handle version conflicts. At scale, 1000 MCP tools means maintaining 1000 generated files. I built codex-mcp using pure dynamic execution. Instead of generating files, we expose just two lightweight tools: list_mcp_tools() returns available tool names, and get_mcp_tool_details(name) loads definitions on demand. The agent explores tools as if navigating a filesystem, but nothing actually exists on disk. Code snippets are stored in-memory as strings in the chat session data. When you execute a snippet, we inject a callMCPTool function directly into the execution environment using AsyncFunction constructor. No imports, no filesystem dependencies, just runtime injection. The function calls mcpManager.tools directly, so you're always hitting the live MCP connection. This means tools are perpetually in sync. When a tool's schema changes on the server, you're already calling the updated version. No regeneration, no build step, no version mismatches. The agent gets all the same benefits of the filesystem approach (progressive discovery, context efficiency, complex control flow, privacy preservation) without any of the maintenance overhead. One caveat: the MCP protocol doesn't enforce output schemas, so chaining tool calls requires defensive parsing since the model can't predict output structure. 
This affects all MCP implementations though, not specific to our approach. The dynamic execution is made possible by Vercel AI SDK's MCP support, which provides the runtime infrastructure to call MCP tools directly from code. Project: https://bit.ly/47rUOOz Would love feedback from folks working with MCP at scale. Has anyone else explored similar patterns? https://bit.ly/47rUOOz November 6, 2025 at 02:23AM
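A Python analogue of the runtime-injection trick described above (the project itself uses the JS AsyncFunction constructor via the Vercel AI SDK): compile the snippet string and inject a `call_mcp_tool` callable into its scope, so nothing is generated on disk and the callable always points at the live connection. Names here are illustrative.

```python
def run_snippet(source: str, call_mcp_tool):
    """Execute an agent-authored snippet with call_mcp_tool injected into
    its globals -- no imports, no files, just runtime injection."""
    env = {"call_mcp_tool": call_mcp_tool}
    exec(compile(source, "<snippet>", "exec"), env)
    return env.get("result")

# Stand-in for the live MCP connection; a real one would dispatch over MCP.
def fake_tool(name, **args):
    return {"tool": name, "args": args}

out = run_snippet("result = call_mcp_tool('search', query='hn')", fake_tool)
```

Because the injected callable closes over the live connection rather than generated stubs, a schema change on the server is picked up on the very next call, which is the sync property the post argues for.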