Wednesday, 2 July 2025

Show HN: I made a Chrome extension to export web elements to code https://bit.ly/3ZZkv4T

Show HN: I made a Chrome extension to export web elements to code Recently I've been working on CopyUI, an extension that copies UI elements from websites and exports HTML (or JSX) and CSS (or Tailwind). I'm building this tool to help me create better landing pages, since I'm really not good at layout and colors. The goal is to learn from others' designs and innovate later, not to simply replicate them. https://bit.ly/4esZk0O July 3, 2025 at 03:02AM

Show HN: I created a privacy-respecting ad blocker for apps https://bit.ly/44nGPI5

Show HN: I created a privacy-respecting ad blocker for apps Hey HN, I've been working on my ad blocker for the last several years and am proud to share a new feature that blocks ads directly in apps, not just in a web browser.
What makes this app ad blocker feature special?
- All ad blocking is done directly on device
- Fast, efficient Swift-based architecture (built on SwiftNIO)
- Strict zero data collection and logging policy
- Blocks ads in all apps on iPhones, iPads and Macs
It works as a local VPN proxy, so it filters all of your traffic locally without going through any third-party servers. The app ad blocker works across news apps, social media, games, and even browsers like Chrome and Firefox. After using ad blocking in Safari for a long time, it is eye-opening how many ads and trackers are also embedded in apps themselves. The app is available via the App Store, with a 30-day free trial before an annual subscription is required. I know there are many other ad blockers available, but I hope the combination of performance, efficiency, and respect for privacy makes this feature a valuable option. It also took a LOT of work to get this working seamlessly within the App Store and iOS/macOS limitations, so I'm glad the app has finally been released into the world. Full details on the feature are in the release post: https://bit.ly/4eCTzOw https://bit.ly/4eCTzOw July 3, 2025 at 02:04AM
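The filtering model the author describes, a local proxy checking traffic against blocklists on device, can be illustrated with a tiny sketch. The app itself is Swift/SwiftNIO and its real matching rules are not public; the Python below only shows the core blocklist check, with example hostnames.

```python
# Conceptual illustration only: the app itself is Swift/SwiftNIO, and its real
# rules are not public. This just shows the core idea of checking each
# outbound request's host against a local blocklist before forwarding it.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # example hostnames

def should_block(host: str) -> bool:
    """Block the host itself or any subdomain of a blocklisted domain."""
    parts = host.lower().split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

for host in ["ads.example.com", "cdn.ads.example.com", "news.example.org"]:
    print(host, "-> blocked" if should_block(host) else "-> allowed")
```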

Tuesday, 1 July 2025

Show HN: A local secrets manager with easy backup https://bit.ly/469jtXZ

Show HN: A local secrets manager with easy backup https://bit.ly/3I6x8ox June 29, 2025 at 10:30AM

Show HN: Just a Line: Resurrected https://bit.ly/4lae3R6

Show HN: Just a Line: Resurrected I always thought Google's Just a Line experiment[1] was crazy cool and recently wanted to revisit it, but it hadn't been updated in 7 years. So I upgraded all of the dependencies (including the latest version of Swift 5), added SwiftLint and SwiftFormat, and got it (mostly) working again! Hope you have some fun with it. Help is welcome; there's still more to do! [1] https://bit.ly/3GrWzQP https://bit.ly/4l91XaJ July 2, 2025 at 12:48AM

Monday, 30 June 2025

Show HN: Crush Check – AI relationship text analyzer https://bit.ly/46oT9Jh

Show HN: Crush Check – AI relationship text analyzer Hi HN, I over-thought one too many "lol sounds fun" texts and decided to teach a model to be my wingman instead. The result is Crush Check AI: export an iMessage / WhatsApp / Instagram thread and get a chat report with:
* a crush score (0-100) based on response latency, reciprocity, and sentiment shifts
* red flags like breadcrumbing and love-bombing
* a chat timeline
You can also ask questions about your conversation. Why post here? I'd love feedback on:
* whether this is something people need
* how the user experience was
* what features you'd like to see
Thanks in advance for any roasts, bug reports, or "this is useless because ___" takes. Happy to share more implementation details, and happy to give away a free Premium subscription in exchange for feedback! https://bit.ly/3ZWogYG July 1, 2025 at 02:51AM
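The scoring model behind Crush Check isn't described in detail, but the signals listed (response latency, reciprocity, sentiment shifts) suggest a weighted heuristic. Below is a minimal Python sketch of such a heuristic using only latency and reciprocity; the message format, weights, and thresholds are illustrative assumptions, not the app's actual implementation.

```python
from datetime import datetime

# Hypothetical exported message format: (sender, ISO timestamp).
messages = [
    ("me",   "2025-06-30T20:00:00"),
    ("them", "2025-06-30T20:03:00"),
    ("me",   "2025-06-30T20:04:00"),
    ("them", "2025-06-30T22:40:00"),
]

def crush_score(msgs):
    """Toy 0-100 score from reply latency and reciprocity only."""
    times = [(s, datetime.fromisoformat(t)) for s, t in msgs]
    # Reciprocity: how balanced the message counts are (1.0 = perfectly even).
    mine = sum(1 for s, _ in times if s == "me")
    theirs = len(times) - mine
    reciprocity = min(mine, theirs) / max(mine, theirs or 1)
    # Latency: average minutes they take to reply to me.
    gaps = [
        (b_t - a_t).total_seconds() / 60
        for (a_s, a_t), (b_s, b_t) in zip(times, times[1:])
        if a_s == "me" and b_s == "them"
    ]
    avg_gap = sum(gaps) / len(gaps) if gaps else 60.0
    latency_score = max(0.0, 1.0 - avg_gap / 120)  # 0 after 2h, 1 for instant replies
    return round(100 * (0.5 * reciprocity + 0.5 * latency_score))

print(crush_score(messages))
```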

Show HN: Praxos – Context Management for AI Agents https://bit.ly/3I4Ib1p

Show HN: Praxos – Context Management for AI Agents Hey HN! We're Lucas and Soheil, the founders of Praxos ( https://bit.ly/4lrdvpw ). Praxos is a context manager for AI agents, providing everything you need to build stateful agents that don't break in production. Praxos can parse any data source, from unstructured PDFs, API streams, and conversational messages to structured databases, and transform them into a single knowledge graph. Everything in this graph is semantically typed and its relationships are made explicit, turning data into a clean, queryable universe of understanding that AI can use without making mistakes. Whether you need to query for the answer to a question or to extract data in a way that makes sense for the current use case, Praxos does it all, with no requerying needed. This enables AI apps to parse data end-to-end and then act on it to deliver outputs across single-chain and multi-chain reasoning steps. Intermediate, final, and user-edited outputs can be added back to the knowledge graph, allowing Praxos to learn on the fly.
When we were building in insurance, we often ran into two major problems deploying AI. First, LLMs proved incapable of parsing documents such as property schedules and insurance policies. For reference, a property schedule may be a 50-page collection of Word, Excel, and PDF documents detailing construction, usage, and geographical information about a collection of physical properties. Recreating one object (a property) would mean combing through the files to establish semantic, conceptual, spatial, and sometimes implicit linkages between the data. The outcome: relationship information would be lost, left blank, or hallucinated. Second, repeated calls to search, retrieve, and update information would sometimes lead to cascading errors. This became more frequent across complex tasks such as reading a document, fetching previous user information, performing a calculation, storing it, and then presenting it to the user. We realized that for AI to deliver more useful and accurate responses that correctly use relationships in a document, those relationships need to be made explicit. Much of the contextual information is represented without the use of words. In turn, this means that we cannot interact with it directly and programmatically, and LLMs are forced to interpret it themselves, every single time. That's when we started building Praxos.
We've set up a self-serve option with a free tier (up to a data cap) for hobbyists and early adopters. For context (no pun intended), this should cover you for up to 200 document pages. You can register here: https://bit.ly/40xzenJ . Our first version is an SDK meant to cover you across all your data extraction, retrieval, and update needs. Here's how it works:
Organizing information: Praxos sorts information into ontologies, which are structured schemas for storing data. These allow you to introduce predefined types, attributes, and relationships that guide how the knowledge graph is built and interpreted.
Processing input data: Praxos can handle any data source, ranging from PDFs to tabular data, JSON, and dialog-like exchanges. Extraction is performed end-to-end. You don't need to OCR, chunk, or pre-process your inputs. Processing is as simple as passing in your file and selecting an ontology.
Retrieving information / memories: For each query, Praxos searches and retrieves related stored information by leveraging a combination of graph traversal techniques, vector similarity, and key-value lookups. Search results return both the entities and their connections, as well as a sentence.
We'd love to hear what you think! Please feel free to dive in, and share any thoughts or suggestions with us over Discord ( https://bit.ly/4l8JV8L ). Your feedback will help shape where we take Praxos from here! July 1, 2025 at 01:13AM
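The retrieval step described above, graph traversal combined with vector similarity and key-value lookups, can be illustrated conceptually. The following Python sketch is not the Praxos SDK; the entity layout, embeddings, and function names are hypothetical, and it only shows the idea of ranking a seed entity's graph neighbourhood by similarity to a query.

```python
import math

# Toy illustration of hybrid retrieval: typed entities linked in a graph,
# ranked by vector similarity to a query embedding. Not the Praxos SDK.
entities = {
    "policy_123": {"type": "InsurancePolicy", "embedding": [0.9, 0.1], "links": ["property_7"]},
    "property_7": {"type": "Property", "embedding": [0.2, 0.8], "links": ["policy_123"]},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_embedding, seed_id, hops=1):
    """Rank the seed entity's graph neighbourhood by similarity to the query."""
    frontier, seen = {seed_id}, set()
    for _ in range(hops + 1):
        seen |= frontier
        frontier = {n for e in frontier for n in entities[e]["links"]} - seen
    scored = [(cosine(query_embedding, entities[e]["embedding"]), e) for e in seen]
    return sorted(scored, reverse=True)

print(retrieve([0.85, 0.15], "policy_123"))
```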

Show HN: Local LLM Notepad – run a GPT-style model from a USB stick https://bit.ly/4nsvttJ

Show HN: Local LLM Notepad – run a GPT-style model from a USB stick
What it is: A single 45 MB Windows .exe that embeds llama.cpp and a minimal Tk UI. Copy it (plus any .gguf model) to a flash drive, double-click on any Windows PC, and you're chatting with an LLM: no admin rights, no cloud, no network.
Why I built it: Existing "local LLM" GUIs assume you can pip install, pass long CLI flags, or download GBs of extras. I wanted something my less technical colleagues could run during a client visit by literally plugging in a USB drive.
How it works: A PyInstaller one-file build bundles the Python runtime, llama_cpp_python, and the UI into a single PE. On first launch, it memory-maps the .gguf; subsequent prompts stream at ~20 tok/s on an i7-10750H with gemma-3-1b-it-Q4_K_M.gguf (0.8 GB). A tick-driven render loop keeps the UI responsive while llama.cpp crunches. A parser bold-underlines every token that originated in the prompt; Ctrl+click pops a "source viewer" to trace facts. (Helps spot hallucinations fast.) https://bit.ly/3I89nwr July 1, 2025 at 12:43AM
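For readers curious what the streaming and source-highlighting pieces might look like, here is a minimal Python sketch using llama-cpp-python: it streams tokens from a local .gguf model and flags output words that also appear in the prompt, a crude stand-in for the editor's bold-underline "source viewer". The model path, prompt, and word-level matching are placeholder assumptions; the actual app works at the token level inside a Tk UI.

```python
# Minimal sketch: stream a completion from a local .gguf model and mark
# output words that also appear in the prompt (crude "source" highlighting).
from llama_cpp import Llama

llm = Llama(model_path="gemma-3-1b-it-Q4_K_M.gguf", n_ctx=2048, verbose=False)

prompt = "The capital of France is Paris. Question: What is the capital of France?"
prompt_words = {w.strip(".,?!").lower() for w in prompt.split()}

answer = []
for chunk in llm(prompt, max_tokens=64, stream=True):
    answer.append(chunk["choices"][0]["text"])

for word in "".join(answer).split():
    marker = "*" if word.strip(".,?!").lower() in prompt_words else " "
    print(f"{marker} {word}")
```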

Show HN: Timezone converter that tells you if your meeting time sucks https://bit.ly/3I0vLrA

Show HN: Timezone converter that tells you if your meeting time sucks I work with a team spread across Sydney, London, and SF. Last month I accidentally called my Aussie colleague at 3am their time during what I thought was a "quick sync". The silence before "mate... do you know what time it is here?" still haunts me. Built this: https://bit.ly/3I85vLN It's a timezone converter, but it tells you if your meeting time sucks for the other person:
- Meeting quality ratings (excellent/good/fair/poor)
- Visual indicators for day/night
- Shows if it's a holiday in their country
- Handles weird cases like Dubai's Sunday-Thursday workweek
Technical bit: 18k+ static pages are pre-generated for every city combination. It loads instantly because there are no backend calculations. Next.js 15, no database. Still figuring out monetization (ads? affiliate links for virtual meeting tools?) but keeping it free for now. What else would make this useful? Currently tracking holidays for ~20 countries, but I could add more. https://bit.ly/3I85vLN June 30, 2025 at 10:37PM
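The "does this time suck?" rating presumably boils down to mapping each participant's local hour onto a quality band. A rough Python sketch of that idea is below, using the standard-library zoneinfo module; the hour ranges and labels are guesses, not the site's actual rules, and holidays and workweek quirks are ignored.

```python
# Rough sketch: rate one meeting time for several time zones by local hour.
from datetime import datetime
from zoneinfo import ZoneInfo

def hour_quality(hour):
    # Assumed bands: these thresholds are illustrative, not the site's rules.
    if 9 <= hour < 17:
        return "excellent"
    if 8 <= hour < 9 or 17 <= hour < 20:
        return "good"
    if 7 <= hour < 8 or 20 <= hour < 22:
        return "fair"
    return "poor"

def rate_meeting(utc_time, zones):
    return {z: hour_quality(utc_time.astimezone(ZoneInfo(z)).hour) for z in zones}

meeting = datetime(2025, 6, 30, 22, 0, tzinfo=ZoneInfo("UTC"))
print(rate_meeting(meeting, ["Australia/Sydney", "Europe/London", "America/Los_Angeles"]))
```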

Sunday, 29 June 2025

Show HN: Cheesy Mamas: Local-only code editor with Git and Bash support https://bit.ly/3Goc3Fu

Show HN: Cheesy Mamas: Local-only code editor with Git and Bash support Cheesy Mamas is a local-first, multi-tab code editor written in Python using PyQt6. It is designed for Linux systems and built around simplicity, transparency, and control. There is no telemetry, no sync, and no accounts. The editor runs entirely on your local machine using standard system tools and stays out of your way unless you ask for help.
The editor supports multiple files open at once, persistent tab state, live dirty tracking, and a dark UI. It includes syntax highlighting for Python, C, and LaTeX. A built-in run button executes Python directly, compiles C with gcc, or runs pdflatex for LaTeX files. It also includes a Bash button to launch or edit a saved shell script. There is no plugin system and there are no background processes. All functionality is visible and inspectable in the interface.
The Git integration is the core design focus. Unlike most editors, which treat Git as a sidebar or rely on an external staging panel, Cheesy Mamas embeds Git version history directly beside each open file. When you open a file, the editor checks whether it is part of a Git repository. If not, the first commit you make will automatically initialize a new Git repository in the current folder. For each file, Cheesy Mamas retrieves its individual commit history using git log limited to that path. This history appears in a vertical sidebar next to the editing pane. Selecting a commit loads that exact version of the file from Git and performs a diff against the current working version in memory. The editor highlights changed lines and overlays revert options directly in the document view.
When you click a past commit, the editor compares that version against your current working file. All changed lines are visually marked. You can click a "revert line" button next to any highlighted block to immediately undo that change using the older version. These changes are local until you save. This allows for a granular, low-effort recovery flow without affecting unrelated files or requiring a full diff tool. Right-clicking a commit provides a context menu that lets you view the full unified diff, copy the full version of that commit to your clipboard, or revert the entire file to that point. These operations use standard Git plumbing internally and do not alter other files in the repository. Cheesy Mamas does not require you to commit or stage across all files. Each file's history and actions are isolated.
The editor is single-instance by default. Opening a file from the file browser or terminal reuses the existing window and opens the file in a new tab. This is handled via a relay system that passes the file path to the existing running instance. The UI is dark by default with soft gold highlights. There is no animation or decoration beyond what is needed for clarity. The editor warns on exit if any file is unsaved. Saving and Git commits are handled through dedicated buttons and keyboard shortcuts. The Bash button opens a terminal script from the config folder, or lets you write one if none exists.
Cheesy Mamas was built to solve a personal problem. Most editors assume the user is syncing code to a cloud service or using Git externally. They require plugins or navigation panels to access version history and rarely show diffs in context. Cheesy Mamas was designed to treat versioning as a natural part of editing, and to bring Git history as close to the cursor as possible without overwhelming the UI.
The project is fully offline, runs on Linux, and is installable via a simple shell script. It places the Python script and assets in `~/.local/share/CheesyMamas`, creates a `.desktop` entry, and integrates with your application menu. You can optionally set it as the default handler for `.py`, `.c`, `.tex`, and `.sh` files by editing the desktop file and uncommenting the `MimeType` field. There is no account system and no sync. It’s a local program, designed to live where you live, and let you undo what needs undoing. https://bit.ly/45NzkuV June 30, 2025 at 04:53AM
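The per-file Git history flow described above maps naturally onto two plumbing calls: git log restricted to one path, and git show to load the file at a given commit. The Python sketch below illustrates that flow with subprocess; it is a conceptual illustration, not Cheesy Mamas' actual code, and the file name is a placeholder (git show expects a path relative to the repository root).

```python
# Conceptual sketch of per-file Git history: list commits that touched one
# file, then load that file's contents at a chosen commit for diffing.
import subprocess

def file_history(path):
    """Return (sha, subject) pairs for commits that touched `path`."""
    out = subprocess.run(
        ["git", "log", "--pretty=format:%H %s", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    # partition splits each "sha subject" line into a (sha, subject) pair.
    return [line.partition(" ")[::2] for line in out.splitlines() if line]

def file_at_commit(sha, path):
    """Return the file's contents as it existed at `sha`."""
    return subprocess.run(
        ["git", "show", f"{sha}:{path}"],
        capture_output=True, text=True, check=True,
    ).stdout

history = file_history("editor.py")   # placeholder file name
for sha, subject in history[:5]:
    print(sha[:8], subject)
old_version = file_at_commit(history[0][0], "editor.py")
```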

Show HN: BloomPilot – AI-Powered Overlay for Bloomberg Terminal https://bit.ly/4498JY7

Show HN: BloomPilot – AI-Powered Overlay for Bloomberg Terminal Hi HN, We just launched BloomPilot, a minimal AI-powered overlay designed for Bloomberg Terminal users. It's built for financial professionals who want faster GPT-enhanced insights, a lightweight terminal interface, and modern tooling on top of the Bloomberg infrastructure they already use. Key features:
- GPT-4o analysis integrated into a Bloomberg-style command line
- Built-in fallbacks (Alpha Vantage, Polygon, Finnhub) if the Bloomberg API is unavailable
- One-time payment of 299 USDC via Phantom wallet (Solana)
- Terminal-style UI with keyboard-first design and command history
- Real-time data streaming, AI formatting, wallet-based access control
It's designed specifically for professional traders, analysts, and fintech builders who spend their day in BBG and want a smarter way to interact with it. We're focused on performance, authenticity (BBG UI), and simplicity: no freemium models, no monthly billing, and no fluff. Would love your thoughts, questions, or ideas for features. https://bit.ly/448lGBl June 29, 2025 at 11:10PM
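The fallback behaviour (try Bloomberg first, then Alpha Vantage, Polygon, or Finnhub) is a classic chain-of-responsibility pattern. Here is a small Python sketch of that pattern with placeholder fetchers; the real BloomPilot integrations, endpoints, and error handling are not public and are not shown here.

```python
# Sketch of a data-source fallback chain; fetchers below are placeholders.
def get_quote(symbol, sources):
    """Try each data source in order and return the first successful quote."""
    errors = []
    for name, fetch in sources:
        try:
            return name, fetch(symbol)
        except Exception as exc:  # network error, rate limit, missing entitlement...
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all sources failed: " + "; ".join(errors))

# Placeholder fetchers standing in for Bloomberg, Alpha Vantage, etc.
def bloomberg(symbol):
    raise ConnectionError("terminal API unavailable")

def alpha_vantage(symbol):
    return {"symbol": symbol, "price": 101.25}

print(get_quote("AAPL", [("bloomberg", bloomberg), ("alpha_vantage", alpha_vantage)]))
```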

Show HN: AI-powered tracker of Trump executive orders https://bit.ly/4472TGD

Show HN: AI-powered tracker of Trump executive orders I built a tracker that automatically scrapes the White House website for new executive orders and uses GPT-4 to generate plain-English summaries. The system runs daily, finds new orders, feeds the full legal text to ChatGPT for summarization and auto-categorization, then generates individual pages and updates the main index. It even creates custom Open Graph images for social sharing. It is currently tracking 158+ orders, with automatic updates as new ones are signed. Features:
- AI summaries of all executive orders in plain English
- Auto-categorization by policy area (immigration, trade, AI, etc.)
- Search by keyword, date, or category
- Completely neutral
- Individual pages for each order with full text
- Auto-generated OG images
I got tired of reading dense legal text to understand what's actually being signed. The AI does the heavy lifting of parsing government language into readable summaries. Link: https://bit.ly/44BCp05 Tech: Next.js/Tailwind frontend, Python scraper with BeautifulSoup, GPT-4 for summaries, automated OG image generation via headless Chrome. https://bit.ly/44BCp05 June 30, 2025 at 12:51AM
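The daily pipeline described above (scrape the order text, feed it to GPT-4 for a plain-English summary and a category) could look roughly like the Python sketch below. The URL, HTML parsing, prompt, and truncation limit are placeholder assumptions rather than the author's actual code; only the general requests + BeautifulSoup + OpenAI flow matches the description.

```python
# Rough sketch of a scrape-and-summarize step; URL, selector logic, and
# prompt wording are placeholders, not the author's actual code.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def fetch_order_text(url):
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    return soup.get_text(" ", strip=True)

def summarize(order_text):
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize this executive order in plain, neutral English "
                        "and suggest one policy category."},
            {"role": "user", "content": order_text[:12000]},  # crude length cap
        ],
    )
    return resp.choices[0].message.content

# summary = summarize(fetch_order_text("https://example.com/executive-order"))
```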

Show HN: Tablr – Supabase with AI Features https://bit.ly/3GpaV4x

Show HN: Tablr – Supabase with AI Features https://bit.ly/45Oqop8 June 30, 2025 at 12:05AM

Saturday, 28 June 2025

Show HN: DNS at ludicrous speed for Go, powered by XDP sockets https://bit.ly/4lQfVyr

Show HN: DNS at ludicrous speed for Go, powered by XDP sockets https://bit.ly/3GnoQYM June 29, 2025 at 07:27AM

Show HN: SVG Lined Tile Generator https://bit.ly/4nrv1vH

Show HN: SVG Lined Tile Generator https://bit.ly/4kZvQKw June 26, 2025 at 02:13AM

Show HN: Leveraging Google ADK for Cyber Threat Intelligence https://bit.ly/4kdJAA0

Show HN: Leveraging Google ADK for Cyber Threat Intelligence The project is in an early state. I recently had to reset the graph data store, but I figured now is a good time to share my post and the project. The link is to my blog post; the tool is at https://bit.ly/3Ti9hVh https://bit.ly/4lyK7Ok June 28, 2025 at 09:19PM

Friday, 27 June 2025

Show HN: AIOps MCP – Log anomaly detection using Isolation Forest https://bit.ly/3ZTGs5n

Show HN: AIOps MCP – Log anomaly detection using Isolation Forest I built an open-source AIOps MCP (Monitoring & Control Plane) that detects anomalies in logs using Isolation Forest. It accepts logs from agents, apps, or collectors, parses and extracts features, and identifies unusual patterns in real time. Alerts can be sent to Slack, Webhooks, or PagerDuty. It’s lightweight, easy to deploy with Kubernetes & Helm, and designed to plug into existing observability stacks. I built this to experiment with combining ML-based anomaly detection and flexible alerting for DevOps/SRE teams. Most AIOps platforms are either too heavyweight or closed-source — I wanted something minimal yet effective. You can try it by running the FastAPI app locally or deploying with Helm. Contributions are welcome — I’d love feedback on features, detection accuracy, and real-world use cases! GitHub: https://bit.ly/3ZSDldO https://bit.ly/3ZSDldO June 28, 2025 at 07:09AM
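For readers unfamiliar with Isolation Forest on logs, here is a minimal scikit-learn sketch of the idea: turn each log line into a small numeric feature vector and let the model flag outliers. The two toy features and the contamination value are illustrative; the project's real feature extraction and thresholds will differ.

```python
# Minimal sketch of log anomaly detection with Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

logs = [
    "INFO request handled in 12ms",
    "INFO request handled in 15ms",
    "INFO request handled in 11ms",
    "ERROR upstream timeout after 30000ms",
]

def features(line):
    # Toy features: line length, and whether the line contains ERROR/WARN.
    return [len(line), int(("ERROR" in line) or ("WARN" in line))]

X = np.array([features(l) for l in logs])
model = IsolationForest(contamination=0.25, random_state=0).fit(X)
for line, label in zip(logs, model.predict(X)):  # -1 = anomaly, 1 = normal
    print("ANOMALY" if label == -1 else "ok     ", line)
```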

Show HN: Self-host your data anonymization pipeline https://bit.ly/3I4C79d

Show HN: Self-host your data anonymization pipeline I needed this in my own work anonymizing PII/PHI and decided to build it because Presidio didn't really cut it for our use case. Try it, and let me know if you have any feedback :) https://bit.ly/445JwO9 June 28, 2025 at 04:01AM

Show HN: Dungeon Master in Your Console https://bit.ly/3G0AhFV

Show HN: Dungeon Master in Your Console I don't normally share side projects here (or in general); I don't have much time to open them up to too much attention. I started this project while riding in a car last weekend, mainly to explore OpenAI Codex. Using GitHub mobile I wrote the initial specifications into the readme, and using the ChatGPT iOS app, had Codex build a simple CLI-based dungeon master. I switched back to GitHub to manage the PRs and went back and forth for the whole car ride... It got a little out of hand from there, and it's now a mix of AI (mostly AI) and myself making adjustments. The first version was entirely OpenAI and it worked OK, but it was too easy on the player. Thanks to HN I had heard about the Wayfarer model, and I find that model to be pretty entertaining. In the end I think this turned out pretty "cute" and makes a decent time waster that looks like work, wink wink. https://bit.ly/3GcRzj8 June 27, 2025 at 09:58PM

Thursday, 26 June 2025

Show HN: I wrote a GPU-less billion-vector DB for molecule search (live demo) https://bit.ly/4eHxXk5

Show HN: I wrote a GPU-less billion-vector DB for molecule search (live demo) Input a SMILES string (or pick a molecule from the examples) and it returns up to 100k molecules closest in 3-D shape or electrostatic similarity, drawn from 10+ billion-scale databases, typically in under 5-10 s. Why it might interest HN:
* The entire index lives on disk: no GPU at query time, less than ~10 GB RAM total.
* Built from scratch (no FAISS index / Milvus / Pinecone).
* Index-build cost: one Nvidia T4 (~300 USD) for one 5.5B database.
* Open to anyone; predict ADMET, export results as CSV/SDF.
Full write-up & benchmarks (DUD-E, LIT-PCBA, SVS) in the pre-print: https://bit.ly/4kbb3SW... https://bit.ly/46fhMrK June 27, 2025 at 12:51AM
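As a much simpler stand-in for the 3-D shape and electrostatic search described above, the sketch below ranks a tiny in-memory set of molecules by 2-D Morgan-fingerprint Tanimoto similarity with RDKit. This is not the project's method or index structure; it only illustrates the general "query SMILES in, ranked neighbours out" shape of the problem.

```python
# Not the project's method: a 2-D fingerprint stand-in for 3-D shape search.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
database = {
    "salicylic acid": "O=C(O)c1ccccc1O",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

qfp = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
hits = []
for name, smi in database.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=2048)
    hits.append((DataStructs.TanimotoSimilarity(qfp, fp), name))

for score, name in sorted(hits, reverse=True):
    print(f"{score:.2f}  {name}")
```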

Show HN: Listed – An agentic platform to rank your business on AI https://bit.ly/44AUBHc

Show HN: Listed – An agentic platform to rank your business on AI Hi HN, I'm Harrison, co-founder of Listed. Today we're launching our agentic platform to help your business win in the new age of AI. You can try the platform here: https://bit.ly/4nB9yAG And watch the launch video here: https://www.youtube.com/watch?v=MJUPo6H78z8
The idea for this came from pure frustration. I asked ChatGPT about my own company and it hallucinated, inventing features and getting basic facts wrong. I realized there was no mechanism for a business to provide a verified source of truth to these models. This problem is now existential. With Google's AI Overviews and the rise of answer engines, your website's unstructured HTML is a poor source for the rich, nuanced context that LLMs need. This leads to an army of AI bots from OpenAI, Google, Perplexity, etc., scraping your site, getting it wrong, and permanently baking those errors into their models.
So, we built Listed. The simplest analogy is it's like Cursor, but for context. Instead of an AI helping you write code, our agent helps you build the comprehensive, structured context that allows LLMs to represent your business accurately and favorably. Here's how our agentic system works:
Automated Context Building: When you sign up, our agent scrapes your existing website to build a first draft of your AI Listing. It structures the data and identifies weak spots.
Intelligent Workflows: Based on ongoing analytics, the agent initiates simple, chat-based workflows to help you enrich your listing and improve its accuracy and ranking potential.
Performance Analytics & Feedback Loop: The agent constantly measures your AI Ranking (discoverability) and Recall Accuracy across all major models (GPT-4o, Claude 3, Gemini, etc.). This data feeds back into the system, generating new workflows to continuously improve your performance.
The Connection: Your AI Listing is a hosted service. You add a simple code snippet to your website. When AI crawlers visit, this acts as a signpost, essentially "prompt injecting" and directing them to consume your clean, structured, AI-optimized data feed instead of trying to parse your messy site.
The goal is to give every business an active role in the AI ecosystem. You provide the clean, verified data that AI companies desperately need, and in return, you get to control your narrative and rank higher in their answers. We are launching our free tier today. We'd love for you to try it out and hear your feedback. You can get started here: https://bit.ly/4nB9yAG I'll be here all day answering questions. Thanks! June 27, 2025 at 01:32AM