Sunday, 15 March 2026
Show HN: Webassembly4J – Run WebAssembly from Java https://bit.ly/41cf2aN
Show HN: Webassembly4J – Run WebAssembly from Java

I’ve released WebAssembly4J, along with two runtime bindings:
- Wasmtime4J – Java bindings for Wasmtime https://bit.ly/471hULh
- WAMR4J – Java bindings for WebAssembly Micro Runtime https://bit.ly/4blCCGY
- WebAssembly4J – a unified Java API that allows running WebAssembly across different engines https://bit.ly/40CvoJI

The motivation was that Java currently has multiple emerging WebAssembly runtimes, but each exposes its own API. If you want to experiment with different engines, you have to rewrite the integration layer each time. WebAssembly4J provides a single API while allowing different runtime providers underneath.

Goals of the project:
- Run WebAssembly from Java applications
- Allow cross-engine comparison of runtimes
- Make WebAssembly runtimes more accessible to Java developers
- Provide a stable interface while runtimes evolve

Currently supported engines: Wasmtime, WAMR, Chicory, GraalWasm.

To support both legacy and modern Java environments the project targets:
- Java 8 (JNI bindings)
- Java 11
- Java 22+ (Panama support)

Artifacts are published to Maven Central so they can be added directly to existing projects. I’d be very interested in feedback from people working on Java + WebAssembly integrations or runtime implementations. March 16, 2026 at 12:08AM
Show HN: Lockstep – A data-oriented programming language https://bit.ly/4lB6qEo
Show HN: Lockstep – A data-oriented programming language https://bit.ly/4lyvcF9

I want to share my work-in-progress systems language with a v0.1.0 release of Lockstep. It is a data-oriented systems programming language designed for high-throughput, deterministic compute pipelines. I built Lockstep to bridge the gap between the productivity of C and the execution efficiency of GPU compute shaders.

Instead of traditional control flow, Lockstep enforces straight-line SIMD execution. You will not find any if, for, or while statements inside compute kernels; branching is entirely replaced by hardware-native masking and stream-splitting. Memory is handled via a static arena provided by the Host. There is no malloc, no hidden threads, and no garbage collection, which guarantees predictable performance and eliminates race conditions by construction.

Under the hood, Lockstep targets LLVM IR directly to leverage industrial-grade optimization passes. It also generates a C-compatible header for easy integration with host applications written in C, C++, Rust, or Zig.

v0.1.0 includes a compiler with LLVM IR and C header emission, a CLI simulator for validating pipeline wiring and cardinality on small datasets, and an opt-in LSP server for real-time editor diagnostics, hover type info, and autocompletion.

You can check out the repository to see the syntax, and the roadmap outlines where the project is heading next, including parameterized SIMD widths and multi-stage pipeline composition. I would love to hear feedback on the language semantics, the type system, and the overall architecture! https://bit.ly/4lyvcF9 March 16, 2026 at 01:14AM
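To make the "branching replaced by masking" idea concrete, here is a toy sketch in plain Python (my own illustration, not Lockstep syntax): both sides of the branch are computed for every lane, then a mask blends the results, which is what SIMD hardware does in a single select instruction.

```python
# Illustration only: branch-free "masking" in place of an if statement.
# Plain Python stand-in for what SIMD select does in one instruction.

def masked_select(mask, if_true, if_false):
    """Elementwise select: out[i] = if_true[i] when mask[i], else if_false[i]."""
    return [t if m else f for m, t, f in zip(mask, if_true, if_false)]

# Branchy scalar code:  y = x * 2 if x > 0 else 0
xs = [3, -1, 4, -2]
mask = [x > 0 for x in xs]            # compare lane-by-lane
doubled = [x * 2 for x in xs]         # compute BOTH sides unconditionally...
zeros = [0] * len(xs)
ys = masked_select(mask, doubled, zeros)   # ...then blend by the mask
# ys == [6, 0, 8, 0]
```

The cost model is different from scalar code: every lane pays for both branch arms, but there is no divergence and no branch predictor in the picture, which is why straight-line kernels stay deterministic.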
Show HN: Open-source playground to red-team AI agents with exploits published https://bit.ly/4bawx1g
Show HN: Open-source playground to red-team AI agents with exploits published

We build runtime security for AI agents. The playground started as an internal tool we used to test our own guardrails, but we kept finding the same types of vulnerabilities because we think about attacks a certain way. At some point you need people who don't think like you. So we open-sourced it.

Each challenge is a live agent with real tools and a published system prompt. When a challenge ends, the full winning conversation transcript and guardrail logs are documented publicly.

Building the general-purpose agent itself was probably the most fun part. Getting it to reliably use tools, stay in character, and follow instructions while still being useful is harder than it sounds. That alone reminded us how early we all are in understanding and deploying these systems at scale.

The first challenge was to get an agent to call a tool it had been told never to call. Someone got through in around 60 seconds without ever asking for the secret directly (which taught us a lot). The next challenge focuses on data exfiltration with harder defences: https://bit.ly/4b98dgc https://bit.ly/3PCKDjq March 15, 2026 at 11:29PM
Saturday, 14 March 2026
Show HN: Signet.js – A minimalist reactivity engine for the modern web https://bit.ly/3P7Oghg
Show HN: Signet.js – A minimalist reactivity engine for the modern web https://bit.ly/4uuhYwV March 15, 2026 at 03:58AM
Show HN: GrobPaint: Somewhere Between MS Paint and Paint.net https://bit.ly/472TcKq
Show HN: GrobPaint: Somewhere Between MS Paint and Paint.net https://bit.ly/47wryWg March 14, 2026 at 11:41PM
Show HN: Structural analysis of the D'Agapeyeff cipher (1939) https://bit.ly/4lwRcQA
Show HN: Structural analysis of the D'Agapeyeff cipher (1939)

I am working on the D'Agapeyeff cipher, an unsolved cryptogram from 1939. Two findings that I haven't seen published before:

1. All 5 anomalous symbol values in the cipher cluster in the last column of a 14x14 grid. This turns out to be driven by a factor-of-2-and-7 positional pattern in the linear text.

2. Simulated annealing with Esperanto quadgrams (23M-character Leipzig corpus) on a 2x98 columnar transposition consistently outscores English by 200+ points and recovers the same Esperanto vocabulary across independent runs.

The cipher is not solved, but the combination of structural geometry and computational linguistics narrows the search space significantly. Work in progress, more to come! https://bit.ly/3PbwGc6 March 15, 2026 at 12:34AM
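For readers unfamiliar with quadgram fitness (the objective function the annealer climbs in attacks like this), here is a minimal sketch with a toy stand-in corpus; the real analysis uses a 23M-character Leipzig corpus, and corpus, smoothing, and floor penalty here are my own illustrative choices.

```python
import math
from collections import Counter

def quadgram_scorer(corpus: str):
    """Build a log10-probability quadgram fitness function from a corpus.
    Higher scores mean the text statistically resembles the corpus language."""
    counts = Counter(corpus[i:i + 4] for i in range(len(corpus) - 3))
    total = sum(counts.values())
    floor = math.log10(0.01 / total)   # heavy penalty for unseen quadgrams
    logs = {q: math.log10(c / total) for q, c in counts.items()}

    def score(text: str) -> float:
        return sum(logs.get(text[i:i + 4], floor)
                   for i in range(len(text) - 3))
    return score

# Toy corpus: a real attack would train on millions of characters.
score = quadgram_scorer("LABONANTAGOBONANTAGO" * 50)
assert score("BONANTAGO") > score("QXZJQWVKY")
```

Simulated annealing then proposes small changes to the transposition key, keeps improvements, and occasionally keeps regressions to escape local optima; the "Esperanto outscores English by 200+ points" claim is a comparison of exactly this kind of fitness value under two corpora.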
Friday, 13 March 2026
Show HN: Simple plugin to get Claude Code to listen to you https://bit.ly/4br70Qi
Show HN: Simple plugin to get Claude Code to listen to you

Hey HN, my cofounder and I got tired of CC ignoring our markdown files, so we spent four days building a plugin that automatically steers CC based on our previous sessions. The problem is usually post plan-mode.

What we've tried:
- Heavily use plan mode (works great)
- CLAUDE.md, AGENTS.md, MEMORY.md
- Local context folder (upkeep is a pain)
- Cursor rules (for Cursor)
- claude-mem (OSS) -> does session continuity, not steering

We use fusion search to find your CC steering corrections:
- user prompt embeddings + bm25
- correction embeddings + bm25
- time decay
- target query embeddings
- exclusions
- metadata hard filters (such as files)

The CC plugin:
- Automatically captures memories/corrections without you having to remind CC
- Automatically injects corrections without you having to remind CC to do it

The plugin merges, updates, and distills your memories, then injects the most relevant ones after each of your prompts.

We're not sure if we're alone in this. We're working on benchmarks to see how effective context injection actually is in steering CC, and we know we need to keep improving extraction, search, and integrations. We're passionate about the real-time, personalized context layer for agents: giving agents a way to understand what you mean when you say "this" or "that", and bringing the context of your world into a secure, structured, real-time layer all your agents can access.

Would appreciate feedback on how you get CC to actually follow your markdown files and understand your modus operandi, feedback on the plugin, or anything else about real-time memory and context. - Ankur https://bit.ly/4bx6joK March 14, 2026 at 12:15AM
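A fusion ranker like the one described above typically blends a dense-embedding similarity with a lexical BM25 score and then discounts by age. Here is a minimal sketch of that shape; the weights and half-life are illustrative assumptions, not the plugin's actual values.

```python
import math

def fused_score(embed_sim: float, bm25: float, age_days: float,
                half_life_days: float = 30.0,
                w_embed: float = 0.6, w_bm25: float = 0.4) -> float:
    """Blend dense similarity with BM25, then apply exponential time decay
    so stale corrections rank lower. All constants here are hypothetical."""
    decay = 0.5 ** (age_days / half_life_days)
    return (w_embed * embed_sim + w_bm25 * bm25) * decay

fresh = fused_score(embed_sim=0.9, bm25=0.8, age_days=0)
stale = fused_score(embed_sim=0.9, bm25=0.8, age_days=60)
assert math.isclose(stale, fresh / 4)   # two half-lives => quarter weight
```

Hard filters (e.g. by file path) and exclusions would run before scoring, so the ranker only ever sees candidates that are structurally eligible.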
Show HN: Kube-pilot – AI engineer that lives in your Kubernetes cluster https://bit.ly/4lq1wK4
Show HN: Kube-pilot – AI engineer that lives in your Kubernetes cluster

I built kube-pilot — an autonomous AI agent that runs inside your Kubernetes cluster and does the full dev loop: writes code, builds containers, deploys services, verifies they're healthy, and closes the ticket. You file a GitHub issue; it does the rest.

What makes this different from AI coding tools: kube-pilot doesn't just generate code and hand it back to you. It lives inside the cluster with direct access to the entire dev stack — git, Tekton (CI/CD), Kaniko (container builds), ArgoCD (GitOps deployments), kubectl, Vault. Every tool call produces observable state that feeds into the next decision. The cluster isn't just where code runs — it's where the agent thinks.

The safety model: all persistent changes go through git, so everything is auditable and reversible. ArgoCD is the only thing that writes to the cluster. Secrets stay behind Vault — the agent creates ExternalSecret references, never touches raw credentials. Credentials are scrubbed before reaching the LLM.

Live demo: I filed GitHub issues asking it to build a 4-service office suite (auth, docs API, notification worker, API gateway). It built and deployed all of them autonomously. You can see the full agent loop — code, builds, deploys, verification, comments — on the closed issues:
- https://bit.ly/4b8SihV...
- https://bit.ly/4b8SiOX...
- https://bit.ly/4lBAjEw...
- https://bit.ly/4sPN6FP...

One helm install gives you everything — the agent, Gitea (git + registry), Tekton, ArgoCD, Vault, External Secrets. No external dependencies.

Coming next: Slack and Jira integrations (receive tasks and post updates where your team already works), Prometheus metrics and Grafana dashboards for agent observability, and Alertmanager integration so firing alerts automatically become issues that kube-pilot investigates and fixes.

Early proof of concept. Rough edges. But it works. https://bit.ly/3Pk0p2x March 14, 2026 at 03:49AM
Show HN: I wrote my first neural network https://bit.ly/4ltOFGV
Show HN: I wrote my first neural network

I have been interested in neural nets since the '90s. I've done quite a bit of reading but had never gotten around to writing code. I used Gemini in place of Wikipedia to fill in the gaps in my knowledge. The coolest part was learning about dual numbers: you can see in early commits that I did not yet know about auto-diff; I was thinking I'd have to integrate a CAS library or something. Now I'm off to play with TensorFlow. https://bit.ly/4cGH7y9 March 14, 2026 at 01:21AM
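For anyone who hasn't met dual numbers: extend arithmetic with a value a + b·ε where ε² = 0, and the ε-coefficient of any computed expression is exactly its derivative, which is the core of forward-mode auto-diff. A minimal self-contained sketch (not the project's actual code):

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0. Carrying b through arithmetic
    yields the derivative automatically (forward-mode autodiff)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b               # value, derivative
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a,
                    self.a * o.b + self.b * o.a)   # product rule, eps^2 = 0
    __rmul__ = __mul__

def f(x):                   # f(x) = 3x^2 + 2x + 1
    return 3 * x * x + 2 * x + 1

y = f(Dual(4.0, 1.0))       # seed: dx/dx = 1
assert y.a == 57.0          # f(4)  = 48 + 8 + 1
assert y.b == 26.0          # f'(4) = 6*4 + 2
```

No symbolic manipulation and no finite-difference error: the derivative falls out of the same pass that computes the value, which is why no CAS library is needed.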
Show HN: EdgeWhisper – On-device voice-to-text for macOS (Voxtral 4B via MLX) https://bit.ly/4cMsuJQ
Show HN: EdgeWhisper – On-device voice-to-text for macOS (Voxtral 4B via MLX)

I built a macOS voice dictation app where zero bytes of audio ever leave your machine. EdgeWhisper runs Voxtral Mini 4B Realtime (Mistral AI, Apache 2.0) locally on Apple Silicon via the MLX framework. Hold a key, speak, release — text appears at your cursor in whatever app has focus.

Architecture:
- Native Swift (SwiftUI + AppKit). No Electron.
- Voxtral 4B inference via MLX on the Neural Engine. ~3GB model, runs in ~2GB RAM on M1+.
- Dual text injection: AXUIElement (preserves the undo stack) with NSPasteboard+CGEvent fallback.
- 6-stage post-processing pipeline: filler removal → dictionary → snippets → punctuation → capitalization → formatting.
- Sliding-window KV cache for unlimited streaming without latency degradation.
- Configurable transcription delay (240ms–2.4s). Sweet spot at 480ms.

What it does well:
- Works in 20+ terminals/IDEs (VS Code, Xcode, iTerm2, Warp, JetBrains). Most dictation tools break in terminals — we detect them and switch injection strategy.
- Removes filler words automatically ("um", "uh", "like").
- 13 languages with auto-detection.
- Personal dictionary + snippet expansion with variable support.
- Works fully offline after model download. No accounts, no telemetry, no analytics.

What it doesn't do (yet):
- No file/meeting transcription (coming)
- No translation (coming)
- No Linux/Windows (macOS only, Apple Silicon required)

Pricing: free tier (5 min/day, no account needed). Pro at $7.99/mo or $79.99/yr.

I'd love feedback on:
1. Would local LLM post-processing (e.g., Phi-4-mini via MLX) for grammar/tone be worth the extra ~1GB RAM?
2. For developers using voice→code workflows: what context would you want passed to your editor?
3. Anyone else building on Voxtral Realtime? Curious about your experience with the causal audio encoder.

https://bit.ly/4bqT09c March 13, 2026 at 11:57PM
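As a flavor of what the first pipeline stage (filler removal) has to do, here is a deliberately naive Python sketch — my own illustration, not EdgeWhisper's Swift code — which also shows why the real pass needs context awareness.

```python
# Naive filler-removal sketch. A production pass needs context: "like" as a
# verb ("I like it") should survive, which this token-level version gets wrong.
FILLERS = {"um", "uh", "like"}

def strip_fillers(text: str) -> str:
    """Drop tokens that are pure fillers, ignoring trailing punctuation."""
    kept = [w for w in text.split() if w.strip(",.").lower() not in FILLERS]
    return " ".join(kept)

assert strip_fillers("um, so I uh wanted to say hi") == "so I wanted to say hi"
```

The downstream stages (dictionary, snippets, punctuation, capitalization, formatting) would each be a similar pure text-to-text transform, which is what makes a fixed-order pipeline tractable.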
Show HN: What was the world listening to? Music charts, 20 countries (1940–2025) https://bit.ly/40pkPcZ
Show HN: What was the world listening to? Music charts, 20 countries (1940–2025) I built this because I wanted to know what people in Japan were listening to the year I was born. That question spiraled: how does a hit in Rome compare to what was charting in Lagos the same year? How did sonic flavors propagate as streaming made musical influence travel faster than ever? 88mph is a playable map of music history: 230 charts across 20 countries, spanning 8 decades (1940–2025). Every song is playable via YouTube or Spotify. It's open source and I'd love help expanding it — there's a link to contribute charts for new countries and years. The goal is to crowdsource a complete sonic atlas of the world. https://bit.ly/4s6DV3v March 10, 2026 at 05:18PM
Show HN: fftool – A Terminal UI for FFmpeg – Shows Command Before It Runs https://bit.ly/3NwJ71I
Show HN: fftool – A Terminal UI for FFmpeg – Shows Command Before It Runs https://bit.ly/4sCyUzG March 13, 2026 at 11:08AM
Thursday, 12 March 2026
Show HN: Global Maritime Chokepoints https://bit.ly/4sbPHdc
Show HN: Global Maritime Chokepoints https://bit.ly/4cLCnaE March 13, 2026 at 05:42AM
Show HN: Slop or not – can you tell AI writing from human in everyday contexts? https://bit.ly/4uwqqvW
Show HN: Slop or not – can you tell AI writing from human in everyday contexts?

I’ve been building a crowd-sourced AI detection benchmark. Two responses to the same prompt — one from a real human (pre-2022, provably before the prevalence of AI slop on the internet), one generated by AI. You pick the slop. Three wrong and you’re out.

The dataset: 16K human posts from Reddit, Hacker News, and Yelp, each paired with AI generations from 6 models across two providers (Anthropic and OpenAI) at three capability tiers. Same prompt, length-matched, no adversarial coaching — just the model’s natural voice with platform context. Every vote is logged with model, tier, source, response time, and position.

Early findings from testing: Reddit posts are easy to spot (humans are too casual for AI to mimic); HN is significantly harder. I'll release the full dataset on HuggingFace, and I'll publish a paper if I can get enough data via this crowdsourced study. If you play the HN-only mode, you’re helping calibrate how detectable AI is on here specifically.

Would love feedback on the pairs — are any trivially obvious? Are some genuinely hard? https://bit.ly/4upYoBV March 12, 2026 at 10:53PM
Wednesday, 11 March 2026
Show HN: A context-aware permission guard for Claude Code https://bit.ly/4cNXEk1
Show HN: A context-aware permission guard for Claude Code

We needed something like --dangerously-skip-permissions that doesn’t nuke your untracked files, exfiltrate your keys, or install malware. Claude Code's permission system is allow-or-deny per tool, but that doesn’t really scale: deleting some files is fine sometimes, and git checkout is sometimes not fine. Even when you curate permissions, 200-IQ Opus can find a way around them. Maintaining a deny list is a fool's errand.

nah is a PreToolUse hook that classifies every tool call by what it actually does, using a deterministic classifier that runs in milliseconds. It maps commands to action types like filesystem_read, package_run, db_write, and git_history_rewrite, and applies policies: allow, context (depends on the target), ask, or block. Not everything can be classified, so you can optionally escalate ambiguous calls to an LLM, but that’s not required. Anything unresolved you can approve, and you can configure the taxonomy so you don’t get asked again.

It works out of the box with sane defaults, no config needed, but you can customize it fully if you want to. No dependencies, stdlib Python, MIT.

pip install nah && nah install

https://bit.ly/4uo6cnR https://bit.ly/3PvHlOS March 12, 2026 at 12:26AM
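The shape of a deterministic classify-then-apply-policy step can be sketched in a few lines of stdlib Python. This is a toy of my own using the action-type names from the post, not nah's actual taxonomy or rules.

```python
import shlex

# Toy rule tables (illustrative, not nah's real taxonomy).
RULES = {"cat": "filesystem_read", "ls": "filesystem_read",
         "rm": "filesystem_write", "git": "vcs"}
POLICIES = {"filesystem_read": "allow", "filesystem_write": "ask",
            "vcs": "context"}

def classify(command: str) -> tuple[str, str]:
    """Map a shell command to (action_type, policy); unknowns fall back to ask."""
    argv = shlex.split(command)
    if argv[:2] == ["git", "rebase"]:
        return "git_history_rewrite", "ask"       # finer-grained git rule
    action = RULES.get(argv[0] if argv else "", "unknown")
    return action, POLICIES.get(action, "ask")    # unresolved -> ask a human

assert classify("cat README.md") == ("filesystem_read", "allow")
assert classify("git rebase -i HEAD~3") == ("git_history_rewrite", "ask")
```

Because the tables are plain data, running in milliseconds is easy, and the optional LLM escalation only has to handle whatever falls through to "unknown".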
Show HN: Conduit – Headless browser with SHA-256 hash chains and Ed25519 audit trails https://bit.ly/40qLGW8
Show HN: Conduit – Headless browser with SHA-256 hash chains and Ed25519 audit trails

I've been building AI agent tooling and kept running into the same problem: agents browse the web, take actions, fill out forms, scrape data -- and there's zero proof of what actually happened. Screenshots can be faked. Logs can be edited. If something goes wrong, you're left pointing fingers at a black box.

So I built Conduit. It's a headless browser (Playwright under the hood) that records every action into a SHA-256 hash chain and signs the result with Ed25519. Each action gets hashed with the previous hash, forming a tamper-evident chain. At the end of a session, you get a "proof bundle" -- a JSON file containing the full action log, the hash chain, the signature, and the public key. Anyone can independently verify the bundle without trusting the party that produced it.

The main use cases I'm targeting:
- *AI agent auditing* -- You hand an agent a browser. Later you need to prove what it did. Conduit gives you cryptographic receipts.
- *Compliance automation* -- SOC 2, GDPR data subject access workflows, anything where you need evidence that a process ran correctly.
- *Web scraping provenance* -- Prove that the data you collected actually came from where you say it did, at the time you say it did.
- *Litigation support* -- Capture web content with a verifiable chain of custody.

It also ships as an MCP (Model Context Protocol) server, so Claude, GPT, and other LLM-based agents can use the browser natively through tool calls. The agent gets browse, click, fill, and screenshot, and the proof bundle builds itself in the background.

Free, MIT-licensed, pure Python. No accounts, no API keys, no telemetry.

GitHub: https://bit.ly/40mFlLj
Install: `pip install conduit-browser`

Would love feedback on the proof bundle format and the MCP integration. Happy to answer questions about the cryptographic design. March 12, 2026 at 12:15AM
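The tamper-evident property is easy to demonstrate with stdlib Python. This is my own minimal reconstruction of the hash-chain idea, not Conduit's actual bundle format; the final head hash is what an Ed25519 signature would then commit to.

```python
import hashlib, json

def chain_actions(actions):
    """Hash each action together with the previous hash; the final digest
    commits to the entire history, so editing any entry breaks the chain."""
    prev = "0" * 64                                  # genesis value
    out = []
    for action in actions:
        record = json.dumps(action, sort_keys=True) + prev
        prev = hashlib.sha256(record.encode()).hexdigest()
        out.append({"action": action, "hash": prev})
    return out

def verify(chained):
    prev = "0" * 64
    for entry in chained:
        record = json.dumps(entry["action"], sort_keys=True) + prev
        if hashlib.sha256(record.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = chain_actions([{"op": "goto", "url": "https://example.com"},
                     {"op": "click", "selector": "#submit"}])
assert verify(log)
log[0]["action"]["op"] = "fill"      # tamper with history...
assert not verify(log)               # ...and verification fails
```

Signing only the final hash is enough: because each hash includes its predecessor, a valid signature over the head transitively authenticates every action in the log.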
Tuesday, 10 March 2026
Show HN: CryptoFlora – Visualize SHA256 to a flower using Rose curves https://bit.ly/4lkcFMp
Show HN: CryptoFlora – Visualize SHA256 to a flower using Rose curves

I made this side tool to visualize SHA-256 hashes while building a loyalty card wallet application. The goal is to let you tell at a glance whether a collected stamp is certified by the issuer, just by looking at it, instead of scanning something like a QR code or matching a serial number. I think there are more potential use cases, like creating a random avatar from an email address. Feel free to share your feedback :)

Source code: https://bit.ly/3Ngkfeo https://bit.ly/4cBM2jV March 11, 2026 at 04:52AM
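One way such a mapping can work is to derive rose-curve parameters (r = cos(kθ)) from the digest bytes; the sketch below is my own guess at the approach, and CryptoFlora's actual parameterization may differ.

```python
import hashlib, math

def hash_to_rose(data: bytes, samples: int = 360):
    """Derive a rose curve r = cos(k*theta) from a SHA-256 digest.
    Byte choices (petal count, hue) are illustrative assumptions."""
    digest = hashlib.sha256(data).digest()
    k = digest[0] % 7 + 2                  # petal parameter from first byte
    hue = digest[1] / 255 * 360            # color angle from second byte
    points = []
    for i in range(samples):
        theta = 2 * math.pi * i / samples
        r = math.cos(k * theta)            # polar rose
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return k, hue, points

k, hue, pts = hash_to_rose(b"stamp-0001")
assert 2 <= k <= 8 and len(pts) == 360
```

Because SHA-256 is deterministic but avalanche-sensitive, the same stamp always draws the same flower while a one-bit change produces a visibly different one, which is exactly the "verify by looking" property.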
Show HN: Readhn – AI-Native Hacker News MCP Server (Discover, Trust, Understand) https://bit.ly/3Nw8u3F
Show HN: Readhn – AI-Native Hacker News MCP Server (Discover, Trust, Understand)

I felt frustrated finding high-signal discussions on HN, and I started this project to better understand how this community actually works. That led me to build readhn, an MCP server that helps with three things:
- Discover: find relevant stories/comments by keyword, score, and time window
- Trust: identify credible voices using EigenTrust-style propagation from seed experts
- Understand: show why each result is ranked, with explicit signals instead of a black-box score

It includes 6 tools: discover_stories, search, find_experts, expert_brief, story_brief, and thread_analysis. I also added readhn setup so AI agents can auto-configure it (Claude Code, Codex, Cursor, and others) after pip install.

I’d love feedback on: 1) whether these ranking signals match how you evaluate HN quality, 2) trust-model tradeoffs, 3) what would make this useful in your daily workflow.

If this is useful to you, starring the repo helps others discover it: https://bit.ly/40pmNKh March 11, 2026 at 01:49AM
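EigenTrust-style propagation amounts to a power iteration over a normalized local-trust matrix, biased toward a set of pre-trusted seeds. A toy version (my own illustration; readhn's actual signals and damping may differ):

```python
def eigentrust(local, pretrusted, alpha=0.15, iters=50):
    """local[i][j] = normalized trust user i places in user j (rows sum to 1).
    The pre-trusted seed set keeps the fixed point anchored to known experts."""
    n = len(local)
    seed = [1.0 / len(pretrusted) if i in pretrusted else 0.0
            for i in range(n)]
    t = seed[:]
    for _ in range(iters):
        t = [alpha * seed[j] +
             (1 - alpha) * sum(t[i] * local[i][j] for i in range(n))
             for j in range(n)]
    return t

# User 0 is a seed expert who vouches for user 1; nobody vouches for user 2.
local = [[0.0, 1.0, 0.0],
         [1.0, 0.0, 0.0],
         [0.5, 0.5, 0.0]]
scores = eigentrust(local, pretrusted={0})
assert scores[1] > scores[2]
```

The appeal for "Trust" here is that the iteration is transparent: each user's score decomposes into who vouched for them and with what weight, which fits the "explicit signals instead of a black-box score" goal.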
Show HN: Claude Code Token Elo https://bit.ly/4s44FSx
Show HN: Claude Code Token Elo https://bit.ly/4ddLBfJ March 10, 2026 at 05:29AM
Show HN: Modulus – Cross-repository knowledge orchestration for coding agents https://bit.ly/3P3vAPB
Show HN: Modulus – Cross-repository knowledge orchestration for coding agents

Hello HN, we're Jeet and Husain from Modulus ( https://bit.ly/4s9fGBW ) - a desktop app that lets you run multiple coding agents with shared project memory. We built it to solve two problems we kept running into:
- Cross-repo context is broken. When working across multiple repositories, agents don't understand the dependencies between them. Even with two repos open in separate Cursor windows, we still have to manually explain the backend API schema while making changes in the frontend repo.
- Agents lose context. Switching between coding agents often means losing context and repeating the same instructions.

Modulus shares memory across agents and repositories so they can understand your entire system. It's an alternative to tools like Conductor for orchestrating AI coding agents, but we focused specifically on multi-repo workflows (e.g., backend repo + client repo + shared library repo + AI agents repo). We built our own memory and context engine from the ground up specifically for coding agents.

Why build another agent orchestration tool? It came from our own problem. While working on our last startup, Husain and I were working across two different repositories. Working across repos meant manually pasting API schemas between Cursor windows — telling the frontend agent what the backend API looked like again and again. So we built a small context engine to share knowledge across repos and hooked it up to Cursor via MCP. This later became Modulus.

Soon, Modulus will allow teams to share knowledge with others to improve their workflows with AI coding agents - enabling team collaboration in the era of AI coding. Our API will allow developers to switch between coding agents or IDEs without losing any context.

If you want to see a quick demo before trying it out, here is our launch post: https://bit.ly/3NtfI8E We'd greatly appreciate any feedback you have and hope you get the chance to try out Modulus. https://bit.ly/4s9fGBW March 10, 2026 at 07:52PM