Music046 | Nigeria No1. Daily Updates | Contact Us - +2349077287056
Saturday, 14 February 2026
Show HN: Git Navigator – Use Git Without Learning Git https://bit.ly/3ZDZo7t
Show HN: Git Navigator – Use Git Without Learning Git Hey HN, I built a VS Code extension that lets you do Git things without memorizing Git commands. You know what you want to do: move this commit over there, undo that thing you just did, split this big commit into two smaller ones. Git Navigator lets you just... do that. Drag a commit to rebase it. Cherry-pick (copy) it onto another branch. Click to stage specific lines. The visual canvas shows you what's happening, so you're not guessing what `git rebase -i HEAD~3` actually means. The inspiration was Sapling's Interactive Smartlog, which I used heavily at Meta. I wanted that same experience but built specifically for Git. A few feature callouts:
- Worktrees — create, switch, and delete linked worktrees from the graph. All actions are worktree-aware, so you're always working in the right checkout.
- Stacked workflows — first-class stack mode if you're into stacked diffs, but totally optional.
- Conflict resolution — block-level choices instead of hunting through `<<<<<<<` markers.
Works in VS Code, Cursor, and Antigravity. Just needs a Git repo. Site: https://gitnav.xyz VSCode Marketplace: https://marketplace.visualstudio.com/items?itemName=binhongl... Open VSX: https://open-vsx.org/extension/binhonglee/git-navigator https://gitnav.xyz February 15, 2026 at 03:43AM
Show HN: Twsnmp FK – Lightweight NMS Built with Go, Wails, and Svelte https://bit.ly/4qFdYGX
Show HN: Twsnmp FK – Lightweight NMS Built with Go, Wails, and Svelte Hi HN, developer here. I’ve been developing and maintaining a network management tool called TWSNMP for about 25 years. This new version, "FK" (Fresh Konpaku), is a complete modern rewrite. Why I built this: most enterprise NMSes are heavy, server-based, and complex to set up. I wanted something that runs natively on a desktop, is extremely fast to launch, and provides deep insights like packet analysis and NetFlow without a huge infrastructure. The Tech Stack:
- Backend: Go (for high-speed log processing and SNMP polling)
- Frontend: Svelte (to keep the UI snappy and lightweight)
- Bridge: Wails (to build a cross-platform desktop app without the bulk of Electron)
I’m looking for feedback from fellow network admins and developers. What features do you find most essential in a modern, lightweight NMS? GitHub: https://bit.ly/407DjhQ https://bit.ly/407DjhQ February 15, 2026 at 01:33AM
Show HN: An x86 assembly game from 2002, ported to WebAssembly with Claude Code https://bit.ly/4tKiZ3R
Show HN: An x86 assembly game from 2002, ported to WebAssembly with Claude Code https://bit.ly/4ax1DhP February 15, 2026 at 01:41AM
Show HN: Stack Overflow, but for AI agents (questions, answers, logs, context) https://bit.ly/4ajU5jV
Show HN: Stack Overflow, but for AI agents (questions, answers, logs, context) Hi HN — I built ChatOverflow, a Q&A forum for AI coding agents (Stack Overflow style). Agents keep re-learning the same debugging patterns each run (tool/version quirks, setup issues, framework behaviors). ChatOverflow is a shared place where agents post a question (symptom + logs + minimal reproduction + env context) and an answer (steps + why it works), so future agents can search and reuse it. Small test on 57 SWE-bench Lite tasks: letting agents search prior posts reduced average time from 18.7 min to 10.5 min (-44%). A big bet here is that karma/upvotes/acceptance can act as a lightweight “verification signal” for solutions that consistently work in practice. Inspired by Moltbook. Feedback wanted on:
1. Where would this fit in your agent workflow?
2. How would you reduce prompt injection and prevent agents coordinating/brigading to push adversarial or low-quality posts?
https://bit.ly/4twVg6V February 15, 2026 at 01:04AM
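The post structure described above (symptom + logs + minimal reproduction + env context, with karma as a verification signal) can be sketched as a tiny data model. All field names and the ranking rule below are my own guesses for illustration, not ChatOverflow's actual schema or API:

```python
from dataclasses import dataclass

@dataclass
class Post:
    # Fields mirror the question/answer format described in the post
    # (names are hypothetical, not ChatOverflow's real schema)
    symptom: str
    logs: str
    repro: str
    env: str
    answer: str
    score: int = 0  # karma/acceptance acting as a lightweight verification signal

def search(posts, query):
    """Rank posts by naive term overlap, breaking ties by score."""
    terms = set(query.lower().split())

    def relevance(p):
        text = f"{p.symptom} {p.logs} {p.env}".lower()
        return sum(1 for t in terms if t in text)

    hits = [p for p in posts if relevance(p) > 0]
    return sorted(hits, key=lambda p: (relevance(p), p.score), reverse=True)
```

A real deployment would need far more (deduplication, injection filtering, trust weighting), but the core loop is "structured post in, ranked reusable answer out".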
Friday, 13 February 2026
Show HN: ClipPath – Paste screenshots as file paths in your terminal https://bit.ly/3OqAvd0
Show HN: ClipPath – Paste screenshots as file paths in your terminal https://bit.ly/4qAvgEW February 14, 2026 at 02:08AM
Show HN: Explore ASN Relationships and BGP Route History with Real Internet Data https://bit.ly/4qCro6v
Show HN: Explore ASN Relationships and BGP Route History with Real Internet Data Hi HN, I’ve been working on a side project called ipiphistory.com. It’s a searchable explorer for:
– ASN relationships (provider / peer / customer)
– BGP route history
– IP to ASN mapping over time
– AS path visibility
– Organization and geolocation data
The idea started from my frustration when explaining BGP concepts to junior engineers and students — most tools are fragmented across multiple sources (RouteViews, RIPE RIS, PeeringDB, etc.). This project aggregates and indexes historical routing data to make it easier to:
– Understand how ASNs connect
– Explore real-world routing behavior
– Investigate possible hijacks or path changes
– Learn BGP using real data
It’s still early and I’d really appreciate feedback from the HN community — especially on usability and features you’d like to see. Happy to answer technical questions about data ingestion and indexing as well. https://bit.ly/4auRK4j February 14, 2026 at 12:12AM
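The "IP to ASN mapping over time" idea can be illustrated with a toy interval lookup. The announcement records below are entirely made up (documentation prefixes and private ASNs); the real site aggregates RouteViews/RIPE RIS data:

```python
import ipaddress
from datetime import date

# Hypothetical announcement history: (prefix, origin ASN, first_seen, last_seen).
# Real data would come from RouteViews / RIPE RIS archives.
ANNOUNCEMENTS = [
    ("203.0.113.0/24", 64500, date(2020, 1, 1), date(2023, 6, 30)),
    ("203.0.113.0/24", 64501, date(2023, 7, 1), date(2026, 1, 1)),
]

def asn_at(ip, when):
    """Return the origin ASN of the most-specific prefix covering `ip` at `when`."""
    addr = ipaddress.ip_address(ip)
    best = None
    for prefix, asn, start, end in ANNOUNCEMENTS:
        net = ipaddress.ip_network(prefix)
        if addr in net and start <= when <= end:
            # Longest-prefix match wins, as in real BGP route selection
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, asn)
    return best[1] if best else None
```

An origin change like the one modeled here (64500 handing the prefix to 64501) is exactly the kind of event a hijack investigation would want to surface.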
Show HN: Bubble Sort on a Turing Machine https://bit.ly/3MdtR9w
Show HN: Bubble Sort on a Turing Machine Bubble sort is pretty simple in most programming languages ... what about on a Turing machine? I used all three of Claude 4.6, GLM 5, and GPT 5.2 to get a result, so this exercise was not quite trivial, at least at this time. The resulting machine, bubble_sort_unary.yaml, will take this input: 111011011111110101111101111 and give this output: 101101110111101111101111111 I.e., it's sorting the array [3,2,7,1,5,4]. The machine has 31 states and requires 1424 steps before it comes to a halt. It also introduces two extra symbols onto the tape, 'A' and 'B'. (You could argue that 0 is also an extra symbol, because turingmachine.io uses blank, ' ', as well.) When I started writing the code, the LLM (Claude) balked at using unary numbers, so we implemented bubble_sort.yaml, which uses the tape symbols '1', '2', '3', '4', '5', '6', '7'. This machine has fewer states, 25, and requires only 63 steps to perform the sort. So it's easier to watch it work, though it's not as generalized as the other TM. Some comments about how the 31 states of bubble_sort_unary.yaml operate:

| Group | Count | Purpose |
|---|---|---|
| `seek_delim_{clean,dirty}` | 2 | Pass entry: scan right to the next `0` delimiter between adjacent numbers. |
| `cmpR_*`, `cmpL_*`, `cmpL_ret_*`, `cmpL_fwd_*` | 8 | Comparison: alternately mark units in the right (`B`) and left (`A`) numbers to compare their sizes. |
| `chk_excess_*`, `scan_excess_*`, `mark_all_X_*` | 6 | Excess check: right number exhausted — see if unmarked `1`s remain on the left (meaning L > R, swap needed). |
| `swap_*` | 7 | Swap: bubble each `X`-marked excess unit rightward across the `0` delimiter. |
| `restore_*` | 6 | Restore: convert `A`, `B`, `X` marks back to `1`s, then advance to the next pair. |
| `rewind` / `done` | 2 | Rewind to start after a dirty pass, or halt. |

(The above is in the README.md if it doesn't render on HN.) I'm curious if anyone can suggest refinements or further ideas.
And please send pull requests if you're so inclined. My development path: I started by writing a pretty simple INITIAL_IDEAS.md, which got updated somewhat, then the LLM created a SPECIFICATION.md. For the bubble_sort_unary.yaml TM I had to get the LLMs to build a SPEC_UNARY.md, because too much context was confusing them. I made 21 commits throughout the project and worked for about 6 hours (I was able to multi-task, so it wasn't 6 hours of hard effort). I spent about $14 on tokens via Zed and asked some questions via t3.chat ($8/month plan). A final question: what open-source license is good for these kinds of mini-projects? I took the path of least resistance and used MIT, but I observe that turingmachine.io uses BSD 3-Clause. I've heard of "MIT with Commons Clause"; what's the landscape around these kinds of license questions nowadays? https://bit.ly/4kymlCE February 13, 2026 at 10:43PM
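The unary tape format quoted above is easy to check independently: numbers are runs of `1`s separated by single `0`s. This sketch decodes the quoted input, sorts it, and confirms it matches the quoted output of the machine:

```python
def decode_unary(tape):
    """Decode a 0-delimited unary tape, e.g. '111011' -> [3, 2]."""
    return [len(run) for run in tape.split("0")]

def encode_unary(nums):
    """Inverse of decode_unary: [3, 2] -> '111011'."""
    return "0".join("1" * n for n in nums)

# The input/output pair quoted in the post
tape_in  = "111011011111110101111101111"
tape_out = "101101110111101111101111111"

assert decode_unary(tape_in) == [3, 2, 7, 1, 5, 4]
# Sorting the decoded numbers and re-encoding reproduces the machine's output
assert encode_unary(sorted(decode_unary(tape_in))) == tape_out
```

This only verifies the encoding convention and the quoted example, of course, not the 31-state machine itself; for that you'd step the YAML on turingmachine.io.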
Thursday, 12 February 2026
Show HN: Yori – Isolating AI Logic into "Semantic Containers" (Docker for Code) https://bit.ly/4kzdMHP
Show HN: Yori – Isolating AI Logic into "Semantic Containers" (Docker for Code) Hi HN, I've been a developer for some time now, and like many of you, I've been frustrated by the "All-or-Nothing" problem with AI coding tools. You ask an AI to fix a bug or implement a function, and it rewrites the whole file. It changes your imports, renames your variables, or deletes comments it deems unnecessary. It’s like giving a junior developer (like me) root access to your production server just to change a config file. So, 29 days ago, I started building Yori to solve the trust problem.

The Concept: Semantic Containers
Yori introduces a syntax that acts like a firewall for AI. You define a $${ ... }$$ block inside a text file. Outside the block (the Host): your manual code, architecture, and structure. The AI cannot touch this. Inside the block (the Container): you write natural-language intent. The AI can only generate code here.

Example: myutils.md

```cpp
EXPORT: "myfile.cpp"
// My manual architecture - AI cannot change this
#include "utils.h"

void process_data() {
    // Container: The AI is sandboxed here, but inherits the rest of the file as context
    $${
        Sort the incoming data vector using quicksort.
        Filter out negative numbers.
        Print the result.
    }$$
}
EXPORT: END
```

How it works: Yori is a C++ wrapper that parses these files. Whatever is within the EXPORT block and outside the containers ($${ }$$) is copied as-is. When you run `yori myutils.md -make -series`, it sends the prompts to a local (Ollama) or cloud LLM, generates the syntax, fills the blocks, and compiles the result using your native toolchain (GCC/Clang/Python). If compilation fails, it feeds the error back to the LLM in a retry loop (self-healing).

Why I think this matters:
1. Safety: You stop giving the AI "root access" to your files.
2. Intent as Source: The prompt stays in the file. If you want to port your logic from C++ to Rust, you keep the prompts and just change the compile target.
3.
Incremental Builds (to be added soon): Named containers allow for caching. If the prompt hasn't changed, you don't pay for an API call. It’s open source (MIT), C++17, and works locally. I’d love feedback on the "Semantic Container" concept. Is this the abstraction layer we've been missing for AI coding? Let me hear your ideas. Also, if you can't run yori.exe, tell me what went wrong and we'll see how to fix it; I've opened a GitHub issue for this. I'm also working on documentation for the project (GitHub wiki), so expect that soon. GitHub: https://bit.ly/4qysa4w Thanks! February 13, 2026 at 05:17AM
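The container-extraction step described above can be sketched in a few lines. This is just a regex illustration of the $${ ... }$$ syntax, not Yori's actual C++ parser, and the `generate` callback stands in for the LLM call:

```python
import re

# Matches a semantic container: everything between $${ and }$$ (the prompt)
CONTAINER = re.compile(r"\$\$\{(.*?)\}\$\$", re.DOTALL)

def render(source, generate):
    """Replace each $${ ... }$$ block with generate(prompt).

    Host text outside the containers is copied through untouched, which is
    the core safety property Yori describes.
    """
    return CONTAINER.sub(lambda m: generate(m.group(1).strip()), source)

src = "int main() { $${ print hello }$$ }"
# A stand-in for the LLM: just echo the prompt as a comment
out = render(src, lambda prompt: "/* generated for: " + prompt + " */")
```

The retry loop Yori adds on top would wrap `render` in compile-check-feedback iterations; only the fill-in step is shown here.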
Show HN: MicroGPT in 243 Lines – Demystifying the LLM Black Box https://bit.ly/4aMZhN8
Show HN: MicroGPT in 243 Lines – Demystifying the LLM Black Box The release of microgpt by Andrej Karpathy is a foundational moment for AI transparency. In exactly 243 lines of pure, dependency-free Python, Karpathy has implemented the complete GPT algorithm from scratch. As a PhD scholar investigating AI and Blockchain, I see this as the ultimate tool for moving beyond the "black box" narrative of Large Language Models (LLMs).

The Architecture of Simplicity
Unlike modern frameworks that hide complexity behind optimized CUDA kernels, microgpt exposes the raw mathematical machinery. The code implements:
- The Autograd Engine: a custom Value class that handles the recursive chain rule for backpropagation without any external libraries.
- GPT-2 Primitives: atomic implementations of RMSNorm, Multi-head Attention, and MLP blocks, following the GPT-2 lineage with modernizations like ReLU.
- The Adam Optimizer: a pure Python version of the Adam optimizer, proving that the "magic" of training is just well-orchestrated calculus.

The Shift to the Edge: Privacy, Latency, and Power
For my doctoral research at Woxsen University, this codebase serves as a blueprint for the future of Edge AI. As we move away from centralized, massive server farms, the ability to run "atomic" LLMs directly on hardware is becoming a strategic necessity. Karpathy's implementation provides empirical clarity on how we can incorporate on-device MicroGPTs to solve three critical industry challenges:
- Better Latency: by eliminating the round-trip to the cloud, on-device models enable real-time inference. Understanding these 243 lines allows researchers to optimize the "atomic" core specifically for edge hardware constraints.
- Data Protection & Privacy: in a world where data is the new currency, processing information locally on the user's device ensures that sensitive inputs never leave the personal ecosystem, fundamentally aligning with modern data sovereignty standards.
Mastering the Primitives: For Technical Product Managers, this project proves that "intelligence" doesn't require a dependency-heavy stack. We can now envision lightweight, specialized agents that are fast, private, and highly efficient. Karpathy’s work reminds us that to build the next generation of private, edge-native AI products, we must first master the fundamentals that fit on a single screen of code. The future is moving toward decentralized, on-device intelligence built on these very primitives. Link: https://bit.ly/3ODXJfM February 13, 2026 at 03:38AM
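The autograd Value class mentioned above can be illustrated with a minimal scalar version. This is a generic sketch of the idea (recording children and a per-op backward rule, then applying the chain rule in reverse topological order), not Karpathy's actual 243-line implementation:

```python
class Value:
    """A tiny autograd scalar: tracks data, gradient, and how it was produced."""

    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._children = children
        self._backward = lambda: None  # filled in by the op that created this node

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def backward():
            # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule node by node
        order, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    build(c)
                order.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x = Value(3.0)
y = x * x + x   # dy/dx = 2x + 1 = 7 at x = 3
y.backward()
```

Everything else in a GPT (attention, norms, Adam) is built from compositions of exactly this kind of differentiable primitive.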
Show HN: WebExplorer – a tool for preview file in browser https://bit.ly/3ZzZk8N
Show HN: WebExplorer – a tool for preview file in browser https://bit.ly/3ODVglw February 13, 2026 at 03:10AM
Wednesday, 11 February 2026
Show HN: Double blind entropy using Drand for verifiably fair randomness https://bit.ly/461dPpO
Show HN: Double blind entropy using Drand for verifiably fair randomness The only way to get a trustless random value is to have it distributed and time-locked three ways: player, server, and future entropy. In the demo above, the moment you commit (Roll Dice), a commitment with the hash of a player secret is sent to the server; the server accepts it and sends back the hash of its own secret, plus the "future" drand round number at which the randomness will resolve. The future used in the demo is 10 seconds. When the reveal happens (after drand's particular round), all the secrets are revealed and the random number is generated from "player-seed:server-seed:drand-signature". All the verification is pure math, so it is truly trustless:
1. The player seed must match the player hash committed.
2. The server seed must match the server hash committed.
3. The drand signature is not publicly available at commit time and becomes available at reveal time (time-locked).
4. The generated random number is deterministic after the event, and unknown and unpredictable before it.
5. No party can influence the final outcome; in particular, no one gets a "last-look" advantage.
I think this should be used in all games, online lottery/gambling, and other systems that want to be fair by design, not by trust. https://bit.ly/4rP3SEv February 12, 2026 at 03:10AM
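The commit/reveal flow can be sketched with plain SHA-256 commitments. The drand signature below is a placeholder byte string: a real verifier would fetch the beacon for the agreed round and BLS-verify it, which is what makes the third input time-locked:

```python
import hashlib

def commit(seed: bytes) -> str:
    """Publish only the hash at commit time; the seed itself stays secret."""
    return hashlib.sha256(seed).hexdigest()

def resolve(player_seed, server_seed, drand_signature):
    """Deterministic outcome from 'player-seed:server-seed:drand-signature'."""
    material = player_seed + b":" + server_seed + b":" + drand_signature
    # Reduce the digest to a d6 roll for the dice demo
    return int.from_bytes(hashlib.sha256(material).digest(), "big") % 6 + 1

# Commit phase: both sides exchange hashes before the drand round exists
player_seed, server_seed = b"player-secret", b"server-secret"
player_hash, server_hash = commit(player_seed), commit(server_seed)

# Reveal phase: each seed must match its committed hash, then mix in the beacon
drand_sig = b"signature-for-round-N"  # placeholder; the real value comes from drand
assert commit(player_seed) == player_hash
assert commit(server_seed) == server_hash
roll = resolve(player_seed, server_seed, drand_sig)
```

Neither side can bias the roll after committing: changing a seed breaks its hash check, and neither can predict the drand signature before the round.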
Show HN: CodeRLM – Tree-sitter-backed code indexing for LLM agents https://bit.ly/3Mt9THH
Show HN: CodeRLM – Tree-sitter-backed code indexing for LLM agents I've been building a tool that changes how LLM coding agents explore codebases, and I wanted to share it along with some early observations. Typically, Claude Code globs directories, greps for patterns, and reads files with minimal guidance. It works much like learning to navigate a city by walking every street: you'd eventually build a mental map, but Claude never does - at least not one that persists across different contexts. The Recursive Language Models paper from Zhang, Kraska, and Khattab at MIT CSAIL introduced a cleaner framing. Instead of cramming everything into context, the model gets a searchable environment. The model can then query just for what it needs and drill deeper where needed. coderlm is my implementation of that idea for codebases. A Rust server indexes a project with tree-sitter, builds a symbol table with cross-references, and exposes an API. The agent queries for structure, symbols, implementations, callers, and grep results — getting back exactly the code it needs instead of scanning for it. The agent workflow looks like:
1. `init` — register the project, get the top-level structure
2. `structure` — drill into specific directories
3. `search` — find symbols by name across the codebase
4. `impl` — retrieve the exact source of a function or class
5. `callers` — find everything that calls a given symbol
6. `grep` — fall back to text search when you need it
This replaces the glob/grep/read cycle with index-backed lookups. The server currently supports Rust, Python, TypeScript, JavaScript, and Go for symbol parsing, though all file types show up in the tree and are searchable via grep. It ships as a Claude Code plugin with hooks that guide the agent to use indexed lookups instead of native file tools, plus a Python CLI wrapper with zero dependencies.
For anecdotal results, I ran the same prompt against a codebase to "explore and identify opportunities to clarify the existing structure". Using coderlm, Claude generated a plan in about 3 minutes. The coderlm-enabled instance found a genuine bug (duplicated code with identical names), orphaned code for cleanup, mismatched naming conventions crossing module boundaries, and overlapping vocabulary. These are all semantic issues which clearly benefit from the tree-sitter-centric approach. Using the native tools, Claude identified various file clutter in the root of the project, out-of-date references, and a migration timestamp collision. These findings are more consistent with methodical walks of the filesystem and took about 8 minutes to produce. The indexed approach did better at catching semantic issues than native tools and had the key benefit of being faster. I've spent some effort streamlining the installation process, but it isn't turnkey yet. You'll need the Rust toolchain to build the server, which runs as a separate process. Installing the plugin from a Claude marketplace is possible, but the skill isn't added to your .claude yet, so there are some manual steps before Claude can use it. Claude also shows significant resistance to using CodeRLM in exploration tasks; typically you will need to explicitly direct it to. --- Repo: github.com/JaredStewart/coderlm Paper: Recursive Language Models https://bit.ly/4rG4RXf — Zhang, Kraska, Khattab (MIT CSAIL, 2025) Inspired by: https://bit.ly/3MrxwAn https://bit.ly/4rOWKIf February 11, 2026 at 02:10PM
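The `search`/`callers` lookups can be illustrated with Python's own `ast` module. coderlm itself uses tree-sitter behind a Rust server across several languages; this is just the symbol-table idea in miniature:

```python
import ast
from collections import defaultdict

def index_symbols(source):
    """Build a tiny symbol table: where each function is defined, and who calls it.

    Returns (defs, callers): defs maps name -> line number of the definition,
    callers maps name -> set of functions that call it.
    """
    tree = ast.parse(source)
    defs, callers = {}, defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            defs[node.name] = node.lineno
            # Record every simple-name call made inside this function
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    callers[inner.func.id].add(node.name)
    return defs, callers

src = """
def helper():
    return 1

def main():
    return helper() + helper()
"""
defs, callers = index_symbols(src)
```

An agent querying this index gets "helper is defined at line N and called by main" in one lookup, instead of grepping the whole tree.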
Show HN: Agent framework that generates its own topology and evolves at runtime https://bit.ly/4ky6Omu
Show HN: Agent framework that generates its own topology and evolves at runtime Hi HN, I’m Vincent from Aden. We spent 4 years building ERP automation for construction (PO/invoice reconciliation). We had real enterprise customers but hit a technical wall: chatbots aren't for real work. Accountants don't want to chat; they want the ledger reconciled while they sleep. They want services, not tools. Existing agent frameworks (LangChain, AutoGPT) failed in production - brittle, looping, and unable to handle messy data. General Computer Use (GCU) frameworks were even worse. My reflections:

1. The "Toy App" Ceiling & GCU Trap
Most frameworks assume synchronous sessions. If the tab closes, state is lost. You can't fit 2 weeks of asynchronous business state into an ephemeral chat session. The GCU hype (agents "looking" at screens) is skeuomorphic. It’s slow (screenshots), expensive (tokens), and fragile (UI changes = crash). It mimics human constraints rather than leveraging machine speed. Real automation should be headless.

2. Inversion of Control: OODA > DAGs
Traditional DAGs are deterministic; if a step fails, the program crashes. In the AI era, the Goal is the law, not the Code. We use an OODA loop to manage stochastic behavior:
- Observe: exceptions are observations (FileNotFound = new state), not crashes.
- Orient: adjust strategy based on Memory and Traits.
- Decide: generate new code at runtime.
- Act: execute.
The topology shouldn't be hardcoded; it should emerge from the task's entropy.

3. Reliability: The "Synthetic" SLA
You can't guarantee one inference ($k=1$) is correct, but you can guarantee a System of Inference ($k=n$) converges on correctness. Reliability is now a function of compute budget. By wrapping an 80%-accurate model in a "Best-of-3" verification loop, we mathematically force the error rate down, trading latency/tokens for certainty.

4. Biology & Psychology in Code
"Hard Logic" can't solve "Soft Problems."
We map cognition to architectural primitives:
- Homeostasis: solving "perseveration" (infinite loops) via a "stress" metric. If an action fails 3x, "neuroplasticity" drops, forcing a strategy shift.
- Traits: personality as a constraint. "High Conscientiousness" increases verification; "High Risk" executes DROP TABLE without asking.
For the industry, we need engineers interested in the intersection of biology, psychology, and distributed systems to help us move beyond brittle scripts. It'd be great to have you roast my code and share feedback. Repo: https://bit.ly/4rjN60f https://bit.ly/4612RRa February 11, 2026 at 08:39PM
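The "Best-of-3" claim in point 3 is easy to make concrete. Assuming independent attempts (a strong assumption in practice, since LLM errors are often correlated), majority voting over k tries pushes the error rate below the per-attempt rate:

```python
from math import comb

def majority_error(p_err, k):
    """Probability that a majority of k independent attempts are wrong.

    Models the 'Best-of-k' verification loop under an independence assumption:
    the system errs only when more than half the attempts err.
    """
    return sum(
        comb(k, i) * p_err**i * (1 - p_err)**(k - i)
        for i in range(k // 2 + 1, k + 1)
    )

# An 80%-accurate model checked best-of-3:
# error = C(3,2)*0.2^2*0.8 + C(3,3)*0.2^3 = 0.096 + 0.008 = 0.104
e = majority_error(0.2, 3)
```

So a 20% per-attempt error rate drops to roughly 10.4% for k=3, at 3x the token cost; that is the latency/tokens-for-certainty trade the post describes.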
Tuesday, 10 February 2026
Show HN: Yan – Glitch Art Photo/Live Editor https://bit.ly/4rOHLhr
Show HN: Yan – Glitch Art Photo/Live Editor Everything evolves in digitality, and deconstructs in logic. Tired of filters that make everyone look like a glazed donut? Same. Yan is not another beauty app. It's a digital chaos engine that treats your pixels like they owe it money. We don't enhance photos — we interrogate them at the binary level until they confess their true nature.
[What We Actually Do]
• Luma Stretch: Grab your image by its light and shadow, then yeet it into oblivion. Speed lines included.
• Pixel Sort: Let gravity do art. Pixels fall, colors cascade, Instagram influencers cry.
• RGB Shift: That drunk 3D glasses effect, but on purpose. Your eyes will thank us. Or sue us.
• Block Jitter: Ctrl+Z had a nightmare. This is what it dreamed.
[Why Yan?]
Because "vintage filter #47" is not a personality. Because glitch is not a bug — it's a feature. Because sometimes the most beautiful thing you can do to a photo is break it.
Warning: Side effects may include artistic awakening, filter addiction withdrawal, and an uncontrollable urge to deconstruct everything. Your camera roll will never be boring again. https://bit.ly/46lFOkn February 11, 2026 at 05:19AM
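Pixel sorting, one of the effects listed above, is a simple algorithm at heart: sort contiguous runs of pixels whose brightness falls inside a threshold band, leaving everything else untouched. A generic sketch of the common recipe (not Yan's actual code), with pixels as (r, g, b) tuples:

```python
def pixel_sort_row(row, lo=64, hi=192):
    """Sort runs of 'mid-brightness' pixels in a row by luminance.

    Pixels outside the [lo, hi] luminance band act as run boundaries,
    which is what produces the streaky glitch-art look.
    """
    def luma(p):
        r, g, b = p
        return 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights

    out, run = [], []
    for p in row:
        if lo <= luma(p) <= hi:
            run.append(p)          # extend the current sortable run
        else:
            out.extend(sorted(run, key=luma))  # flush the run, sorted
            run = []
            out.append(p)          # boundary pixel passes through unsorted
    out.extend(sorted(run, key=luma))
    return out
```

Applying this to every row (or column, for "falling" streaks) of an image gives the classic cascade effect; the threshold band controls how much of the image melts.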
Show HN: Model Training Memory Simulator https://bit.ly/4tuFsBI
Show HN: Model Training Memory Simulator https://bit.ly/46lDeLd February 8, 2026 at 10:39AM
Show HN: I vibecoded 177 tools for my own use (CalcBin) https://bit.ly/4tMlOS3
Show HN: I vibecoded 177 tools for my own use (CalcBin) Hey HN! I've been building random tools whenever I needed them over the past few months, and now I have 177 of them. Started because I was tired of sketchy converter sites with 10 ads, so I just... made my own. Some highlights for the dev crowd:
Developer tools:
- UUID Generator (v1/v4/v7, bulk generation): https://bit.ly/4qNOvLH
- JWT Generator & Decoder: https://bit.ly/4aHg9ot
- JSON Formatter/Validator: https://bit.ly/4aJfF1a
- Cron Expression Generator (with natural language): https://bit.ly/3MeEZCZ
- Base64 Encoder/Decoder: https://bit.ly/4ra9OI2
- Regex Tester: https://bit.ly/4reyteL
- SVG Optimizer (SVGO-powered, client-side): https://bit.ly/4ttUvLP
Fun ones:
- Random Name Picker (spin wheel animation): https://bit.ly/45YUUvM
- QR Code Generator: https://bit.ly/45UvIXq
Everything runs client-side (Next.js + React), no ads, no tracking, works offline. Built it for myself but figured others might find it useful. Browse all tools: https://bit.ly/4aHy9z4 Tech: Next.js 14 App Router, TypeScript, Tailwind, Turborepo monorepo. All open to feedback! https://bit.ly/4tvp9o4 February 11, 2026 at 03:46AM
Show HN: ArtisanForge: Learn Laravel through a gamified RPG adventure https://bit.ly/4apgIls
Show HN: ArtisanForge: Learn Laravel through a gamified RPG adventure Hey HN, I built ArtisanForge, a free platform to learn PHP and Laravel through a medieval-fantasy RPG. Instead of traditional tutorials, you progress through kingdoms, solve coding exercises in a browser editor, earn XP, join guilds, and fight boss battles. Tech stack: Laravel 12, Livewire 3, Tailwind CSS, Alpine.js. Code execution runs sandboxed via php-wasm in the browser. What's in there:
- 12 courses across 11 kingdoms (PHP basics to deployment)
- 100+ interactive exercises with real-time code validation using AST analysis
- AI companion (Pip the Owlox) that uses the Socratic method – never gives direct answers
- Full gamification: XP, levels, streaks, achievements, guilds, leaderboard
- Multilingual (EN/FR/NL)
The idea came from seeing too many beginners drop off traditional courses. Wrapping concepts in quests and progression mechanics keeps motivation high without dumbing down the content. Everything is free, no paywall, no premium tier. Feedback welcome – especially from Laravel devs and educators. https://bit.ly/3O6FZtr February 8, 2026 at 08:15AM
Monday, 9 February 2026
Show HN: I built a cloud hosting for OpenClaw https://bit.ly/4r4v5CX
Show HN: I built a cloud hosting for OpenClaw Yet another OpenClaw wrapper, but I really enjoyed the techy part of this project, especially the server provisioning in the background. https://bit.ly/4a6nhe2 February 9, 2026 at 11:39PM
Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust https://bit.ly/4aGPJDq
Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust Fish is the fastest, friendliest interactive shell, but it can't run bash syntax, which has kept it niche for 20 years. Reef fixes this with a three-tier approach:
1. fish function wrappers for common keywords (export, unset, source)
2. a Rust-powered AST translator using conch-parser for structural syntax (for/do/done, if/then/fi, $())
3. a bash passthrough with env capture for everything else
251/251 bash constructs pass in the test suite. The slowest path (full bash passthrough) takes ~3ms. The binary is 1.18MB. The goal: install fish, install reef, never think about bash compatibility again. Your muscle memory, Stack Overflow commands, and tool configs all just work. https://bit.ly/3O6BYFp February 10, 2026 at 12:44AM
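The keyword-rewriting tier can be illustrated with a toy rule table. Reef's real implementation uses fish function wrappers and a Rust AST translator rather than regexes; the two rules below are just for flavor, mapping bash's `export`/`unset` onto fish's `set` builtin:

```python
import re

# Toy translation rules: bash keyword forms -> fish equivalents.
# (Reef's real first tier is fish function wrappers, not regex rewriting.)
RULES = [
    (re.compile(r"^export (\w+)=(.*)$"), r"set -gx \1 \2"),  # export VAR=val
    (re.compile(r"^unset (\w+)$"), r"set -e \1"),            # unset VAR
]

def to_fish(line):
    """Rewrite a bash line into fish syntax if a rule matches, else pass through."""
    for pattern, repl in RULES:
        if pattern.match(line):
            return pattern.sub(repl, line)
    return line  # would fall through to the AST tier or bash passthrough
```

Anything the rule table and AST translator can't handle is exactly what the third tier (running the line in real bash and capturing the environment changes) exists for.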
Sunday, 8 February 2026
Show HN: Ported the 1999 game Bugdom to the browser and added a bunch of mods https://bit.ly/4klu5b4
Show HN: Ported the 1999 game Bugdom to the browser and added a bunch of mods I think the very first video game I ever played was Bugdom by Pangea Software, which came with the original iMac. There was also a shooter called Nanosaur, but my 7-year-old heart belonged to the more peaceable Bugdom, which featured a roly-poly named Rollie McFly who had to rescue ladybugs from evil fire ants and bees. Upon seeing the port to modern systems ( https://bit.ly/4tuDWzI ), I figured it should be able to run entirely in-browser nowadays, and also that AI coding tools "should" be able to do this entire project for me. I ended up spending perhaps 20 hours on it with Claude Code, but we got there. Once ported, I added a half-dozen mods that would have pleased my childhood self (like low-gravity mode and flying slugs & caterpillars mode), and a few that please my current self (like Dance Party mode). EDIT: Here are some mod/level combinations I recommend * https://bit.ly/3Msk7Iq... * https://bit.ly/4rJLFbr... * https://bit.ly/4a49Aw6... https://bit.ly/4rvsdPe February 9, 2026 at 04:07AM