Saturday, 10 February 2024

Show HN: I made an AI powered job description generator https://bit.ly/49slBYV

Show HN: I made an AI powered job description generator https://bit.ly/49r7YcD February 10, 2024 at 08:43AM

Friday, 9 February 2024

Show HN: Mukette, a Markdown Pager for Unix-Based Systems https://bit.ly/3SSa7bI

Show HN: Mukette, a Markdown Pager for Unix-Based Systems I apologize if I am submitting this twice; I am new to these fora and just discovered Show HN. The other thread did not have "Show HN" in the title, so it did not get any traction. This is a useful little tool, so I hope people will like it. You're either like me and do all your work in the terminal emulator (if they literally replaced my displays with a VT100 I wouldn't complain, though I'd have to watch YouTube on my phone -_-), or you use X extensively (I know emulators are X too!) and only occasionally need the terminal. This can be useful for both groups. Imagine you want to read the README.md file of a repository. This happened to me when I wanted to read PackCC's README, and I had to do a pipeline from Pandoc to Philadelphia! This nifty little tool will page the markdown file. I want to improve it in the future, and I have made some headway on adding more features. I want people to be able to execute the code listings (in a safe environment) by navigating to them. It's all there in the code; I just got tired of picking at this like an old wound and released it. It's been run through ASan and Valgrind, and some errors were fixed. If anything remains that I missed, please tell me. I always initialize pointers to NULL, so it should not complain much? Anyway, thanks. https://bit.ly/42GZ7By February 10, 2024 at 01:15AM
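The core idea of a terminal markdown pager can be sketched in a few lines: translate a small subset of markdown into ANSI escape codes, then hand the styled text to a pager such as `less -R`. This is a minimal illustration in Python, not Mukette's actual implementation (which is C); the subset of markdown handled and the function names are my own.

```python
import re

BOLD, UNDERLINE, RESET = "\033[1m", "\033[4m", "\033[0m"

def render_line(line: str) -> str:
    """Render one markdown line with ANSI styling."""
    # Headings: strip the leading #'s and print bold.
    m = re.match(r"^(#{1,6})\s+(.*)$", line)
    if m:
        return BOLD + m.group(2) + RESET
    # Inline **bold** spans.
    line = re.sub(r"\*\*(.+?)\*\*", BOLD + r"\1" + RESET, line)
    # Inline `code` spans, underlined for visibility.
    line = re.sub(r"`([^`]+)`", UNDERLINE + r"\1" + RESET, line)
    return line

def render(markdown: str) -> str:
    return "\n".join(render_line(l) for l in markdown.splitlines())

if __name__ == "__main__":
    import subprocess, sys
    # less -R passes ANSI escape sequences through to the terminal.
    subprocess.run(["less", "-R"], input=render(sys.stdin.read()).encode())
```

A real pager would also handle lists, block quotes, and fenced code blocks, but the pipeline (markdown in, ANSI out, pager at the end) stays the same.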

Show HN: AutoBashCraft – a tool for automated Markdown screencast generation https://bit.ly/42BIXct

Show HN: AutoBashCraft – a tool for automated Markdown screencast generation Hello HN, Around New Year's I started a personal project called AutoBashCraft (ABC), aiming to simplify the creation of screencasts from bash code blocks in markdown files. I needed to create a lot of markdown files explaining workflows and code, and I wanted to make these text-heavy files more interesting, which got me thinking of screencasts. There are many great tools for recording terminal sessions. Then I found asciinema-rec-script ( https://bit.ly/42AZLQX ) by Chris Ottrey, which creates screencasts of bash scripts that look like a recording of a terminal session, including typing out the commands. I wanted the markdown files to be the same documents you would use as training or documentation material, so I picked HTML comments to mark the code blocks that are supposed to be executed. ABC then executes the commands and saves the screencast next to the markdown file in an asset folder, so it can be easily embedded just under the code block. I went on from there, and more commands were added: 'create' to create a file with the contents of the code block, 'browse' to create a screenshot of a website, 'spawn' to spawn a background process, 'snapshot' to create a Docker container of the current environment state, 'init' to initialize a new environment, e.g. from a snapshot created in another file, and 'config' to change configuration for the following commands. ABC is very much a work-in-progress and a proof of concept at this stage. I am also thinking of adding an editor to create an experience similar to Jupyter notebooks, with automatic snapshots between each command in dev mode. There is so much work left to be done, but for now I am back at my day job. The repository includes some examples that also work as (currently manually executed) tests, including more complex workflows like automating MBTiles generation and usage or integrating private GPT, showcasing what I hope it can become. 
Running it is as simple as using NPX, with Docker and Node.js being the only requirements. Please only run your own code or code you trust (especially with Docker activated). I later realized it also serves as a validator for the accuracy of instructional content. It happens so often that training material is missing important steps, and it is very frustrating for beginners to follow a guide step by step only to find it just won't work. I'd be incredibly grateful if you could take a moment to check it out on GitHub, give it a star if you find it interesting, and maybe even contribute or fork it. I'm looking forward to your feedback and suggestions. Let's make something great together! Links: https://bit.ly/3ul4wl5 https://bit.ly/42CEJ4A... https://bit.ly/42Q3JFD... February 9, 2024 at 11:01PM
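The marking scheme described above (an HTML comment flagging which fenced bash blocks to execute) could be parsed along these lines. This is a hedged sketch: the marker syntax `<!-- abc: exec -->` is my guess for illustration, not ABC's actual format, and the real tool is Node.js, not Python.

```python
import re

# Hypothetical marker: an HTML comment naming the command to run on the
# fenced block that follows it, e.g. <!-- abc: exec --> or <!-- abc: create -->.
MARKER = re.compile(r"<!--\s*abc:\s*(\w+)\s*-->")
FENCE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)

def marked_blocks(markdown: str) -> list[tuple[str, str]]:
    """Return (command, code) pairs for fenced blocks preceded by a marker.

    Unmarked fenced blocks are left alone, so the same file still works
    as ordinary documentation.
    """
    results = []
    for m in MARKER.finditer(markdown):
        fence = FENCE.search(markdown, m.end())
        if fence:
            results.append((m.group(1), fence.group(2)))
    return results
```

A driver would then dispatch each `(command, code)` pair: executing `exec` blocks in a container, writing `create` blocks to disk, and so on, saving any screencast artifacts next to the source file.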

Show HN: A "Comments Layer" for the Internet https://bit.ly/42BDNxc

Show HN: A "Comments Layer" for the Internet SwearBy is an iOS app that provides a "comments section" and live chat for every URL. Load SwearBy in Safari, Chrome, and many other apps (Airbnb, Spotify, Redfin, Amazon...) Just tap the "Share" button from your current page. Basically - you get a "Twitter thread" and a "Youtube Live Chat" on every URL. LMK what you think :) https://bit.ly/3P17AKj February 9, 2024 at 09:41PM

Show HN: Presentations for your webcam, not a projector https://bit.ly/3HVJpIY

Show HN: Presentations for your webcam, not a projector CueCam Presenter is my Mac app (actually a suite of Mac and iOS apps) for running better presentations on your webcam. Editing cards should feel natural to anybody used to Markdown. I came to create CueCam as an "embedded entrepreneur". I had some success with my camera app "Shoot Pro Webcam" back in 2020 and built on this by creating squares.tv. As I talked to more and more users, I discovered more opportunities to make their lives easier. I started with features in Shoot (camera options, pausing, drawing, etc.). Then I created Video Pencil (which connects to your computer and lets you draw on your webcam using your iPad). Then I created "Beat Sheet", which lets you run through "smart scripts", controlling Ecamm Live, OBS, and mimoLive. CueCam Presenter is how I'm connecting all these elements. It gives you a virtual webcam and a virtual mic, and seamlessly connects to Shoot and Video Pencil running on other devices. There are various ways you can use it as a teleprompter while maintaining eye contact. It's taken a lot to get it to this stage. The video pipeline has been through two major iterations and the audio pipeline even more. The UI has evolved and developed to cover the different ways it is understood by different people. Educational discounts are a must for me, as I want to help improve the quality of remote teaching around the world. For other professionals, I believe it transforms the way you interact with people on video calls. It's useful for recording software demos and running live streams. https://bit.ly/3w9McvN February 9, 2024 at 01:37PM

Thursday, 8 February 2024

Show HN: Querying DNS Without Trusting the Resolver in 1000 Lines of Rust https://bit.ly/499LhtH

Show HN: Querying DNS Without Trusting the Resolver in 1000 Lines of Rust https://bit.ly/485cAnG February 8, 2024 at 10:17PM

Show HN: Program will only compile if the grid on line 30 is a magic square https://bit.ly/49ubdQI

Show HN: Program will only compile if the grid on line 30 is a magic square https://bit.ly/3SSAJcH February 9, 2024 at 01:29AM

Show HN: SnapCode – a real Java IDE in the browser https://bit.ly/42zsYeT

Show HN: SnapCode – a real Java IDE in the browser https://bit.ly/3OFVcir February 8, 2024 at 02:17PM

Show HN: Audiocate – a Haskell library for combating audio deepfake misuse https://bit.ly/3uit6TF

Show HN: Audiocate – a Haskell library for combating audio deepfake misuse Audiocate is a Haskell library for audio verification and source validation that attempts to combat misuse of AI-generated audio deepfakes. It's currently just an MSc dissertation project, but I'm hoping to make it genuinely usable in the near future. https://bit.ly/3OA4wnY February 8, 2024 at 11:49AM

Wednesday, 7 February 2024

Show HN: Open-source code editor with autocomplete built-in https://bit.ly/3HTii1c

Show HN: Open-source code editor with autocomplete built-in https://bit.ly/3HUzGTr February 7, 2024 at 07:57PM

Show HN: Directory of All LLM Models (Closed and Open Source) https://bit.ly/3w9oG1M

Show HN: Directory of All LLM Models (Closed and Open Source) https://bit.ly/496agy9 February 8, 2024 at 12:40AM

Show HN: Bluesky Hacker News Bot https://bit.ly/3w41BO1

Show HN: Bluesky Hacker News Bot Hello there! After Bluesky opened its doors to everyone, I jumped straight into the API to build something. Here is a bot that posts top stories from HN. https://bit.ly/42D42Dp February 7, 2024 at 11:00AM
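The HN half of such a bot fits in a few lines against the public Hacker News Firebase API (which is real and documented); the Bluesky posting step is omitted here, since it needs an authenticated atproto session. The `format_post` helper and the 300-character cap are illustrative choices, not the bot's actual code.

```python
import json
import urllib.request

HN_API = "https://hacker-news.firebaseio.com/v0"

def fetch_json(path: str):
    """Fetch one endpoint of the public HN Firebase API."""
    with urllib.request.urlopen(f"{HN_API}/{path}.json") as resp:
        return json.load(resp)

def format_post(item: dict, limit: int = 300) -> str:
    """Format an HN item as a short post, respecting a character limit."""
    text = f"{item['title']}\n{item.get('url', '')}".strip()
    return text[:limit]

def top_stories(n: int = 5) -> list[dict]:
    ids = fetch_json("topstories")[:n]
    return [fetch_json(f"item/{i}") for i in ids]

if __name__ == "__main__":
    for story in top_stories():
        print(format_post(story), "\n---")
```

From there, a real bot would keep a record of already-posted story IDs and push each formatted post through the Bluesky API on a timer.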

Show HN: DynamiCrafter: Animating Open-Domain Images with Video Diffusion Priors https://bit.ly/3SyWnRM

Show HN: DynamiCrafter: Animating Open-Domain Images with Video Diffusion Priors Hello HN! We have released a major update of our image-to-video diffusion model, DynamiCrafter, with better dynamics, higher resolution, and stronger coherence. DynamiCrafter can animate open-domain still images based on a text prompt by leveraging pre-trained video diffusion priors. Please check our project page and paper for more information. We will continue to improve the model's performance. Comparisons with Stable Video Diffusion and PikaLabs can be found at https://www.youtube.com/watch?v=0NfmIsNAg-g Online demo: https://bit.ly/3uzEnie Our project page: https://bit.ly/3OBc7Tc arXiv link: https://bit.ly/4bvjxRI https://bit.ly/485S7iI February 7, 2024 at 08:12AM

Show HN: I made a local wrapper for Automatic 1111 https://bit.ly/3SrGZa5

Show HN: I made a local wrapper for Automatic 1111 I made an open-source Python library for the Stable Diffusion Web UI. It's a direct alternative to Hugging Face Diffusers, except it has more features and runs the same scripts as A1111, so the results are replicable. Please give it a star on GitHub! https://bit.ly/3SxSaOk February 7, 2024 at 06:55AM

Tuesday, 6 February 2024

Show HN: Kirby-like platformer game made in TypeScript https://bit.ly/49sHasB

Show HN: Kirby-like platformer game made in TypeScript https://bit.ly/49fCVjY February 5, 2024 at 10:45PM

Monday, 5 February 2024

Show HN: How we got fine-tuning Mistral-7B to not suck https://bit.ly/3SPTopL

Show HN: How we got fine-tuning Mistral-7B to not suck https://bit.ly/42vSmCl February 6, 2024 at 08:12AM

Show HN: CPU Prices on eBay https://bit.ly/49lQXR9

Show HN: CPU Prices on eBay Tech stack: Go + templ + htmx. There are some rough edges, but this combo is quite refreshing after React. The best thing is that I could omit npm from my stack. Having just a monolithic Go server greatly simplifies things if you're an indie dev. https://bit.ly/49p5cVm February 5, 2024 at 04:43PM

Sunday, 4 February 2024

Show HN: An opinionated TS package build toolchain with typed configuration https://bit.ly/3SIZsAi

Show HN: An opinionated TS package build toolchain with typed configuration https://bit.ly/3SK3TdX February 5, 2024 at 02:47AM

Show HN: ReadToMe (iOS) turns paper books into audio https://bit.ly/492kzDf

Show HN: ReadToMe (iOS) turns paper books into audio I'm launching something that started as a side project publicly today: ReadToMe, which is an iPhone app that turns paper books and other printed text into audio. Originally this was a Christmas present for my fiancée, who loves books but has an eye problem that makes it hard for her to read more than a few pages at a time. She mostly listens to audiobooks while following along with the paper book, but some books aren't available in audiobook or even e-book form, and all of the existing apps we tried were surprisingly bad at scanning paper books into audio — they make lots of mistakes, include footnotes and page numbers, etc., in a way that really degrades the experience. Being an AI-oriented engineer by training, I had a crack at solving the problem myself, and was pleasantly surprised at how well the proof of concept worked. I then had some time free while shutting down my previous company (Mezli, YC W21), during which I polished up the app to the point you see it at now. The way it works: On the front end, it's a SwiftUI app (mostly written by ChatGPT!) that consists mostly of a document scanner (VNDocumentCameraViewController) and a custom-built audio player. The back end is more complex — book photos are first sent to an OCR API, then some custom code I wrote does a first pass at stitching together and correcting the results. Then, the corrected OCR results are sent to GPT-3.5-turbo for further post-processing and re-stitching together, and finally to a text-to-speech API for conversion to audio. The hardest part of this process was actually getting the GPT calls right — I ended up writing a custom LLM eval framework for making sure the LLM wasn't making edits relative to the true text of the book. 
A few issues remain, which I'll work on fixing if the app gets a significant amount of traction, including: 1) It can take multiple minutes to get audio back from a scan, especially if it's on the longer side (10+ pages). I'll be able to bring this down by spinning up dedicated servers for the OCR and TTS back-end. 2) The LLM sometimes does TOO good of a job at correcting "mistakes" in book text. This issue crops up particularly often when an author deliberately uses improper grammar, e.g. in dialogue. The app is priced at $9.99/month for up to 250 pages/month right now, which I estimate will just about cover the costs of API calls. I'll be bringing the price point down as the pricing of the required AI APIs comes down. There's also a 3-day free trial if you want to try it out. If you do find this useful, or know somebody who might, I'd appreciate you giving it a try or letting them know! And please let me know if you have any feedback, including issues or feature requests. https://bit.ly/484Yd2A February 5, 2024 at 12:56AM
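The first-pass cleanup step the post describes, between OCR and the LLM, can be sketched as follows. The heuristics here (dropping standalone page-number lines, rejoining words hyphenated across line breaks, collapsing intra-paragraph line breaks) are my own illustration of the idea, not ReadToMe's actual code.

```python
import re

# A bare page number sitting on its own line, e.g. "42".
PAGE_NUMBER = re.compile(r"^\s*\d{1,4}\s*$")

def clean_ocr_page(raw: str) -> str:
    """First-pass cleanup of one page of OCR output."""
    lines = [l for l in raw.splitlines() if not PAGE_NUMBER.match(l)]
    text = "\n".join(lines)
    # Rejoin words split across lines: "stor-\nmy" -> "stormy".
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", text)
    # Collapse single line breaks within a paragraph into spaces,
    # keeping blank lines as paragraph boundaries.
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)
    return text.strip()

def stitch_pages(pages: list[str]) -> str:
    """Concatenate cleaned pages into one passage for the LLM pass."""
    return "\n\n".join(clean_ocr_page(p) for p in pages)
```

In the pipeline the post describes, the output of a stitching pass like this would then go to GPT-3.5-turbo for further correction, and finally to a text-to-speech API.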

Show HN: Letlang, written in Rust, targeting Rust, now has a specification https://bit.ly/48joCKt

Show HN: Letlang, written in Rust, targeting Rust, now has a specification https://bit.ly/3ubLjSR February 4, 2024 at 02:17PM