Tuesday 2 February 2021

Show HN: Incidents.sh – A modern “days since last incident” timer https://bit.ly/3cxFTGd

Show HN: Incidents.sh – A modern “days since last incident” timer https://bit.ly/36R3m1N February 2, 2021 at 11:50AM

Show HN: PrivaNote – offline first, end-to-end encrypted notes https://bit.ly/3awFnpE

Show HN: PrivaNote – offline first, end-to-end encrypted notes https://bit.ly/36xazna February 2, 2021 at 09:21AM

Show HN: Swap Faces with Emojis (With AI) https://bit.ly/3cFvXKO

Show HN: Swap Faces with Emojis (With AI) https://bit.ly/39CGqos February 2, 2021 at 09:19AM

Show HN: An Index for Data Sets on Ethereum https://bit.ly/2YCK2k4

Show HN: An Index for Data Sets on Ethereum https://bit.ly/2YzztOC February 2, 2021 at 09:14AM

Monday 1 February 2021

Show HN: Time tracking with plain text files https://bit.ly/2LdWALQ

Show HN: Time tracking with plain text files https://bit.ly/3oCEzVa February 2, 2021 at 12:59AM

Show HN: I wrote a rust program to translate images into textual line art https://bit.ly/3thfv9w

Show HN: I wrote a rust program to translate images into textual line art https://bit.ly/39Bxi3E January 31, 2021 at 08:19AM

Show HN: Blogline – A blogging app with privacy in mind. Should I bother? https://bit.ly/3oIX5eh

Show HN: Blogline – A blogging app with privacy in mind. Should I bother? https://bit.ly/3an4lre February 2, 2021 at 12:29AM

Show HN: Gistfinder. CLI tool to fuzzy-search your gists from the terminal https://bit.ly/3pAVmsS

Show HN: Gistfinder. CLI tool to fuzzy-search your gists from the terminal https://bit.ly/3asMCyL February 2, 2021 at 12:27AM

Show HN: Only Sheets, Sell Access to Any Google Sheet https://bit.ly/39Chai5

Show HN: Only Sheets, Sell Access to Any Google Sheet https://bit.ly/2MhkJ4N February 1, 2021 at 11:59PM

Show HN: LogoMor: a 3D Logo interpreter and visualizer https://bit.ly/3rasJmK

Show HN: LogoMor: a 3D Logo interpreter and visualizer https://bit.ly/3ao8xqT February 1, 2021 at 11:49PM

Show HN: ZeroAndy – Watch clips from small Twitch streamers https://bit.ly/3aEcmsb

Show HN: ZeroAndy – Watch clips from small Twitch streamers https://bit.ly/3thM9Ih February 1, 2021 at 11:02PM

Show HN: ASL Classifier Built with CoreML and Roboflow https://bit.ly/3r8H1Ev

Show HN: ASL Classifier Built with CoreML and Roboflow https://bit.ly/3pDxg0M February 1, 2021 at 08:56PM

Show HN: Webhook delivery as a Service (built by ex-YC founder/Stripe engineer) https://bit.ly/3aDPXLB

Show HN: Webhook delivery as a Service (built by ex-YC founder/Stripe engineer) https://bit.ly/3cuE64X February 1, 2021 at 08:47PM

Show HN: Pipupgrade (Like yarn upgrade, but for pip) https://bit.ly/3pCZGb2

Show HN: Pipupgrade (Like yarn upgrade, but for pip) https://bit.ly/36wEoo8 February 1, 2021 at 08:15PM

Show HN: Sierpiński and Other Kronecker Graphs with the GraphBLAS https://bit.ly/3pI2kML

Show HN: Sierpiński and Other Kronecker Graphs with the GraphBLAS https://bit.ly/2L8NY97 February 1, 2021 at 07:17PM

Launch HN: Opstrace (YC S19) – open-source Datadog https://bit.ly/3pG5hxc

Launch HN: Opstrace (YC S19) – open-source Datadog

Hi HN! Seb here, with my co-founder Mat. We are building an open-source observability platform aimed at the end user. We assemble what we consider the best open source APIs and interfaces, such as Prometheus and Grafana, but make them as easy to use and featureful as Datadog, with, for example, TLS and authentication by default. It's scalable (horizontally and vertically) and upgradable without a team of experts. Check it out here: https://bit.ly/3tgkuYe & https://bit.ly/39zlTRL

About us: I co-founded dotCloud, which became Docker, and was also an early employee at Cloudflare, where I built their monitoring system back when there was no Prometheus (I had to use OpenTSDB :-). I have since been told it's all been replaced with modern stuff—thankfully! Mat and I met at Mesosphere where, after building DC/OS, we led the teams that would eventually transition the company to Kubernetes.

In 2019, I was at RedHat and Mat was still at Mesosphere. A few months after IBM announced it was purchasing RedHat, Mat and I started brainstorming problems that we could solve in the infrastructure space. We started interviewing a lot of companies, always asking them the same questions: "How do you build and test your code? How do you deploy? What technologies do you use? How do you monitor your system? Logs? Outages?"

A clear set of common problems emerged. Companies that used external vendors—such as CloudWatch, Datadog, SignalFX—grew to a certain size where cost became unpredictable and wildly excessive. As a result (one of many downsides we would come to uncover), they monitored less (i.e. just error logs, no real metrics/logs in staging/dev, and turning metrics off in prod to reduce cost). Companies going the opposite route—choosing to build in-house with open source software—had different problems. Building their stack took time away from their product development and resulted in poorly maintained, complicated messes. Those companies are usually tempted to go to SaaS, but at their scale the cost is often prohibitive.

It seemed crazy to us that we are still stuck in this world where we have to choose between these two paths. As infrastructure engineers, we take pride in building good software for other engineers. So we started Opstrace to fix it.

Opstrace started with a few core principles: (1) The customer should always own their data; Opstrace runs entirely in your cloud account and your data never leaves your network. (2) We don't want to be a storage vendor—that is, we won't bill customers by data volume, because this creates the wrong incentives for us. (AWS and GCP are already pretty good at storage.) (3) Transparency and predictability of costs—you pay your cloud provider for the storage/network/compute for running Opstrace and can take advantage of any credits/discounts you negotiate with them. We are incentivized to help you understand exactly where you are spending money, because you pay us for the value you get from our product with per-user pricing. (For more about costs, see our recent blog post here: https://bit.ly/3rcNWfI ). (4) It should be REAL Open Source with the Apache License, Version 2.0.

To get started, you install Opstrace into your AWS or GCP account with one command: `opstrace create`. This installs Opstrace in your account, creates a domain name, and sets up authentication for you for free. Once logged in, you can create tenants that each contain APIs for Prometheus, Fluentd/Loki, and more. Each tenant has a Grafana instance you can use. A tenant can be used to logically separate domains—for example, things like prod, test, staging, or teams. Whatever you prefer.

At the heart of Opstrace runs a Cortex ( https://bit.ly/36xE0pl ) cluster to provide the above-mentioned scalable Prometheus API, and a Loki ( https://bit.ly/3arXywz ) cluster for the logs. We front those with authenticated endpoints (all public in our repo). All the data ends up stored only in S3, thanks to the amazing work of the developers on those projects.

An "open source Datadog" requires more than just metrics and logs. We are actively working on a new UI for managing, querying, and visualizing your data, plus many more features, like automatic ingestion of logs/metrics from cloud services (CloudWatch/Stackdriver), Datadog-compatible API endpoints to ease migrations and side-by-side comparisons, and synthetics (e.g. Pingdom). You can follow along on our public roadmap: https://bit.ly/39Dc5q3 .

We will always be open source, and we make money by charging a per-user subscription for our commercial version, which will contain fine-grained authz, bring-your-own OIDC, and custom domains.

Check out our repo ( https://bit.ly/39zlTRL ) and give it a spin ( https://bit.ly/3j47iRy ). We'd love to hear your perspective. What are your experiences related to the problems discussed here? Are you all happy with the tools you're using today? February 1, 2021 at 07:16PM
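To make the tenant-endpoint idea above a bit more concrete, here is a minimal sketch in Python of pushing one log line into a tenant's logs API. The endpoint URL and the bearer-token header are assumptions for illustration only (substitute whatever your own Opstrace installation reports after `opstrace create`); the request body follows the standard Loki push API, which is the logs interface the post says each tenant exposes.

```python
# Minimal sketch: send one log line to a hypothetical Opstrace tenant's
# Loki-compatible push endpoint. URL and auth scheme are assumptions.
import time
import requests

# Hypothetical tenant endpoint and API token -- replace with the values
# reported by your own Opstrace installation.
LOKI_PUSH_URL = "https://loki.dev.example.opstrace.io/loki/api/v1/push"
API_TOKEN = "REPLACE_WITH_TENANT_TOKEN"

payload = {
    "streams": [
        {
            # Labels identifying the log stream, in Loki's label format.
            "stream": {"app": "checkout", "env": "dev"},
            # Each value is [<unix time in nanoseconds, as a string>, <log line>].
            "values": [[str(time.time_ns()), "user signup completed"]],
        }
    ]
}

resp = requests.post(
    LOKI_PUSH_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()  # Loki returns 204 No Content on success.
```

Metrics would flow in analogously through the tenant's Prometheus-compatible (Cortex) API, typically by pointing an existing Prometheus server's remote_write at the tenant endpoint.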

Show HN: Provose is the easiest way to deploy to AWS (using Terraform) https://bit.ly/2NVfckK

Show HN: Provose is the easiest way to deploy to AWS (using Terraform) https://bit.ly/2NNvk7G February 1, 2021 at 06:28PM

Show HN: I made a vegan recipe aggregator to learn GO and GCP https://bit.ly/3cv7Cr3

Show HN: I made a vegan recipe aggregator to learn GO and GCP https://bit.ly/3pECNUK February 1, 2021 at 05:49PM

Show HN: I'm Rich and I can prove it https://bit.ly/39A1wnB

Show HN: I'm Rich and I can prove it https://bit.ly/3tbPyZf February 1, 2021 at 05:03PM

Launch HN: Albedo (YC W21) – Highest resolution satellite imagery https://bit.ly/2NR3xDn

Launch HN: Albedo (YC W21) – Highest resolution satellite imagery

Hey HN! I'm Topher, here with Winston and AJ, and we're the co-founders of Albedo ( https://bit.ly/3j8KGz6 ). We're building satellites that will capture both visible and thermal imagery - at a resolution 9x higher than what is available today (see comparison: https://bit.ly/3pE3cSC ).

My technical background is primarily in optics/imaging science related to remote sensing. I previously worked for Lockheed Martin, where I met AJ, who is an expert in satellite architecture and systems engineering. We've spent most of our careers working on classified space systems, and while the missions we were involved with are super cool, that world is slower to adopt the latest new space technologies. We started Albedo to create a new type of satellite architecture that captures high resolution imagery at a fraction of the cost historically required. Winston was previously a software engineer at Facebook, where he frequently used satellite imagery and realized the huge potential of higher resolution datasets.

While the use cases for satellite imagery are endless, adoption has been underwhelming - even for obvious and larger applications like agriculture, insurance, energy, and mapping. The main limitations that have prevented widespread use are high cost, inaccessibility, and low resolution. Today, buying commercial satellite imagery involves a back-and-forth with a salesperson in a sometimes months-long process, with high prices that exclude all but the biggest companies. This needs to be simplified with transparent, commodity pricing and an automated process, where all you need to buy imagery is a credit card. On the accessibility front, it's surprising how few providers have nailed down a streamlined, fully cloud-based delivery mechanism. While working at Facebook, Winston sometimes dealt with imagery delivered through FTP servers or physical hard drives. Another thing users are looking for is more transparency when tasking a new satellite image, such as an immediate assessment of when it will be collected. These are all problems we are working on solving at Albedo.

On the space side, we're able to achieve substantial cost savings by taking advantage of emerging space technologies, two of which are electric propulsion and on-orbit refueling. Our satellites will fly super close to the earth, essentially in the atmosphere, enabling 10cm resolution without having to build a school-bus-sized satellite. Electric propulsion makes the fuel on our satellites way more efficient, at the expense of low thrust. Think of it like your car's gas mileage going from 30 to 300 mpg, but you can only drive 5 mph. Our propulsion only needs to maintain a steady offset to the atmospheric drag, so low thrust and high efficiency are a perfect fit. By the time our first few satellites run out of fuel, on-orbit refueling will be a reality, and we can just refill our tanks. We're still in the architecture and design phase, but we expect to have our first few satellites flying in 2024 and the full constellation up in 2027.

The current climate crisis requires a diverse set of sensors in space to support emissions monitoring, ESG initiatives/investments, and infrastructure sustainability. Thermal sensors are a key component of this, and very few are currently in orbit. Since our satellites are larger than normal, they are uniquely suited to capture the long wavelengths of thermal energy at a resolution of 2 meters. We'll also be taking advantage of advances in microbolometer technology to eliminate the crazy cooling requirements that have made thermal satellites so expensive in the past. The current state of the art for thermal resolution is 70 meters, which is only marginally useful for most applications.

We're aiming to be a pure data provider (i.e. not doing analytics). We think the best way to facilitate overall market growth is to do one thing incredibly well: sell imagery better, cheaper, and faster than what users have available today. While this allows us to be vertical-agnostic, some of our better-suited applications include crop health monitoring, pipeline inspection, property insurance underwriting/weather damage evaluation, and wildfire/vegetation management around power lines. By making high-res imagery a commodity, we are also betting on it unlocking new applications in a similar fashion to GPS (e.g. Tinder, Pokemon Go, and Uber).

One last thing - new remote sensing regulations were released by NOAA last May, removing the previous limit on resolution. So between the technology side and the regulatory side, the timing is kind of perfect for us. All thoughts and questions are appreciated - and we'd love to hear if you know of any companies that could benefit from our imagery. Thanks for reading! February 1, 2021 at 03:45PM
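A rough back-of-the-envelope sketch of why flying lower relaxes the optics: the diffraction-limited ground sample distance at nadir scales roughly as GSD ~ 1.22 * wavelength * altitude / aperture, so the aperture needed for a fixed resolution shrinks in proportion to altitude. The altitudes below, and the simplification that diffraction is the only limit, are illustrative assumptions and not Albedo's actual design figures.

```python
# Back-of-the-envelope sketch: aperture diameter needed for a fixed ground
# sample distance (GSD) at different orbit altitudes, assuming the optic is
# diffraction-limited. Altitudes are illustrative, not Albedo's design values.

WAVELENGTH_M = 550e-9  # visible light, ~550 nm
TARGET_GSD_M = 0.10    # the 10 cm resolution mentioned in the post

def required_aperture(altitude_m: float, gsd_m: float = TARGET_GSD_M) -> float:
    """Approximate aperture diameter (m) for a given nadir GSD at this altitude."""
    return 1.22 * WAVELENGTH_M * altitude_m / gsd_m

for altitude_km in (600, 450, 300, 250):
    d = required_aperture(altitude_km * 1e3)
    print(f"{altitude_km:4d} km altitude -> ~{d:.1f} m aperture for 10 cm GSD")
```

Halving the orbit altitude halves the aperture needed for the same ground resolution, which is the core of the "no school-bus-sized satellite" argument above.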