Thursday 1 June 2023

Show HN: Secure ChatGPT – a safer way to interact with generative AI https://bit.ly/3ON6an5

Hi HN, I'm the founder of Pangea. We've built a developer platform that lets you add security to your code through a simple set of APIs: authentication, secrets management, audit logging, PII redaction, embargoed-country restrictions, known-threat-actor intelligence, and more.

With the ChatGPT and LLM explosion, we looked for ways to reduce the risk of both the inputs to and the outputs from these services. Our Next.js sample app adds a security layer on top of ChatGPT with various security services that you can implement quickly. It's essentially a deployable front end to the OpenAI API that does a few security-related things:

- AuthN - provides authentication so you can track who inputs what and when
- Redact - provides PII redaction, with detection of over 40 different types of sensitive information
- Secure Audit Log - logs the user, cleansed prompt, and model to a secure, tamper-proof audit trail
- Sends the cleansed prompt to the OpenAI API and receives the response
- Domain Intel - performs a domain-reputation lookup on any domain names in the response
- URL Intel - performs a URL-reputation lookup on any URLs in the response
- Defangs any malicious domains or URLs found in the response
- On closing your session, the history of prompts disappears

Storing what users have prompted allows you to better train your model, feed it more relevant information, and keep an audit log of the history. The Secure Audit Log service stores user inputs in a secure log so that you can track who did what and when. The final layer of defense uses the Domain Intel and URL Intel services to detect and neutralize malicious URLs and domain names in the OpenAI API's response.

The proof-of-concept app is open source on GitHub. Visit our repo https://bit.ly/42lScMd and deploy the app with a simple NPX command. We'd love your feedback.

-Oliver

June 1, 2023 at 06:49PM
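The pipeline described above (redact the prompt, write a tamper-evident log entry, call the model, then defang anything malicious found in the response) can be sketched roughly as follows. This is a minimal illustration, not the Pangea SDK: `redact`, `appendAuditEntry`, and `defang` are hypothetical stand-ins, the email regex stands in for the 40+ PII detectors, and real defanging would typically rewrite only the hostname.

```typescript
// Hypothetical sketch of the security layer's flow; none of these
// function names are the actual Pangea API.
import { createHash } from "crypto";

// Redact: mask PII before the prompt leaves your infrastructure.
// A single email pattern stands in for the full detector set.
function redact(prompt: string): string {
  return prompt.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, "<EMAIL>");
}

// Secure Audit Log: each entry hashes the previous entry's hash,
// so tampering with any record invalidates every later hash.
interface AuditEntry {
  user: string;
  prompt: string; // cleansed prompt, post-redaction
  model: string;
  prevHash: string;
  hash: string;
}

function appendAuditEntry(
  log: AuditEntry[],
  user: string,
  prompt: string,
  model: string
): AuditEntry[] {
  const prevHash = log.length ? log[log.length - 1].hash : "0".repeat(64);
  const hash = createHash("sha256")
    .update(prevHash + user + prompt + model)
    .digest("hex");
  return [...log, { user, prompt, model, prevHash, hash }];
}

// Defang: neutralize a URL flagged as malicious so it is no longer
// clickable ("http" -> "hxxp", "." -> "[.]").
function defang(url: string): string {
  return url
    .replace(/^https?/, (s) => s.replace("tt", "xx"))
    .replace(/\./g, "[.]");
}
```

The hash chain is one simple way to get the "tamper-proof" property the post mentions: verifying the log is just re-walking the chain, since editing any entry breaks every subsequent `prevHash` link.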
