Tuesday, 6 December 2022

Show HN: I designed a ChatGPT prompt evaluator to ruin your fun;) https://bit.ly/3BeNRji

Show HN: I designed a ChatGPT prompt evaluator to ruin your fun;) Today I designed a method to prevent users from jailbreaking ChatGPT (for instance, users have generated instructions to produce weapons or illegal drugs, commit a burglary, kill oneself, take over the world as an evil superintelligence, or create a virtual machine which they then can use). The OpenAI team appears to be countering these primarily using prompt engineering or fine-tuning on the ChatGPT model. The idea is to use a second and fully separate, fine-tuned LLM to evaluate prompts before sending them to ChatGPT. You can test this by inserting your successful ChatGPT jailbreaks. Break it for me if you dare! I look forward to seeing your results! https://bit.ly/3Bg7c3p December 6, 2022 at 06:46PM

No comments:

Post a Comment