@LLMSherpa
Novel jailbreak discovered. Not only does OpenAi putting your name in the system prompt impact the way GPT responds, but it also opens the model up to a prompt INSERTION. Not injection. You can insert a trigger into the actual system prompt, which makes it nigh indefensible. https://t.co/Rl4U1MOuZV