Known limitations

Vulnerability to adversarial prompting

We expect users to test the limits of what Brackly AI can do and to attempt to break its protections, including trying to get it to divulge its training data or other information, or to work around its safety mechanisms. We have tested Brackly AI rigorously and continue to do so, but we know users will find unique, complex ways to stress test it further. This is an important part of refining the Brackly AI model, especially in these early days: we look forward to learning about the new prompts users come up with and, in turn, to developing methods that prevent Brackly AI from outputting problematic or sensitive information. Although we have sought to address and reduce risks proactively, Brackly AI, like all LLM-based experiences, will still make mistakes. For now, users must be 18 years old or older to try it.