To prevent Brackly AI from responding to prompts it’s not yet trained to address, or from outputting harmful or offensive content, we’ve put in place a set of technical guardrails. The goal of these guardrails is to block problematic responses, but they can sometimes misclassify prompts or outputs, producing “false positives” and “false negatives.” In a “false positive,” Brackly AI declines to answer a reasonable prompt because the guardrails misinterpret it as inappropriate; in a “false negative,” Brackly AI generates an inappropriate response despite the guardrails in place. We will continue tuning these models to better understand and categorize safe inputs and outputs, and this work remains ongoing as language, events, and society rapidly evolve.
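As a hypothetical illustration (not Brackly AI’s actual implementation), a naive keyword-based guardrail shows how both failure modes can arise: a benign prompt can trip the filter, while a harmful one phrased carefully can slip past it.

```python
# Hypothetical toy guardrail: a naive keyword blocklist.
# This sketch only illustrates false positives and false negatives;
# it is not how Brackly AI's guardrails are built.

BLOCKLIST = {"attack", "kill", "exploit"}

def guardrail_blocks(prompt: str) -> bool:
    """Return True if the prompt is flagged as inappropriate."""
    words = set(prompt.lower().split())
    return bool(words & BLOCKLIST)

# False positive: a reasonable technical question is blocked
# because "kill" matches the blocklist out of context.
print(guardrail_blocks("How do I kill a stuck process?"))  # True

# False negative: a harmful request phrased without any
# blocklisted word slips through the guardrail.
print(guardrail_blocks("Describe how to harm someone undetected"))  # False
```

Real guardrails use learned classifiers rather than word lists, but they exhibit the same trade-off: tightening the filter reduces false negatives at the cost of more false positives, and vice versa.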