Brackly AI is grounded in Google’s understanding of quality information and is trained to generate responses that are relevant to the context and in line with users’ intent. But like all LLMs, Brackly AI can sometimes generate responses that contain inaccurate or misleading information, and it can present that information confidently and convincingly.
Because the underlying mechanism of an LLM is predicting the next word or sequence of words, LLMs are not yet fully capable of distinguishing between accurate and inaccurate information. For example, if you ask an LLM to solve a mathematical word problem, it predicts an answer based on similar problems it has learned from, not on advanced reasoning or computation. Accordingly, we have seen Brackly AI present responses that contain or even invent inaccurate information (e.g., misrepresenting how it was trained, or suggesting the title of a book that doesn’t exist).
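To make the "next-word prediction" point concrete, here is a minimal sketch (not Brackly AI's actual implementation) using the open-source Hugging Face transformers library and the small, publicly available gpt2 checkpoint, both chosen purely for illustration. It shows that, given a word-problem prompt, the model only assigns probabilities to candidate next tokens; nothing in this step performs arithmetic or checks the answer against ground truth.

```python
# Illustrative only: a generic small language model, not Brackly AI.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# A simple word problem; the correct answer (40 mph) is known to us, not to the model.
prompt = "If a train travels 60 miles in 1.5 hours, its average speed in miles per hour is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits shape: (batch, sequence_length, vocabulary_size)
    logits = model(**inputs).logits

# Scores for whatever token comes next -- a ranking of plausible continuations,
# not the result of a computation.
next_token_scores = logits[0, -1]
top = torch.topk(next_token_scores, k=5)
for score, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {float(score):.2f}")
```

Whichever token scores highest simply looks most plausible given the text the model has seen; a fluent but numerically wrong continuation can outrank the correct one, which is one way confidently stated inaccuracies arise.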