Prepare for the Salesforce AI Specialist Exam with our comprehensive flashcards and multiple-choice questions. Each question includes detailed hints and explanations. Ace your exam with confidence!

Each practice test/flash card set has 50 randomly selected questions from a bank of over 500. You'll get a new set of questions each time!



What system policies are designed to limit AI hallucinations and reduce the risk of unintended outputs?

  1. Prompt Defense

  2. Data Masking

  3. Call Explorer

  4. Einstein Service Replies

The correct answer is: Prompt Defense

"Prompt Defense" is the correct answer because it is the set of system policies within the Einstein Trust Layer designed specifically to limit AI hallucinations and reduce the risk of unintended or harmful outputs. Prompt Defense works by building safeguards into the prompts sent to the large language model: system-level instructions and guardrails constrain how the model interprets requests and what it is allowed to generate. By shaping and filtering these prompts, the system reduces the chance of fabricated or misleading responses and keeps outputs more consistent, relevant, and reliable.

The other options do not address hallucination risk. Data Masking protects sensitive data by obscuring it before it reaches the model; it safeguards privacy rather than output accuracy. Call Explorer supports searching and exploring call data, which is unrelated to managing the credibility of AI-generated answers. Einstein Service Replies automates customer support responses but is a feature that consumes AI output rather than a policy that guards against hallucinations. Prompt Defense is therefore the targeted policy aimed at making AI outputs more accurate and trustworthy.