Aligned
Help the World -- automatically share LLM misbehaviors with the community! Aligned is the global platform for alignment.
Aligned is a plugin that lets users record and share mistakes, biases, and hallucinations generated by AI models. When a user is dissatisfied with a response, they can initiate a report that identifies and describes the larger pattern behind the error. The goal is to convey the underlying challenge the model faces rather than focus on specific instances. The plugin aims to contribute to the development of safe and democratic AI technologies worldwide. Join Aligned and help improve AI by sharing your observations!
How to
Learn how to use Aligned effectively! Below are a few example prompts, tips, and documentation for the available commands.
Example prompts
- Prompt 1: "I want to report a mistake in the AI response."
- Prompt 2: "The AI generated a biased answer, can you help me report it?"
- Prompt 3: "I encountered a hallucination in the AI-generated text, how can I report it?"
Features and commands
| Feature/Command | Description |
|---|---|
| createReport | Starts the reporting process for a model hallucination or bias. It requires the user message (trigger prompt), the AI response to report (bad response), the title of the issue, a broad description of the issue without specific context/words, a list of related topics, and the model slug of the AI model used. After the report is initiated, the user is given a URL to visit and complete the report. |
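To make the expected inputs concrete, here is a minimal sketch of a `createReport` submission. The field names, endpoint URL, and response shape are illustrative assumptions based only on the description above, not the plugin's actual schema; in practice the plugin itself gathers these values and returns the completion URL.

```python
import requests

# Hypothetical payload for createReport; field names are assumptions
# derived from the command description, not the plugin's real schema.
report = {
    "trigger_prompt": "Summarize the plot of a novel that does not exist.",  # user message
    "bad_response": "Certainly! 'The Glass Meridian' (1972) tells the story of...",  # AI response to report
    "title": "Model invents details for nonexistent books",
    # Broad description of the underlying pattern, without quoting the specific exchange.
    "description": "When asked about unverifiable works, the model confidently fabricates authors, dates, and plots.",
    "topics": ["hallucination", "fabricated citations", "literature"],
    "model_slug": "gpt-4",
}

# Hypothetical endpoint for illustration only; the real plugin handles
# submission and replies with a URL where the user completes the report.
resp = requests.post("https://example-aligned-api.invalid/createReport", json=report, timeout=10)
print(resp.json().get("report_url"))
```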