AI Use & Transparency Policy

Last updated: December 18, 2024

1. Our Philosophy

ValueSignal is built on trust and transparency. We believe you should understand how your data is used, how signals are generated, and what they represent. This policy explains our approach to AI, data processing, and signal generation in plain language.

2. What Are Skill Signals?

2.1 Signals Are Informational, Not Decisions

Skill signals are AI-generated representations of your capabilities, problem-solving approaches, and demonstrated competencies, based on your conversation data. They are:

  • Informational — designed to provide insights, not make decisions
  • Interpretable — meant to be reviewed and understood by humans
  • Evidence-based — derived from your actual AI conversations and work
  • Reviewable — you can see the evidence behind every signal

Skill signals are not:

  • Automated employment or hiring decisions
  • Definitive assessments of your abilities
  • Guarantees of job fit or career success
  • Substitutes for human judgment

3. How Signals Are Generated

3.1 Analysis Process

When you capture an AI conversation, our system:

  1. Analyzes the conversation for patterns, problem-solving approaches, and demonstrated capabilities
  2. Extracts relevant evidence (prompts, responses, iterations, reasoning)
  3. Generates scores across multiple dimensions (e.g., problem-solving, technical depth, creativity, communication)
  4. Creates a signal that represents these capabilities
  5. Makes the signal available for your review before it becomes "live"

3.2 Human-in-the-Loop Philosophy

We believe AI should augment human judgment, not replace it. That's why:

  • You review every signal before it's published
  • You can edit, delete, or reject any signal
  • All signals show transparent evidence (the actual conversation that generated them)
  • Recruiters and employers interpret signals with their own judgment

4. AI Models and Third-Party Services

ValueSignal integrates with third-party AI services to analyze conversations:

  • ChatGPT (OpenAI) — for conversation analysis and pattern recognition
  • Claude (Anthropic) — for reasoning and capability assessment
  • Gemini (Google) — for multi-modal analysis
  • Cursor — for code and technical conversation analysis

Your use of these services through ValueSignal is subject to their respective terms and conditions. We are not responsible for the practices or policies of these third-party providers.

5. Bias and Fairness

5.1 Our Commitment

We strive for neutrality and fairness in our AI analysis. However, AI systems can reflect biases inherent in:

  • Training data used by third-party AI models
  • Algorithmic design and scoring methodologies
  • Language, cultural, or domain-specific assumptions

5.2 What We Do

  • Regularly review and test our scoring algorithms for potential bias
  • Provide transparent evidence so you can validate signals yourself
  • Allow you to report concerns about potential bias
  • Continuously improve our models based on feedback

5.3 Your Responsibility

You are responsible for reviewing and validating your skill signals. If you notice potential bias or inaccuracies, please contact us at privacy@valuesignal.ai. We take these reports seriously and will investigate.

6. Data Usage and Model Training

6.1 We Do Not Use Your Conversations for Model Training

We do not use your private conversations to train ValueSignal models unless you give explicit consent (see Section 6.3). Your conversation data is used solely to generate your personal skill graph, and we do not feed your conversations into training datasets for our internal models.

6.2 Aggregated Data

Aggregated, anonymized data (with all personally identifiable information removed) may be used to improve system performance, accuracy, and reliability. This aggregated data cannot be traced back to individual users or conversations.

6.3 Opt-In for Model Improvement

If you wish to allow your anonymized data to be used for model improvement, you may opt in through your account settings. You may withdraw this consent at any time. Opting in is entirely optional and does not affect your ability to use the Service.

7. Transparency and Control

7.1 You Control Your Data

  • You choose what conversations to capture
  • You review every signal before it's published
  • You can edit, delete, or reject any signal at any time
  • You control what's visible on your public profile
  • You can delete your account and all associated data

7.2 Transparent Evidence

Every signal shows the evidence behind it — the actual prompts, responses, and conversation context that generated the signal. This transparency allows you (and others) to understand and validate how signals are created.

8. Questions and Feedback

If you have questions about how ValueSignal uses AI, generates signals, or processes your data, please contact us at:

Email: privacy@valuesignal.ai
Website: valuesignal.ai

We welcome feedback on how we can improve transparency, fairness, and user control.