Real Jobs. Real Change. See What's Next.
A living directory of real jobs that didn't exist 5 years ago. Curated for leaders, builders, and the curious.
AI Safety Scientist
Evaluates, tests, and certifies artificial intelligence products to ensure they meet rigorous safety standards and regulatory compliance requirements across industries
Key Responsibilities:
- Evaluate and assess AI product safety by analyzing performance against reliability, robustness, transparency, and bias management principles
- Test and certify AI products for intended function and safety to meet regulatory requirements and industry standards
- Train junior team members in AI safety evaluation, testing methodologies, and certification processes
- Create documentation and manage certification projects to ensure AI products comply with safety standards and regulations
- Maintain technical knowledge of AI standards (ISO/IEC JTC 1/SC 42, IEEE-SA) and regulations (EU AI Act, GDPR, US AI Executive Order)
- Collaborate with interdisciplinary teams to ensure AI products adhere to safety standards, regulations, and best practices
- Work closely with customers developing AI applications to assess safety requirements and certification needs
Skills & Tools:
- AI/ML development experience (5+ years in industrial applications)
- AI safety frameworks and evaluation methodologies
- Testing, inspection, and certification (TIC) industry knowledge
- AI standards and regulatory compliance (ISO, IEEE, EU AI Act, GDPR)
- Risk management and safety assessment tools
- AI explainability and bias detection techniques
- Project management and client communication skills
- Documentation and certification process management
- Cross-functional collaboration and team leadership abilities
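To make "bias detection techniques" concrete for the curious reader: one of the simplest checks in this family is demographic parity, the gap in positive-outcome rates between groups. The sketch below is purely illustrative, not a method prescribed by any certification body; the group labels and toy data are hypothetical.

```python
# Illustrative sketch: demographic parity difference, a basic bias-detection
# metric an AI safety evaluator might compute. Group labels and toy data
# below are hypothetical examples, not real certification inputs.

def positive_rate(predictions, groups, group):
    """Share of positive predictions (1s) among members of `group`."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected) if selected else 0.0

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))

# Toy example: a model's yes/no decisions for applicants in groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups, "A", "B")
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

In practice, evaluators pick which fairness metric applies (demographic parity, equalized odds, and others can conflict) and what gap is acceptable for the product's risk class; the metric itself is only the starting point.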
Where This Role Has Appeared:
- UL Solutions (Testing/Certification, Northbrook, IL/Remote, $125k-$167k, July 2025)
Variants & Related Titles:
- AI Compliance Engineer
- AI Testing and Validation Specialist
- AI Safety Engineer
- AI Certification Consultant
- AI Risk Assessment Specialist
Why This Role Is New:
The AI Safety Scientist role emerged in 2023-2024 as AI systems moved into safety-critical applications and regulators began imposing AI-specific safety requirements such as the EU AI Act. The role extends traditional product safety testing to AI systems, addressing challenges such as algorithmic bias, explainability, and robustness that don't arise in conventional product testing.
Trend Insight:
As AI regulation intensifies globally and AI systems become embedded in critical infrastructure, third-party AI safety certification is becoming as essential as traditional product safety testing. That shift is creating a new market for AI safety professionals.
Seen this role elsewhere? Submit an example or share your story.