AI Safety Researcher
The world’s most popular audio streaming subscription service is looking for an AI Safety Researcher to join the band in a consultant assignment. The client transformed music listening forever when it launched in 2008.
Period: ASAP to 2026-07-25 (full-time), with a possibility of extension.
About the role
The Personalization mission makes deciding what to play next easier and more enjoyable for every listener. From Blend to Discover Weekly, the team built some of the client's most-loved features by understanding the world of music and podcasts better than anyone else. Join the team and you’ll keep millions of users listening by making great recommendations to each and every one of them.
We are looking for a researcher to further strengthen our client's work on AI safety. You will work with a cross-functional team of highly skilled researchers, engineers, and domain experts to make sure our features are safe and trustworthy. You have a strong technical background and are able to work hands-on with complex systems and data.
What you'll do
Work with a cross-functional team including Research, Trust & Safety, and Engineering.
Adversarial testing: Stress-test systems, e.g. via red-teaming campaigns, to identify material gaps and produce training data.
Work hands-on with querying and managing data, automated red-teaming frameworks, LLM-as-a-judge, and more.
Benchmark against similar services.
System alignment: Work with the teams to better align systems with evolving safety policies, focusing on robust and scalable processes.
Prompt and context engineering; preference tuning; automatic prompt optimisation.
Produce high-quality test and training data.
Full-time work during the contract is preferred, but part-time may also be possible.
Who you are
Essential safety experience: Proven experience contributing to safety-related projects or research (e.g., adversarial testing, system alignment).
Technical stack: Strong proficiency in Python, Java, and SQL.
AI expertise: Hands-on experience with LLMs and prompt/context engineering.
Academic background: Preferably pursuing or holding an MSc or PhD in an AI/ML-related field, with a focus on safety or agentic systems.
Plus: Experience working with cross-language models.
Core expertise: Safety research and advanced model alignment techniques.
Responsibilities: Lead adversarial testing/red-teaming campaigns to identify material gaps, focusing on robust and scalable system alignment (e.g., preference tuning, automatic prompt optimisation).
We are Market Partner
Market Partner is proud to be an equal opportunity employer. You are welcome in our community regardless of who you are, no matter where you come from or what you look like. We apply ongoing selection and may fill the position as soon as we find the right candidate.
- Department
- Audio streaming
- Locations
- Stockholm
- Remote status
- Hybrid
About Market Partner
For over 10 years, Market Partner has developed companies' businesses by offering tailored customer solutions in Project Management, Business Development, Recruitment, and Training within IT & Telecom.