Job Description
You will focus on identifying and mitigating risks associated with AI-generated content, ensuring adherence to safety and ethical standards. Your deep understanding of LLM APIs, combined with your technical expertise in Python, will allow you to develop and execute effective testing strategies.
Client Details
An entity supporting the technology strategy and digital transformation initiatives of a major financial firm.
Description
Content Safety Testing: Design, develop, and execute comprehensive test plans to ensure the security and reliability of content generated by large language models.LLM Integration: Work with LLM APIs (e.g., OpenAI's API) to integrate AI models into automated testing workflows, ensuring consistent and thorough content evaluation.Prompt Engineering: Leverage prompt engineering skills to enhance testing strategies and optimize LLM behaviour for safe and reliable output.Defect Identification & Resolution: Identify and document content safety issues or model behaviour concerns, and collaborate with AI developers to resolve them.Automation & Scripting: Develop automated testing scripts using Python to streamline LLM testing processes and ensure scalable and efficient content safety evaluations.
Profile
Master's or Bachelor's degree in Computer Science, Engineering, or a related fieldProficient in Python programming and comfortable working with LLM APIs (e.g., OpenAI) Experienced in testing the safety, security, and reliability of AI-generated contentKnowledgeable in prompt engineering techniques and how they can enhance content safetyPassionate about AI technologies and committed to ensuring the ethical and secure use of AI-generated contentAt least 1 year of hands-on experience with LLM APIs
Job Offer
Opportunity to work with cutting edge technologies and support the development of ethical and secure AI use casesExcellent overall compensation package and benefits