Research Engineer, Safety Reasoning
About the Team
The Safety Systems team is responsible for the safety work that ensures our best models can be safely deployed in the real world to benefit society. It is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.
The Safety Reasoning Research team sits at the intersection of short-term pragmatic projects and long-term fundamental research, prioritizing rapid system development while maintaining technical robustness. Key focus areas include improving foundational models' ability to accurately reason about safety, values, and cultural norms; refining moderation models; driving rapid policy improvements; and addressing critical societal challenges such as election misinformation. Heading into 2024, the team is looking for people adept at novel abuse discovery and policy iteration, in line with our high-priority goals of multimodal moderation and digital safety.
About the Role
The role involves developing innovative machine learning techniques that push the limits of our foundation models' safety understanding and capabilities. You will define and develop realistic, impactful safety tasks that, once improved upon, can be integrated into OpenAI's safety systems or benefit other safety and alignment research initiatives. Examples of safety initiatives include moderation policy enforcement, policy development using democratic input, and safety reward modeling. You will experiment with a wide range of research techniques, including but not limited to reasoning, architecture, data, and multimodality.
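As a concrete reference point for the moderation policy enforcement work mentioned above, the sketch below shows how a moderation classifier can be queried through OpenAI's public moderation endpoint (openai Python SDK v1.x). It is an illustration of the task surface under those assumptions, not a description of the team's internal tooling.

```python
# Illustrative only: querying the public OpenAI moderation endpoint.
# Assumes the openai v1.x SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def flag_content(text: str) -> dict:
    """Return whether the text is flagged, plus per-category scores."""
    response = client.moderations.create(input=text)
    result = response.results[0]
    return {
        "flagged": result.flagged,
        "category_scores": result.category_scores.model_dump(),
    }


if __name__ == "__main__":
    print(flag_content("example text to screen"))
```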
In this role, you will:
- Conduct applied research to improve the ability of foundational models to accurately reason about questions of human values, morals, ethics, and cultural norms, and apply these improved models to practical safety challenges.
- Develop and refine AI moderation models to detect and mitigate known and emerging patterns of AI misuse and abuse.
- Work with policy researchers to adapt and iterate on our content policies to ensure effective prevention of harmful behavior.
- Contribute to research on multimodal content analysis to enhance our moderation capabilities.
- Develop and improve pipelines for automated data labeling and augmentation, model training, evaluation, and deployment, including active-learning processes and routines for refreshing calibration and validation data (see the sketch after this list).
- Design and experiment with an effective red-teaming pipeline to examine the robustness of our harm prevention systems and identify areas for future improvement.
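To make the data-pipeline bullet above concrete, here is a minimal, hypothetical sketch of an uncertainty-based active-learning step for choosing which moderation examples to send for human labeling. All names (Example, select_for_labeling, label_fn, train_fn, and the model's predict_proba interface) are illustrative assumptions, not OpenAI internals.

```python
# Hypothetical sketch of one active-learning round for moderation data.
from dataclasses import dataclass


@dataclass
class Example:
    text: str
    score: float = 0.0  # model-estimated probability of a policy violation


def select_for_labeling(model, unlabeled_pool, batch_size=100):
    """Score the pool and return the examples the model is least sure about."""
    for ex in unlabeled_pool:
        ex.score = model.predict_proba(ex.text)  # assumed classifier interface
    # Uncertainty sampling: scores closest to 0.5 are the most ambiguous.
    ranked = sorted(unlabeled_pool, key=lambda ex: abs(ex.score - 0.5))
    return ranked[:batch_size]


def active_learning_round(model, unlabeled_pool, label_fn, train_fn):
    """Select ambiguous examples, collect human labels, and retrain."""
    batch = select_for_labeling(model, unlabeled_pool)
    labeled = [(ex.text, label_fn(ex.text)) for ex in batch]
    return train_fn(model, labeled)  # returns the updated model
```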
You might thrive in this role if you:
- Are excited about OpenAI's mission of building safe, universally beneficial AGI and are aligned with OpenAI's charter.
- Possess 5+ years of research engineering experience and proficiency in Python or similar languages.
- Thrive in environments involving large-scale AI systems and multimodal datasets (a plus).
- Exhibit proficiency in AI safety topics such as RLHF, adversarial training, robustness, and fairness and bias (a strong plus).
- Show enthusiasm for AI safety and dedication to enhancing the safety of cutting-edge AI models for real-world use.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.