Publication Norms Research Fellow
The Partnership on AI (PAI) is seeking a Research Fellow to study publication of AI research as a lever in mitigating the risks of increasingly advanced systems, as described in our work on Publication Norms for Responsible AI. The Fellow will explore the challenges faced by the AI ecosystem in adopting responsible publication practices, and create resources to help practitioners anticipate and mitigate risks. The Fellow will work with PAI staff across the partnerships, research, and policy teams and will report to the Program Lead who oversees this area. The project as currently scoped would suit a 12-month fellowship, though based on candidate preferences and abilities we will consider durations between 6 and 12 months. Start date is flexible (though earlier is preferred).
Earlier this year, NeurIPS (a major AI research conference) announced that authors must submit a statement about the ‘broader societal impacts’ of their research. Last year, research teams such as OpenAI experimented with ‘staged release’ to mitigate potential risks of disseminating increasingly capable ML systems. As the AI community attempts to navigate toward more responsible publication practices, PAI’s work in this area has revealed coordination challenges and a lack of resources to be key impediments to adoption. PAI now seeks to hire a Research Fellow to distill these issues and illuminate potential solutions.
The PAI Research Fellow will have the opportunity to refine the scope and shape their research project, but the following illustrates the kinds of activities we envision for the role:
Use qualitative interviews and focus groups with key stakeholders in the AI/ML research ecosystem to better understand the common issues and coordination challenges with respect to responsible publication practices.
Hold open discussions and host events with broader sections of the AI community to understand the variety of perspectives on publication norms.
Undertake literature reviews of other fields that may provide insight into navigating trade-offs in publication decisions for high-stakes technology, and examples of effective policies.
Publish findings in relevant venues (e.g. conference papers, workshops, blog posts).
Build resources to help combat the issues and challenges uncovered in the previous activities.
Ultimately, the PAI Research Fellow’s methodology will address the following questions:
What are the key challenges faced by our Partner community and the AI research ecosystem at large in adopting more responsible publication practices?
What resources (tools, services, frameworks, etc) might help solve these challenges, and how can PAI enable their creation?
PAI is a fast-paced organization; initiative and efficiency are critical for success in this position. Because of the varied and dynamic nature of PAI’s multistakeholder research projects, the ideal candidate would have experience working on research projects with multiple sets of stakeholders, with different interests, backgrounds, and perspectives.
Key Responsibilities (including, but not limited to):
Refine and plan a practical and robust research and engagement methodology in service of the research questions outlined above.
Execute the plan in a manner that leverages the Partnership’s voice and perspective, rather than that of any single company, lab, or organization.
Identify and engage stakeholders relevant to the project (particularly those likely to be overlooked).
Contribute to the production of events that support the project’s goals.
Analyze and synthesize findings into appropriate outputs (papers, frameworks, etc).
Work collaboratively with other PAI teams and external stakeholders.
Attend relevant events, convenings, and conferences.
Qualifications:
PhD or equivalent experience in social science research methods, or a STEM degree plus professional experience with qualitative and quantitative research methods.
Familiarity with AI and an interest in its impact on society.
Experience leading participant interviews and focus groups, and proficiency in qualitative analysis techniques such as thematic coding.
Excellent written and oral communications skills.
Comfort navigating diverse input from a wide array of contributors.
Ability to translate research insights into solutions and frameworks for practical implementation.
An interest in contributing to programmatic activities such as events and workshops.
Familiarity with, or an interest in learning about, one or more of the fields we have identified as sources of inspiration for this project (e.g. synthetic biology, biosecurity, cybersecurity, nuclear security, national security, human rights).
Preference will be given to candidates based in the Bay Area, but this is not a requirement.
Research Fellowships at PAI are allocated in three bands of seniority, and the Publication Norms Research Fellow may be appointed at any of the following levels:
Postgraduate Research Fellowships are suitable for candidates with a PhD or equivalent experience.
Research Fellowships are suitable for early- to mid-career candidates who have a PhD and a demonstrated track record of research and/or technology policy work, or who have PhD-equivalent (or greater) research, technical, or policy experience and output in non-academic settings.
Senior Research Fellowships are suitable for well-established, senior researchers who have led successful labs or research teams or have an extensive track record of research work.
PAI is proud to be an equal opportunity employer. We celebrate diversity and we are committed to creating an inclusive environment in all aspects of employment, including recruiting, hiring, promoting, training, education assistance, social and recreational programs, compensation, benefits, transfers, discipline, and all privileges and conditions of employment. Employment decisions at PAI are based on business needs, job requirements, and individual qualifications.
PAI will consider for employment qualified applicants with criminal histories, in a manner consistent with the San Francisco Fair Chance Ordinance or similar laws.
The Partnership on AI may become subject to certain governmental record keeping and reporting requirements for the administration of civil rights laws and regulations. We also track diversity in our workforce for the purpose of improving over time. In order to comply with these goals, the Partnership on AI invites employees to voluntarily self-identify their gender and race/ethnicity. Submission of this information is voluntary and refusal to provide it will not jeopardize or adversely affect employment or any consideration you may receive for employment or advancement. The information obtained will be kept confidential.
To apply, please submit:
Resume and/or CV
Cover Letter describing the ways in which you meet the preferred criteria