I want to pass on the full text of an editorial by Blau et al. in PNAS; the link points to the more complete open-access online version containing acknowledgements and references:
Revolutionary advances in AI
have brought us to a transformative moment for science. AI is
accelerating scientific discoveries and analyses. At the same time, its
tools and processes challenge core norms and values in the conduct of
science, including accountability, transparency, replicability, and
human responsibility (1–3). These difficulties are particularly apparent in recent advances with
generative AI. Future innovations with AI may mitigate some of these difficulties or raise new concerns and challenges.
With
scientific integrity and responsibility in mind, the National Academy
of Sciences, the Annenberg Public Policy Center of the University of
Pennsylvania, and the Annenberg Foundation Trust at Sunnylands recently
convened an interdisciplinary panel of experts with experience in
academia, industry, and government to explore rising challenges posed by
the use of AI in research and to chart a path forward for the
scientific community. The panel included experts in behavioral and
social sciences, ethics, biology, physics, chemistry, mathematics, and
computer science, as well as leaders in higher education, law,
governance, and science publishing and communication. Discussions were
informed by commissioned papers detailing the development and current
state of AI technologies; the potential effects of AI advances on
equality, justice, and research ethics; emerging governance issues; and
lessons that can be learned from past instances where the scientific
community addressed new technologies with significant societal
implications (4–9).
Generative
AI systems are constructed with computational procedures that learn
from large bodies of human-authored and curated text, imagery, and
analyses, including expansive collections of scientific literature. The
systems are used to perform multiple operations, such as
problem-solving, data analysis, interpretation of textual and visual
content, and the generation of text, images, and other forms of data. In
response to prompts and other directives, the systems can provide users
with coherent text, compelling imagery, and analyses, and can also generate novel syntheses and ideas that push the expected boundaries of automated content creation.
Generative AI’s power to interact with scientists
in a natural manner, to perform unprecedented types of problem-solving,
and to generate novel ideas and content poses challenges to the
long-held values and integrity of scientific endeavors. These challenges
make it more difficult for scientists, the larger research community,
and the public to 1) understand and confirm the veracity of generated
content, reviews, and analyses; 2) maintain accurate attribution of
machine- versus human-authored analyses and information; 3) ensure
transparency and disclosure of uses of AI in producing research results
or textual analyses; 4) enable the replication of studies and analyses;
and 5) identify and mitigate biases and inequities introduced by AI
algorithms and training data.
Five Principles of Human Accountability and Responsibility
To
protect the integrity of science in the age of generative AI, we call
upon the scientific community to remain steadfast in honoring the
guiding norms and values of science. We endorse recommendations from a
recent National Academies report that explores ethical issues in
computing research and promotes responsible practices through education and training (3).
We also reaffirm the findings of earlier work performed by the National
Academies on responsible automated research workflows, which called for
human review of algorithms, the need for transparency and
reproducibility, and efforts to uncover and address bias (10).
Building
upon the prior studies, we urge the scientific community to focus
sustained attention on five principles of human accountability and
responsibility for scientific efforts that employ AI:
1. Transparent disclosure and attribution
Scientists
should clearly disclose the use of generative AI in research, including
the specific tools, algorithms, and settings employed; accurately
attribute the human and AI sources of information or ideas,
distinguishing between the two and acknowledging their respective
contributions; and ensure that human expertise and prior literature are
appropriately cited, even when machines do not provide such citations in
their output.
Model creators and refiners
should provide publicly accessible details about models, including the
data used to train or refine them; carefully manage and publish
information about models and their variants so as to provide scientists
with a means of citing the use of particular models with specificity;
provide long-term archives of models to enable replication studies;
disclose when proper attribution of generated content cannot be
provided; and pursue innovations in learning, reasoning, and information
retrieval machinery aimed at providing users of those models with the
ability to attribute sources and authorship of the data employed in
AI-generated content.
2. Verification of AI-generated content and analyses
Scientists
are accountable for the accuracy of the data, imagery, and inferences
that they draw from their uses of generative models. Accountability
requires the use of appropriate methods to validate the accuracy and
reliability of inferences made by or with the assistance of AI, along
with a thorough disclosure of evidence relevant to such inferences. It
includes monitoring and testing for biases in AI algorithms and output,
with the goal of identifying and correcting biases that could skew
research outcomes or interpretations.
Model creators
should disclose limitations in the ability of systems to confirm the
veracity of any data, text, or images generated by AI. When verification
of the truthfulness of generated content is not possible, model output
should provide clear, well-calibrated assessments of confidence. Model
creators should proactively identify, report, and correct biases in AI
algorithms that could skew research outcomes or interpretations.
3. Documentation of AI-generated data
Scientists
should mark AI-generated or synthetic data, inferences, and imagery with provenance information about the role of AI in their generation, so that they are not mistaken for observations collected in the real world. Scientists should not present AI-generated content as real-world observations.
Model creators
should clearly identify, annotate, and maintain provenance about
synthetic data used in their training procedures and monitor the issues,
concerns, and behaviors arising from the reuse of computer-generated
content in training future models.
4. A focus on ethics and equity
Scientists and model creators
should take credible steps to ensure that their uses of AI produce
scientifically sound and socially beneficial results while taking
appropriate steps to mitigate the risk of harm. This includes advising
scientists and the public on the handling of tradeoffs associated with
making certain AI technologies available to the public, especially in
light of potential risks stemming from inadvertent outcomes or malicious
applications.
Scientists and model creators
should adhere to ethical guidelines for AI use, particularly in terms
of respect for clear attribution of observational versus AI-generated
sources of data, intellectual property, privacy, disclosure, and
consent, as well as the detection and mitigation of potential biases in
the construction and use of AI systems. They should also continuously
monitor other societal ramifications likely to arise as AI is further
developed and deployed and update practices and rules that promote
beneficial uses and mitigate the prospect of social harm.
Scientists, model creators, and policymakers
should promote equity in the questions and needs that AI systems are
used to address as well as equitable access to AI tools and educational
opportunities. These efforts should empower a diverse community of
scientific investigators to leverage AI systems effectively and to
address the diverse needs of communities, including the needs of groups
that are traditionally underserved or marginalized. In addition, methods
for soliciting meaningful public participation in evaluating equity and
fairness of AI technologies and uses should be studied and employed.
AI
should not be used without careful human oversight in decisional steps
of peer review processes or decisions around career advancement and
funding allocations.
5. Continuous monitoring, oversight, and public engagement
Scientists,
together with representatives from academia, industry, government, and
civil society, should continuously monitor and evaluate the impact of AI
on the scientific process, and with transparency, adapt strategies as
necessary to maintain integrity. Because AI technologies are rapidly
evolving, research communities must continue to examine and understand
the powers, deficiencies, and influences of AI; work to anticipate and
prevent harmful uses; and harness its potential to address critical
societal challenges. AI scientists must at the same time work to improve
the effectiveness of AI for the sciences, including addressing
challenges with veracity, attribution, explanation, and transparency of
training data and inference procedures. Efforts should be undertaken
within and across sectors to pursue ongoing study of the status and
dynamics of the use of AI in the sciences and pursue meaningful methods
to solicit public participation and engagement as AI is developed,
applied, and regulated. Results of this engagement and study should be
broadly disseminated.
A New Strategic Council to Guide AI in Science
We
call upon the scientific community to establish oversight structures
capable of responding to the opportunities AI will afford science and to
the unanticipated ways in which AI may undermine scientific integrity.
We propose that the National Academies of Sciences, Engineering, and Medicine establish a
Strategic Council on the Responsible Use of Artificial Intelligence in Science.
The council should coordinate with the scientific community and provide
regularly updated guidance on the appropriate uses of AI, especially
during this time of rapid change. The council should study, monitor, and
address the evolving uses of AI in science; new ethical and societal
concerns, including equity; and emerging threats to scientific norms.
The council should share its insights across disciplines and develop and
refine best practices.
More broadly, the
scientific community should adhere to existing guidelines and
regulations, while contributing to the ongoing development of public and
private AI governance. Governance efforts must include engagement with
the public about how AI is being used and should be used in the
sciences.
With the advent of generative AI,
all of us in the scientific community have a responsibility to be
proactive in safeguarding the norms and values of science. That
commitment—together with the five principles of human accountability and
responsibility for the use of AI in science and the standing up of the
council to provide ongoing guidance—will support the pursuit of
trustworthy science for the benefit of all.