Beyond Disclosure: A Response to ASM’s AI Guidelines and a Call for Methodological Standards
Author: David R. Graham, Graham Scientific, LLC
Introduction: The Need for More Than Acknowledgment
Artificial Intelligence (AI) is rapidly becoming part of the scientific research process — assisting with data analysis, hypothesis generation, and even scientific writing. In response to this emerging reality, the American Society for Microbiology (ASM) has offered important guidelines for AI use in scientific manuscripts, emphasizing the need to maintain human authorship and accountability.
These guidelines are an essential first step toward ensuring scientific integrity. However, disclosure alone is not enough. As AI takes on increasingly sophisticated and relational roles in research, the scientific community must develop methodological standards for how AI usage is tracked and reported — not only to ensure transparency but to safeguard the trust and reproducibility that science relies on.
Where ASM Gets It Right — and What’s Missing
ASM rightly recognizes that AI cannot be an author, as it lacks independent reasoning, accountability, and ethical responsibility. The guidelines also acknowledge that AI can be used to improve language, accessibility, and quality — but not as an originator of scientific content without human input.
However, ASM's guidelines do not address how AI use should be methodologically reported — leaving authors and reviewers without a clear framework to evaluate when, how, and to what extent AI shaped the research and writing process.
Why Methodological Tracking of AI Use Matters
In scientific research, methods matter. We rigorously document how experiments are conducted, how data is analyzed, and how results are interpreted — because reproducibility and accountability depend on transparent processes.
AI is now part of that process. Whether generating hypotheses, interpreting data patterns, or helping to draft manuscripts, AI directly shapes scientific outcomes. If we fail to track and report AI's involvement methodologically:
- Reviewers and readers cannot assess how AI influenced the work.
- Potential biases introduced through AI tools remain invisible.
- Authors may unintentionally (or intentionally) misrepresent the degree of human intellectual contribution.
Thus, AI use is not merely a disclosure issue — it is a methodological issue, and it should be treated as such.
A Proposed Methodological Framework for Reporting AI Use
1. AI Tool Identification
- Tool name and version (e.g., ChatGPT-4, Claude, Gemini).
- Provider (e.g., OpenAI, Anthropic, Google).
- Access method (API, chatbot interface, embedded software).
2. Purpose of AI Use
- Hypothesis generation.
- Data analysis or exploration.
- Literature review or synthesis.
- Manuscript drafting or outlining.
- Language refinement or accessibility enhancement.
3. Nature of AI Interaction
- Single-prompt response.
- Iterative dialogue (co-development of ideas).
- Autonomous content suggestion (e.g., auto-completion).
- Level of human supervision throughout the interaction.
4. Prompts and Input Data
- General description of prompts or input data.
5. Validation and Human Responsibility
- Steps to validate AI outputs.
- Human accountability for final content.
6. Bias and Ethical Considerations
- Reflection on potential AI biases and mitigation steps.
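To make such reports consistent and machine-checkable across journals, the six categories above could be captured in a structured record rather than free prose. The sketch below is one illustrative way to do this; the class and field names are my own invention, not part of ASM's guidelines or any existing standard.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical machine-readable form of the six-part framework.
# All names here are illustrative assumptions, not an established schema.
@dataclass
class AIUsageReport:
    # 1. AI Tool Identification
    tool_name: str
    tool_version: str
    provider: str
    access_method: str                 # e.g. "API", "chatbot interface"
    # 2. Purpose of AI Use
    purposes: list = field(default_factory=list)
    # 3. Nature of AI Interaction
    interaction: str = ""              # e.g. "iterative dialogue"
    supervision: str = ""              # level of human supervision
    # 4. Prompts and Input Data
    prompt_summary: str = ""
    # 5. Validation and Human Responsibility
    validation: str = ""
    # 6. Bias and Ethical Considerations
    bias_notes: str = ""

    def to_json(self) -> str:
        """Serialize the report for submission alongside a manuscript."""
        return json.dumps(asdict(self), indent=2)

report = AIUsageReport(
    tool_name="ChatGPT-4",
    tool_version="4.0",
    provider="OpenAI",
    access_method="API",
    purposes=["hypothesis refinement", "discussion drafting"],
    interaction="iterative dialogue",
    supervision="direct human supervision",
    prompt_summary="summary data on metabolic biomarkers",
    validation="outputs reviewed, revised, and validated by the authors",
    bias_notes="independent verification of sources and findings",
)
print(report.to_json())
```

A record like this could be exported as JSON and attached to a submission, letting reviewers filter or audit AI involvement the same way they audit other methods.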
Example of AI Usage Reporting (Template):
AI Tool Use in Research Process:
This research used ChatGPT-4 (OpenAI, Version 4.0) for refining hypothesis statements and drafting preliminary sections of the discussion. The AI was accessed via a secure API interface under direct human supervision. Prompts included summary data on metabolic biomarkers and aging-related pathways. AI-generated outputs were reviewed, revised, and validated by the authors before inclusion. The authors maintain full responsibility for all interpretations and conclusions. Potential AI biases were addressed through independent verification of sources and findings.
A Call to Journals, Researchers, and AI Developers
I urge ASM and other publishers to move beyond simple disclosure policies and develop robust AI usage reporting frameworks that can be applied consistently. Researchers, for their part, can begin reporting AI use in this structured way voluntarily, and AI developers can support the effort by exposing the version information and interaction records that make such reporting practical.
"In a future where AI is part of how we think and create, science must be honest about how that thinking happens."