Post date: 2018/10/28

Guide for Authors: Responsible Use of Generative AI

Generative artificial intelligence (AI) tools are rapidly becoming an integral part of research and scientific writing. These tools can support authors by enhancing creativity, simplifying technical tasks, and assisting in the development of clear, well-structured manuscripts. However, the use of AI must always be approached with responsibility to ensure that manuscripts remain original, ethically sound, and in line with professional publishing standards.
This guide has been developed for contributors to Studies in Medical Sciences (SMS) to help them make informed decisions when using AI in the preparation of their submissions. It outlines key principles, expectations, and good practices for the ethical and transparent integration of AI into the writing and research process.
By following these guidelines, authors will be able to:
. Use AI tools responsibly and disclose their use where appropriate
. Preserve their own scholarly voice and expertise
. Safeguard intellectual property rights
. Maintain ethical standards and academic integrity
. Align their work with the publishing policies of SMS
These principles apply to all manuscripts submitted to SMS, including original research, reviews, case reports, and other article types. Authors are encouraged to familiarize themselves with this guidance before incorporating AI tools into their work.

Author Guidance on the Use of Generative AI

At Studies in Medical Sciences (SMS), we recognize that artificial intelligence (AI) tools and technologies (“AI Technology”) are becoming increasingly important in scientific writing and research. When used thoughtfully and responsibly, AI can support authors in improving clarity, enhancing efficiency, and assisting with certain technical tasks, while still maintaining high scientific and editorial standards. However, the use of AI does not remove the author’s responsibility for the integrity, originality, and accuracy of their work. The following guidelines outline SMS’s expectations for authors who use AI during manuscript preparation.

1. Review of Terms and Conditions

Before using any AI tool, authors must review the terms of service, privacy policies, and license agreements carefully. Authors should ensure that:

. The AI tool does not claim ownership over the submitted work.
. There are no restrictions that could interfere with the author’s or SMS’s ability to publish or distribute the manuscript.
. The tool does not retain or reuse confidential data without consent.
Because terms of service may change over time, authors are encouraged to review them periodically to remain compliant.

2. Human Oversight and Responsibility

AI tools should be used as assistants, not substitutes, in the writing process. Authors must:
. Take full responsibility for all claims, data interpretations, citations, and conclusions.
. Carefully review and edit any AI-generated content to ensure it reflects the author’s expertise, voice, and scholarly intent.
. Confirm that the final version of the manuscript adheres to SMS’s ethical and editorial standards.

3. Disclosure of AI Use

Transparency is essential to maintain reader trust and research integrity. Authors must:
. Keep records of all AI tools used (including name, version, and purpose).
. Document how AI contributed to manuscript preparation (e.g., language editing, summarization).
. Disclose the use of AI tools at the time of submission in the cover letter or acknowledgments section. SMS may request supporting documentation to verify responsible use; a minimal sketch of such a record follows this list.
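For authors who want a simple way to keep such records, the minimal Python sketch below writes a log of AI use to a JSON file. The field names, tool name, and file name are illustrative assumptions rather than an SMS requirement; any format that captures the tool, version, purpose, and extent of human review will serve.

    import json
    from datetime import date

    # Illustrative record of AI use during manuscript preparation.
    # Field names and file format are examples, not an SMS requirement.
    ai_use_log = [
        {
            "tool": "ExampleLLM",  # hypothetical tool name
            "version": "2024-06",
            "purpose": "language editing of the Discussion section",
            "date": str(date.today()),
            "human_review": "all suggested edits reviewed and revised by the authors",
        },
    ]

    # Keep the log with the manuscript files so it can be summarized in the
    # cover letter or acknowledgments at submission.
    with open("ai_use_log.json", "w", encoding="utf-8") as f:
        json.dump(ai_use_log, f, indent=2)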

4. Protection of Rights and Data

Authors must avoid using any AI system that could compromise intellectual property rights. This includes:
. Ensuring the AI provider does not gain ownership or training rights over the content beyond what is necessary to provide the service.
. Avoiding tools that could expose sensitive, confidential, or unpublished research data without adequate privacy safeguards.
. Reviewing AI provider documentation for clauses related to “data ownership,” “reuse,” or “opt-out” to prevent unwanted transfer of rights.

5. Ethical and Responsible Practice

Authors must use AI tools in compliance with applicable laws, privacy regulations, and research ethics principles. They should:
. Avoid inputting personal or identifiable patient data into systems without proper anonymization.
. Check AI-generated text for factual accuracy and neutrality.
. Remain vigilant about potential biases or stereotypes in AI output and correct them to maintain objectivity.
. Avoid using AI to mimic the unique style or intellectual contributions of other researchers without permission.

6. Compliance with Journal Agreements

Authors remain bound by SMS’s publishing policies and submission agreements. They are responsible for ensuring that:
. Their work is original and has not been previously published.
. They hold the necessary rights to all material included in the manuscript.
. They comply with all warranties and declarations signed during the submission process.
SMS values authors’ creativity, expertise, and intellectual contributions. We view AI tools as aids that support—not replace—the essential role of researchers and scholars. These guidelines complement SMS’s editorial policies and will be updated regularly as AI technologies evolve.

Maintaining Your Scholarly Voice When Using AI

Your scholarly voice is central to how readers engage with and interpret your work. It reflects the unique way you present evidence, construct arguments, and contribute to advancing knowledge in medical sciences. This voice is what makes your research impactful and credible.
When incorporating AI tools into your writing process, they should serve to support and refine your voice, not replace it. Authors are encouraged to:
. Begin with a clear outline of their manuscript, including research questions, methods, results, and key arguments.
. Use AI tools selectively and with a clear purpose, such as summarizing background literature, suggesting alternative wordings, or improving readability.
. Review and edit all AI-generated text to ensure that it represents their expertise, interpretation, and academic style.
. Critically evaluate AI outputs to avoid inaccuracies, biases, or unintended changes in meaning.
Practical Ways to Use AI While Preserving Your Voice
. Analyzing patterns or themes: Use AI to help identify trends in literature or data that may support your argument.
. Clarifying complex concepts: Generate suggestions for explaining technical ideas in a more accessible way, while ensuring accuracy.
. Tailoring examples: Adjust case studies or examples to match the intended audience (clinicians, researchers, policymakers).
. Refining style and readability: Employ AI for language polishing, grammar checking, or improving manuscript flow—always with final human oversight.
By approaching AI as a complementary tool, you stay in control of your work and ensure that the final manuscript is both scientifically sound and authentically yours.

Getting Started with AI Tools in Scientific Writing

For authors new to AI, the best approach is to begin by mapping your writing workflow from initial outline to final manuscript polishing. Breaking your process into discrete steps can help identify areas where AI can provide meaningful support. Start with smaller, low-risk tasks or aspects you typically find time-consuming. Think of AI tools as junior assistants rather than expert collaborators: provide clear prompts and guide the tool carefully to get useful outputs.
Key Areas to Explore AI Assistance
1. Research and Analysis
. Summarize relevant literature and highlight key themes.
. Analyze trends across multiple sources or datasets.
. Note: AI-generated references should always be independently verified, as citations may be incomplete or inaccurate.
2. Content Development
. Simplify complex scientific concepts for clarity and readability.
. Generate illustrative examples suitable for different audiences, such as clinicians, researchers, or policymakers.
. Develop structured summaries of sections or discussion points.
. Assist in formulating discussion questions or hypotheses for deeper engagement.
3. Review and Editing
. Improve clarity, conciseness, and flow of text.
. Suggest alternative wording, identify redundancies, and enhance transitions between sections.
. Ensure consistency of terminology, particularly for technical or specialized writing.
. For non-native English authors, AI can support natural phrasing and alignment with professional academic tone.
Tips for Effective AI Integration
. Spend 10–12 hours experimenting with AI tools before fully incorporating them into your workflow.
. Test different prompting strategies to optimize outputs for your specific needs.
. Always review AI-generated content critically, ensuring it reflects your expertise, voice, and the scientific rigor expected by SMS.
By taking a measured and systematic approach, authors can gain confidence and competence in using AI while maintaining full control over content quality and scholarly integrity.

Creating Effective Prompts to Maximize AI Assistance

The quality of AI-generated outputs depends largely on the prompts you provide. Well-crafted prompts allow AI tools to produce more relevant, accurate, and actionable responses. Clear instructions, defined goals, and contextual information are essential for effective AI use in scientific writing.
Best Practices for Prompting AI
1. Define the Role and Task Clearly
Specify the perspective or expertise you want the AI to adopt (e.g., clinical researcher, systematic reviewer, medical educator).
2. Be Specific and Detailed
Provide precise instructions about the content, format, length, and scope of the response.
3. Break Down Complex Tasks
Divide large or multifaceted tasks into smaller, manageable steps to ensure accurate outputs.
4. Provide Context
Include information about your audience, purpose, and desired level of technical detail.
5. Include Examples
Provide sample content or illustrative scenarios to guide the AI.
6. Refine Iteratively
Adjust prompts based on AI outputs, testing different phrasings or levels of detail to improve results.

Generic prompt: Check this chapter and tell me what is wrong with it.
Specific prompt: Act as an experienced developmental editor in medical education. Review the introduction of this manuscript on clinical case-based learning, focusing on clarity, logical flow, and suitability for medical students. Assess whether key concepts are introduced in a coherent sequence, if terminology is appropriate, and whether examples support comprehension. Provide feedback in two sections: (1) high-level strengths and areas for improvement, and (2) detailed analysis with specific examples of issues.

Generic prompt: Write a case study about leadership.
Specific prompt: Act as an expert in healthcare management. Write a 300-word case study for medical students illustrating transformational leadership in a hospital department during a major workflow change. Highlight both effective and ineffective leadership approaches, including decision-making under pressure. Structure the case study as follows: Background – Introduce the department and the leadership challenge; Main Scenario – Describe the problem, leadership responses, and outcomes; Discussion Questions – Provide 2–3 questions prompting students to analyze leadership decisions and suggest alternatives. Avoid real names or identifiable patient information. Keep the tone professional and engaging for a medical education audience.
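The same role, task, context, and example structure can be reused programmatically when authors apply it across many drafts. The minimal Python sketch below assembles a prompt from those elements; the function name and parameters are illustrative assumptions and are not tied to any particular AI tool or API.

    def build_prompt(role, task, context, examples=None):
        """Assemble a structured prompt from the elements described above."""
        parts = [
            f"Act as {role}.",
            f"Task: {task}",
            f"Context: {context}",
        ]
        if examples:
            parts.append("Examples of the kind of feedback or content wanted:")
            parts.extend(f"- {example}" for example in examples)
        return "\n".join(parts)

    prompt = build_prompt(
        role="an experienced developmental editor in medical education",
        task=("Review the introduction of this manuscript on clinical case-based "
              "learning for clarity, logical flow, and suitability for medical students."),
        context=("The audience is medical students; provide feedback in two sections: "
                 "high-level strengths and areas for improvement, then a detailed analysis."),
        examples=["Flag any terminology that is used before it is defined."],
    )
    print(prompt)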

By following these guidelines, authors can optimize AI outputs while maintaining control over content quality, accuracy, and scholarly integrity in submissions to SMS.

Types of AI Tools for Authors

A variety of AI tools are available to support authors throughout the writing and research process. For those new to AI, general-purpose large language models (LLMs) are an excellent starting point.
1. General-Purpose Large Language Models (LLMs)
LLMs are AI systems trained on vast datasets, enabling interactive dialogue for idea generation, summarization, and content refinement. In scientific writing, LLMs can assist with:
. Summarizing research articles and literature
. Generating manuscript outlines or alternative phrasing
. Structured problem-solving, pattern recognition, and information synthesis
. Supporting technical tasks such as coding, data analysis, or presentation preparation
These models are versatile but require careful oversight to ensure accuracy and alignment with your scholarly voice.
2. Task-Specific AI Tools
Task-specific AI tools are designed for particular functions, such as:
. Citation management and formatting
. Grammar and style checking
. Plagiarism detection
Because they are trained on specialized datasets, these tools often outperform general LLMs for their designated tasks. Integration with software like Microsoft Word or LaTeX streamlines workflows, making them particularly useful for academic and technical writing.
3. Analysis and Reasoning Tools
AI reasoning tools provide step-by-step transparency in their analytical processes. They are useful for:
. Evaluating argument structures and logic
. Checking terminology consistency
. Mapping relationships between concepts
. Identifying gaps in topic coverage
These tools help authors ensure coherence and rigor in their manuscripts.
Comparing AI Tools to Find the Best Fit
AI tools differ in accuracy, capabilities, and usability. A structured comparison helps determine which tool best suits your workflow. When evaluating multiple AI systems, test the same prompt across tools and compare the outputs. Consider the following factors:

. Response Quality / Accuracy: Relevance, clarity, and correctness of content. For LLMs: does the output make logical sense and stay on topic? For task-specific tools: does a grammar tool detect nuanced errors? Does a citation tool format sources correctly?
. Editing Strengths: Effectiveness in improving clarity. For LLMs: does it improve sentence structure or only make surface-level edits? For task-specific tools: does it enhance clarity without introducing new errors?
. Factual Reliability: Accuracy of information. For LLMs: does it generate factual errors or hallucinations? For task-specific tools: does it correctly apply rules, like citation formatting or plagiarism detection?
. Adaptability to Style: Ability to match tone and complexity. For LLMs: can it adjust to different writing styles when prompted? For task-specific tools: can it be customized for editing or formatting preferences?
. Customization Options: Ability to refine outputs with context. For LLMs: can you adjust outputs for audience, style, or structure? For task-specific tools: can you fine-tune citation styles, editing rules, or sensitivity settings?
. Ease of Use / Integration: User-friendliness. For LLMs: is extensive prompting required to get reliable results? For task-specific tools: can it integrate with Word, Google Docs, or reference managers?
. Data Privacy: Handling of user content. For LLMs: does it store or train on your material, and are privacy settings configurable? For task-specific tools: are privacy and security settings adjustable?
. Cost vs. Value: Free versus paid features. For LLMs: are premium features worth the cost? For task-specific tools: does the paid version offer significantly better functionality?
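One lightweight way to carry out such a comparison is to score each candidate tool against these factors after testing the same prompt in each. The short Python sketch below tabulates hypothetical scores; the tool names and numbers are placeholders, not recommendations.

    # Hypothetical 1-5 scores assigned after running the same prompt in each tool.
    factors = [
        "Response quality", "Editing strengths", "Factual reliability",
        "Adaptability to style", "Customization", "Ease of use",
        "Data privacy", "Cost vs. value",
    ]
    scores = {
        "Tool A (general-purpose LLM)": [4, 4, 3, 4, 3, 5, 2, 4],
        "Tool B (grammar and style tool)": [3, 5, 4, 3, 4, 4, 4, 3],
    }

    for tool, values in scores.items():
        print(tool)
        for factor, value in zip(factors, values):
            print(f"  {factor:<22} {value}")
        print(f"  {'Average':<22} {sum(values) / len(values):.2f}")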

For additional guidance on privacy, intellectual property protection, and rights management, authors should review the sections below on Privacy Features to Consider When Using AI Tools and Evaluating AI Tools for Intellectual Property Protection.

Privacy Features to Consider When Using AI Tools

AI tools often provide settings designed to protect the privacy and confidentiality of your content, but the availability and effectiveness of these features can differ between platforms. Authors should carefully evaluate privacy options before integrating AI into their workflow.
Key Privacy Features to Look For
. Data Collection Controls: Options to prevent your content from being used to train the AI model.
. History Tracking: Settings to disable or limit storage of interactions.
. Privacy Modes: Features that prevent the AI from retaining input or outputs.
. Data Deletion: Ability to erase interaction history or uploaded documents.
Authors should review the tool’s documentation to understand data handling, ownership, usage rights, and deletion policies. Check for explicit statements on whether your content is stored or reused by the AI provider and whether an opt-out option exists. Some privacy features may only be available in paid or enterprise versions, so verify which options are accessible with your chosen plan.
Enhanced Privacy Options
Institutions or organizations may offer access to enterprise or private versions of AI tools that run on local servers or secure environments. These versions often provide stronger privacy protections and clearer terms of service, although policies can vary by provider.
By prioritizing AI tools with robust privacy controls and transparent content protection policies, authors can safeguard sensitive data while responsibly leveraging AI to support their research and writing.

Safe and Responsible Use of Generative AI Tools

Authors should carefully consider privacy, confidentiality, and intellectual property before entering unpublished or sensitive content into AI tools. The security of your material depends on the tool’s terms of service and its data-handling policies, which can vary widely and evolve over time.
Sensitive Content to Protect
. Data protected by regulations, such as HIPAA or FERPA
. Private institutional or personal information
. Materials subject to confidentiality agreements or embargoes
. Novel research findings, unique methodologies, or unpublished results
Best Practices for Handling Sensitive Content
. Anonymize or remove private details before using publicly available AI tools (a minimal redaction sketch follows this list).
. Recognize that free or public AI platforms often have limited privacy controls.
. Consider professional, paid, or enterprise versions of AI tools, which may allow opt-out from data training, data deletion requests, or more secure handling of sensitive material.
. Prioritize tools that explicitly state they do not retain or use user content for model training.
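As one concrete precaution, the minimal Python sketch below redacts a few obvious identifiers from a plain-text note before it is pasted into a public tool. The regular expressions are illustrative assumptions only; proper de-identification of data covered by regulations such as HIPAA requires validated tools and institutional procedures.

    import re

    # Illustrative redaction patterns; real de-identification of regulated data
    # requires validated tools and institutional review, not ad hoc scripts.
    PATTERNS = {
        "DATE": re.compile(r"\b\d{4}[-/]\d{2}[-/]\d{2}\b"),
        "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def redact(text):
        """Replace matches of each pattern with a labeled placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    note = "Patient seen on 2024-03-15, MRN 483920, contact j.doe@example.org."
    print(redact(note))  # Patient seen on [DATE], [MRN], contact [EMAIL].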
By exercising caution and selecting AI tools with robust privacy safeguards, authors can responsibly leverage AI assistance while protecting sensitive data, maintaining compliance with regulations, and safeguarding their intellectual property.

Verifying Accuracy and Avoiding Misinformation in AI-Generated Content

AI tools can produce content that appears authoritative but may contain factual inaccuracies, fabricated references, or outdated information. In scientific writing, these errors can range from subtle misrepresentations of data to entirely invented citations that seem legitimate. Because AI models often synthesize information from multiple sources, authors must be vigilant in ensuring the accuracy of all content included in their manuscripts.
Best Practices for Verifying AI-Generated Content
1. Identify Information Requiring Verification
. Statistical data, numerical results, or trends
. Technical terminology and methodology descriptions
. References and citations
. Any factual statements or conclusions
2. Cross-Check Against Authoritative Sources
. Consult peer-reviewed literature, textbooks, official databases, or other reliable sources in your field.
. Confirm that data, interpretations, and references are correct and appropriately cited (see the DOI-check sketch after this list).
3. Seek Peer or Colleague Review
. Have experts in relevant areas review technical content for accuracy and validity.
4. Maintain Final Responsibility
. AI can assist with drafting, organization, and language refinement, but authors are fully responsible for the accuracy, credibility, and integrity of the final manuscript.
. When in doubt, rely on your own expertise and verified sources rather than AI outputs.
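As one concrete check, a DOI suggested by an AI tool can be looked up against the public Crossref REST API before the citation is accepted. The sketch below assumes the third-party requests library is installed; a 404 response often signals a fabricated reference, and a successful lookup still requires comparing the returned title with the cited work.

    import requests  # third-party HTTP library; assumed to be installed

    def check_doi(doi):
        """Look up a DOI in the public Crossref API and print the registered title."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code == 200:
            titles = resp.json()["message"].get("title") or ["<no title on record>"]
            print(f"{doi}: found -> {titles[0]}")
        else:
            # A 404 usually means the DOI is not registered, a common sign of a
            # hallucinated citation; verify the reference manually.
            print(f"{doi}: not found (HTTP {resp.status_code})")

    # Placeholder DOI for illustration; replace with DOIs from the reference list.
    check_doi("10.1000/example-doi")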
By systematically verifying AI-generated material, authors can avoid misinformation, maintain scientific rigor, and uphold the high editorial standards expected by SMS.

Recognizing and Addressing Bias in AI-Generated Content

AI models can reflect biases present in the data on which they were trained. These biases may manifest differently depending on your field, the task at hand, or the way the AI tool is used. Biases can be subtle, such as word choice, framing of examples, or methodological assumptions, or more obvious, appearing as direct statements, recommendations, or oversimplifications. Recognizing that all AI tools have some inherent bias is the first step toward producing accurate and inclusive scientific content.
Risks of Bias in AI-Generated Content
. Reinforcing stereotypes or misconceptions
. Underrepresenting diverse populations or perspectives
. Excluding certain groups or contexts unintentionally
Strategies to Identify and Mitigate Bias
1. Examine Your Content Critically
. Review examples, scenarios, and case studies for assumptions about access to resources or technology.
. Check whether populations, regions, or groups are represented appropriately.
. Assess whether methodologies reflect diverse perspectives and contexts.
2. Seek Peer Feedback
. Engage colleagues or experts to review assumptions, terminology, and inclusivity in your content.
3. Revise Thoughtfully
. Adjust language, examples, and case studies to reflect broader perspectives.
. Incorporate representative and varied viewpoints from your field to improve accuracy and inclusivity.
By proactively evaluating AI-generated content for bias and making careful revisions, authors can ensure their manuscripts are scientifically rigorous, ethically sound, and reflective of diverse perspectives, consistent with the standards of SMS.

Evaluating AI Tools for Intellectual Property Protection

When using AI tools, it is essential to protect your intellectual property (IP) and ensure that the rights to your work remain fully under your control. Some AI providers include clauses in their terms of service that grant them broad rights to use, reproduce, or train their models on content processed through their platforms. Without careful review, authors may unintentionally grant these rights, potentially limiting their ability—or that of SMS—to use, publish, or share the material.
Key Risks in AI Terms and Conditions
Be cautious of terms that:
. Grant perpetual, royalty-free, or transferable rights to your content
. Allow the provider to use content for training, redistribution, or any unspecified purpose
. Classify your submissions as open, free to use, or under a Creative Commons or other permissive license
. Impose limitations on your ability to use the output
. Contain unclear data retention or deletion policies
Best Practices for IP Protection
Choose AI tools that:
. Explicitly do not claim rights to your content or outputs
. Place no limitations on your use of content generated or uploaded
. Offer clear data deletion policies and transparent retention practices
. Limit the provider’s use of user content strictly to service delivery
. Allow you to control or opt out of data training
Additional Recommendations
. Review all terms of service thoroughly before uploading any unpublished or sensitive material.
. Consider enterprise or professional versions of AI tools, which often provide clearer terms, stronger IP protection, and more robust privacy safeguards.
. Maintain documentation of the tools used and their terms to support transparency and compliance with journal submission requirements.
By selecting AI tools with strong content protection policies and carefully evaluating their terms, authors can safeguard their intellectual property while responsibly integrating AI into their research and writing processes for SMS.

Copyright Considerations When Using Generative AI Tools

Understanding copyright is essential for authors using AI-assisted writing tools. Copyright protects creative works—such as manuscripts, figures, or analyses—by granting the creator exclusive rights to use, reproduce, distribute, and authorize others to use their work. Authors can also transfer copyright to publishers, who then control how the material is shared, used, or monetized.
AI and Copyright
Copyright law generally requires human authorship. As a result, content generated solely by AI without substantial human input may not qualify for copyright protection. Because copyright frameworks vary internationally and are evolving in response to AI, authors should stay informed about regulations relevant to their region and intended publication venues.
Best Practices to Protect Your Copyright
To ensure your work remains protected and reflects your original contributions:
1. Substantially Modify AI Outputs
. Go beyond basic editing; revise content to incorporate your unique expertise, insights, and interpretation.
2. Adapt and Transform AI Content
. Customize outputs to align with your voice, style, and analytical approach.
3. Integrate Thoughtfully
. Use AI-generated material as part of a larger human-authored work, making deliberate choices about structure, sequence, and emphasis.
4. Document Your AI Use
. Record the AI tool used, the purpose, and the extent of human contributions. Transparency supports ethical reporting and compliance with journal requirements.
Additional Considerations
. Review AI tool terms carefully, as some impose restrictions on reuse, modification, or sublicensing of AI-generated outputs.
. Protect privacy and moral rights, including attribution rights, when using AI-assisted content.
. For international collaborations or submissions across different jurisdictions, consult with your journal contact to ensure AI use aligns with copyright and publication policies.
By applying these practices, authors can responsibly integrate AI tools while safeguarding copyright, ensuring their manuscripts submitted to SMS reflect genuine human authorship and scholarly contribution.

