How This Site Works
Peptide Protocol Wiki is an AI-curated research database. Every article is generated by software that systematically searches published medical literature, verifies citations, and presents findings in a structured format.
This page explains exactly how — including the limitations. We show our work so you can judge for yourself.
Our Research Pipeline
Each peptide profile on this site is built through a five-phase automated pipeline. The system generates 21 targeted queries per peptide (3 per article type across 7 article types), searches PubMed through the Edison/PaperQA scientific API, and assembles the results into structured articles. Here is what each phase does.
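The 21-query fan-out described above can be sketched as a simple template expansion. The article-type names and query templates below are illustrative stand-ins — the pipeline's actual categories and phrasing are internal — but the structure (3 templates across 7 types) matches what the page describes.

```python
from itertools import product

# Hypothetical article types and query templates. The real pipeline's
# category names and wording are not published; only the 3 x 7 = 21
# structure is taken from the description above.
ARTICLE_TYPES = [
    "mechanism", "applications", "limitations",
    "pharmacokinetics", "dosing", "safety", "interactions",
]
TEMPLATES = [
    "{peptide} {topic}",
    "{peptide} {topic} clinical trial",
    "{peptide} {topic} review",
]

def build_queries(peptide: str) -> list:
    """Expand every template across every article type: 21 queries."""
    return [t.format(peptide=peptide, topic=a)
            for a, t in product(ARTICLE_TYPES, TEMPLATES)]
```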
Phase 1: Systematic Literature Search
The pipeline sends multiple targeted queries to PubMed through an intermediary called Edison (built on PaperQA). Rather than a single broad search, the system issues focused queries covering mechanism of action, therapeutic applications, evidence limitations, pharmacokinetics, dosing data, side effect profiles, and more. Each query is designed to surface a specific facet of the research literature.
A typical peptide profile draws from 30 to 80 published papers. All queries run concurrently with rate limiting to avoid API throttling.
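Running all queries concurrently while respecting a rate limit is a standard pattern. The sketch below uses an `asyncio` semaphore to cap in-flight requests; `search_fn` is a placeholder for the real (non-public) Edison search call.

```python
import asyncio

async def run_queries(queries, search_fn, max_concurrent=5):
    """Run every query concurrently, capped by a semaphore so the
    upstream API is not overwhelmed. Results return in query order."""
    sem = asyncio.Semaphore(max_concurrent)

    async def bounded(query):
        async with sem:
            return await search_fn(query)

    return await asyncio.gather(*(bounded(q) for q in queries))

# Demo with a stand-in for the real search call:
async def fake_search(query):
    await asyncio.sleep(0.01)          # simulate network latency
    return {"query": query, "hits": []}

results = asyncio.run(
    run_queries(["q1", "q2", "q3"], fake_search, max_concurrent=2)
)
```

Because `asyncio.gather` preserves input order, results can be mapped back to the query (and thus the article section) that produced them.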
Phase 2: Gap Analysis
After the initial search, the system analyzes response coverage by section. It maps returned content against expected topics using keyword matching and identifies areas where the literature search returned thin or no results. For each gap, it generates follow-up queries and runs a second pass to fill them.
Phase 3: Citation Verification
Every PubMed ID (PMID) and DOI returned by Edison is machine-validated against the PubMed and CrossRef databases. The system deduplicates sources, resolves author names, and flags any citations that cannot be verified. Unverifiable citations are marked as such rather than silently included.
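A first-stage version of this check can be done offline: deduplicate identifiers and flag anything that is not even a well-formed PMID or DOI. The sketch below covers only that format triage; the live lookup against PubMed and CrossRef is omitted.

```python
import re

# PMIDs are short numeric strings; DOIs start with a "10." prefix.
PMID_RE = re.compile(r"^\d{1,8}$")
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def triage_citations(citations):
    """Deduplicate (kind, identifier) pairs and split them into
    well-formed vs flagged. Format checking only -- the real pipeline
    also resolves each ID against PubMed/CrossRef."""
    seen, ok, flagged = set(), [], []
    for kind, ident in citations:
        if (kind, ident) in seen:
            continue                     # drop exact duplicates
        seen.add((kind, ident))
        pattern = PMID_RE if kind == "pmid" else DOI_RE
        (ok if pattern.match(ident) else flagged).append((kind, ident))
    return ok, flagged
```

Anything in the flagged list would be marked as unverifiable in the published article rather than silently dropped or included.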
Phase 4: Content Assembly
Verified findings are cleaned of internal search artifacts and structured into a consistent article format. The assembler strips raw API metadata, simplifies overly complex tables, maps content to predefined section headings, and deduplicates at the paragraph level to prevent repeated information. The output is Markdown with YAML frontmatter validated against strict schemas.
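Paragraph-level deduplication is the simplest of these assembly steps to show. A minimal sketch, assuming paragraphs are separated by blank lines and that whitespace and case differences should not defeat the match:

```python
def dedupe_paragraphs(markdown: str) -> str:
    """Drop repeated paragraphs, keeping the first occurrence.
    Whitespace and case are normalized so near-identical repeats
    collapse to one copy."""
    seen, kept = set(), []
    for para in markdown.split("\n\n"):
        key = " ".join(para.lower().split())
        if key and key in seen:
            continue
        seen.add(key)
        kept.append(para)
    return "\n\n".join(kept)
```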
Phase 5: Community Data Integration
For peptides with significant community discussion, a separate layer aggregates protocols and outcomes reported on public forums (primarily Reddit communities like r/Peptides and r/PeptideResearch). This content is always clearly labeled as anecdotal and is kept strictly separate from clinical evidence. Community data receives its own evidence tier ("anecdotal", "structured-community", or "preliminary") and never upgrades the clinical evidence rating of a peptide.
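That separation can be expressed directly in the data model: community data lives on its own field and never writes to the clinical rating. The class and field names below are hypothetical, chosen only to illustrate the rule.

```python
from dataclasses import dataclass
from typing import Optional

COMMUNITY_TIERS = {"anecdotal", "structured-community", "preliminary"}

@dataclass
class PeptideProfile:
    name: str
    clinical_evidence: str                # set only from published literature
    community_tier: Optional[str] = None  # tracked on a separate axis

    def attach_community_tier(self, tier: str) -> None:
        """Record community data on its own tier; the clinical rating
        is deliberately never touched here."""
        if tier not in COMMUNITY_TIERS:
            raise ValueError(f"unknown community tier: {tier!r}")
        self.community_tier = tier
```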
What We Don't Do
Transparency means being clear about what this system cannot do. These are real limitations, not disclaimers written to satisfy a legal requirement.
- No human editorial judgment is applied. The pipeline selects, organizes, and presents published findings. It does not evaluate whether a study was well-designed, whether the sample size was adequate, or whether the conclusions are justified by the data. It reports what the literature says — not what it should say.
- No medical professional reviews articles before publication. Content is generated, validated against schemas, and published by software. There is no physician, pharmacist, or scientist reading each article before it goes live.
- No treatment recommendations are made. This site presents research findings. It does not tell you what to take, how much to take, or whether you should take anything at all. That is a conversation between you and a qualified healthcare provider.
- AI interpretation errors are possible. Language models can misattribute findings, conflate similar compounds, or misinterpret statistical results. The citation verification phase catches many of these errors, but not all.
- No clinical experience informs this content. The pipeline has access to published literature — not to patient outcomes, clinical intuition, or the practical realities of peptide use. Published research tells one part of the story.
Why This Approach Has Value
Automated research curation has genuine advantages over traditional health content production — not because AI is better than humans, but because the methodology addresses specific problems in how health information is typically created.
- Systematic coverage. A typical health content writer reads 5 to 10 papers before writing an article. This pipeline searches 30 to 80 papers per peptide profile across multiple targeted queries. It is more likely to surface relevant findings that a manual process would miss.
- No financial conflicts. This site has no vendor affiliations, no pharmaceutical sponsors, and no advertising relationships. Content is not shaped by who is paying for it.
- Verifiable claims. Every factual claim links to its PubMed source. You do not have to take our word for anything — you can check the original paper.
- Consistent methodology. Every peptide profile is built through the same pipeline with the same query structure. There is no variation in rigor based on which writer was assigned the article or how much time they had.
- Updatable. When new research is published, the pipeline can be re-run against current PubMed data. The entire database can be refreshed systematically rather than relying on someone remembering to update a specific page.
Verify It Yourself
Every factual claim in a peptide profile links to its PubMed source. Click the citation. Read the abstract. If the abstract does not support what we wrote, that is an error on our part, and we want to know about it.
We encourage readers to treat this site the way they should treat any health information source: check the sources. The difference is that we actually provide them.
Found an error? A broken citation, a misattributed finding, or a claim that does not match its source? Contact us at contact@peptideprotocolwiki.com. Every report improves the database for everyone.
Frequently Asked Questions
Should I trust health information generated by AI?
You should evaluate any health information by checking its sources — regardless of whether it was written by a human or generated by software. A well-sourced article produced by an automated pipeline is more useful than a poorly sourced article written by a credentialed author. We provide PubMed citations for every factual claim so you can verify them directly.
Why not hire doctors to write and review content?
We invested in systematic methodology over credentialing. A physician reviewing an article can catch errors that automated checks miss, but a single reviewer will also read far fewer source papers than the pipeline searches. Our approach is to make the methodology transparent and the citations checkable, so the content can be evaluated on its merits. Compare our citation density and source quality against any competitor in this space.
Is this FDA-approved information?
No. Most peptides covered on this site are research compounds that have not received FDA approval for therapeutic use. This site presents published research findings — it does not provide medical advice, treatment recommendations, or FDA-approved prescribing information. Always consult a qualified healthcare professional before making any health decisions.
How current is the research data?
Each article displays a "last updated" date in its frontmatter. The pipeline can be re-run at any time to incorporate newly published research. However, there is always some lag between when a paper is published on PubMed and when it appears in our profiles.
What about the community data sections?
Community data is aggregated from public forums and is clearly labeled as anecdotal. It provides context about how peptides are discussed and used in practice, but it is not clinical evidence. Community evidence levels are tracked separately and never influence a peptide's clinical evidence rating. See our editorial policy for more on how we distinguish evidence tiers.
Medical Disclaimer
This website is for educational and informational purposes only. The information provided is not intended to diagnose, treat, cure, or prevent any disease. Always consult with a qualified healthcare professional before using any peptide or supplement.