What's Inside Each Issue

Each week, we release a 10–20 minute audio summary of the newsletter. It's a fast way to stay current if you prefer listening over reading — perfect for commutes or quick breaks.

Each newsletter includes a link to a NotebookLM workspace — think of it as a chatbot grounded in that week's content. You can:

  • Ask clarifying questions about the articles
  • Explore definitions for unfamiliar terms
  • Dig deeper into methods, implications, or relevance to your specific role

It's an interactive way to explore the week's findings beyond the written summaries.

Some articles are designated as highlights — click on these on our website to get a journal club–style breakdown that explains key findings, methods, and limitations in more depth.


Tips for Using the Newsletter

1. Disclaimer: Most of This Is AI-Generated

Almost every part of the newsletter involves AI: article selection, summarization, and even the podcast.

The studies themselves are real, but AI can sometimes:

  • Misstate findings
  • Miss important caveats
  • Overstate significance

The podcast in particular can sound more confident than warranted.

Treat it as a summary and conversation starter, not a definitive interpretation.

2. Use NotebookLM to Fill in the Gaps

If something seems unclear or you want to explore implications for your own work, the NotebookLM link is your best tool.

Because it's built specifically from that week's content, it's ideal for asking follow-up questions or exploring how the research might apply in your context.

3. Use Tags to Quickly Understand the Study Type

Each article is tagged to help you instantly understand its focus — whether it's a new model, a validation study, or real-world deployment.

Here's our tagging framework:

  • Model Development: Creation of a new AI model (predictive or generative), usually including internal validation on held-out data from the same source.
  • External Validation: Testing model performance or outputs on data from a different institution, geography, or population to assess generalizability.
  • Prospective Silent Evaluation: Running the model in real time ("silent mode" or "shadow deployment") without influencing care, to assess forward-looking performance.
  • Real-World Deployment: Evaluation when the AI system is integrated into workflows and influences decisions, communication, or documentation. Focuses on adoption, usability, and behavior change.
  • Process Outcomes: Outcomes beyond accuracy that measure workflow, efficiency, or system-level impact.
  • Clinical Outcomes: Outcomes directly reflecting patient health, safety, or quality of care.
  • Reviews & Perspectives: Articles without new empirical data that synthesize or reflect on the field (reviews, editorials, ethical/regulatory perspectives, policy analysis).
  • Guidelines & Standards: Formal consensus statements, reporting guidelines, or frameworks for how AI in medicine should be studied, reported, or regulated.
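If you keep your own reading log or scripts around the newsletter, the framework maps naturally onto a small controlled vocabulary. The sketch below is purely illustrative: the enum names, the article structure, and the filter_by_tag helper are assumptions for the example, not part of the newsletter or its website.

```python
from enum import Enum


class StudyTag(Enum):
    """The newsletter's article tags as a controlled vocabulary (hypothetical encoding)."""
    MODEL_DEVELOPMENT = "Model Development"
    EXTERNAL_VALIDATION = "External Validation"
    PROSPECTIVE_SILENT_EVALUATION = "Prospective Silent Evaluation"
    REAL_WORLD_DEPLOYMENT = "Real-World Deployment"
    PROCESS_OUTCOMES = "Process Outcomes"
    CLINICAL_OUTCOMES = "Clinical Outcomes"
    REVIEWS_AND_PERSPECTIVES = "Reviews & Perspectives"
    GUIDELINES_AND_STANDARDS = "Guidelines & Standards"


def filter_by_tag(articles, tag):
    """Return only the articles carrying the given tag.

    Assumes each article is a dict with a "tags" list, e.g.
    {"title": "...", "tags": [StudyTag.EXTERNAL_VALIDATION]}.
    """
    return [article for article in articles if tag in article.get("tags", [])]


# Example usage with placeholder entries in a personal reading log.
reading_log = [
    {"title": "Example article A", "tags": [StudyTag.EXTERNAL_VALIDATION]},
    {"title": "Example article B", "tags": [StudyTag.REVIEWS_AND_PERSPECTIVES]},
]
print(filter_by_tag(reading_log, StudyTag.EXTERNAL_VALIDATION))
```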

4. Strengthen Your Skills in Evaluating AI Research

If you want to sharpen your ability to critically evaluate AI studies in healthcare, check out this primer:

From theory to practice: Evaluating AI in pharmacy (AJHP, 2024)


Final Thought

AI helps us discover and summarize faster — but the real value comes from how you interpret it.

Use the tools, explore the NotebookLM, question the summaries, and stay curious.