Use cases

The system is versatile and adapts to different professional contexts. This chapter presents concrete scenarios that illustrate how to leverage the integration, including two documented real-world cases.

Legal and contractual analysis

Legal analysis requires absolute source fidelity: every statement must be anchored to a specific document, citations must be precise, and documentation gaps must be explicitly declared.

With the system, you can load rulings, contracts, or regulations into specialized notebooks and query them with targeted requests:

Extract all liability clauses from the contract, citing the page number for each one. If a standard clause is not present, flag it explicitly.

The automatic structuring adds the necessary constraints: exclusive use of sources, mandatory citations, declaration of missing information. The result is an analysis that clearly distinguishes between what the documents contain and what they don't.
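As a minimal sketch of what this automatic structuring might look like, the function below wraps a raw question with the three constraints the chapter names. The function name, constraint wording, and prompt layout are illustrative assumptions, not the system's actual implementation.

```python
# Hypothetical sketch: wrapping a free-form question with grounding
# constraints (exclusive use of sources, mandatory citations, explicit
# declaration of missing information). Names and wording are assumptions.

GROUNDING_CONSTRAINTS = [
    "Answer using ONLY the documents loaded in the notebook.",
    "Cite the source document and page/section for every statement.",
    "If the requested information is not in the sources, say so explicitly.",
]

def structure_prompt(raw_question: str) -> str:
    """Turn a free-form question into a source-grounded prompt."""
    rules = "\n".join(f"- {rule}" for rule in GROUNDING_CONSTRAINTS)
    return f"{raw_question}\n\nConstraints:\n{rules}"

prompt = structure_prompt(
    "Extract all liability clauses from the contract, citing the page number."
)
print(prompt)
```

The point of the sketch is only that the constraints travel with every query automatically, so the user never has to remember to restate them.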

Research and scientific literature

For those working with scientific publications, the system offers a structured way to conduct literature reviews:

What methodology did the authors use in the different studies? Cite the specific section of each paper.

What limitations are explicitly mentioned by the authors?

Are there contradictions between the results of the different studies?

Each response is grounded in the documents with citations that allow tracing back to the original source. This is particularly useful when preparing a systematic review or verifying consistency across different sources.

Technical documentation and training

When working with extensive manuals, such as the complete documentation of a software product, the system lets you extract specific information without reading hundreds of pages.

An example: an intentionally generic request submitted to the notebook containing the DaVinci Resolve 20 manuals, seven manuals totaling thousands of pages:

List the AI-based features of DaVinci Resolve.

Despite the simplicity of the question, the automatic structuring transformed it into a prompt with operational constraints, a structured output format, and instructions for handling missing information. The result was an organized catalog of AI features, each with name, description, usage context, and direct quotes from the documentation, with no need to intervene and correct inaccuracies.

The case is described in detail in the How it works chapter, where the automatically generated structured prompt is also shown.

A real case: comparative analysis of AI case law

In this documented real-world case, the objective was to analyze the evolution of case law on generative artificial intelligence, comparing the cases collected in a book with two subsequent rulings not included in the volume.

The context

The material was organized in two separate notebooks: one containing a book with a collection of case law, the other with two recent rulings. The separation into different notebooks is one of the points where the integration shows concrete advantages over direct NotebookLM use, which doesn't allow querying multiple notebooks in the same session.
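The multi-notebook capability can be pictured as a session that keeps several notebooks addressable by name and routes each query to one of them. `NotebookSession` and its methods are illustrative assumptions, with the backend stubbed out; the real integration exposes this through its own tools.

```python
# Hypothetical sketch: a session that routes queries to more than one
# notebook, the advantage the chapter cites over querying NotebookLM
# directly. NotebookSession and its methods are assumptions; the query
# backend is stubbed.

class NotebookSession:
    def __init__(self):
        self._notebooks: dict[str, list[str]] = {}

    def register(self, name: str, sources: list[str]) -> None:
        """Make a notebook addressable by name within this session."""
        self._notebooks[name] = sources

    def query(self, notebook: str, question: str) -> str:
        """Dispatch a question to one named notebook (stubbed here)."""
        if notebook not in self._notebooks:
            raise KeyError(f"Unknown notebook: {notebook}")
        sources = ", ".join(self._notebooks[notebook])
        return f"[{notebook}] answer grounded in: {sources}"

session = NotebookSession()
session.register("case-law-book", ["AI case law collection"])
session.register("recent-rulings", ["Ruling A", "Ruling B"])
print(session.query("recent-rulings", "What patterns recur in the reasoning?"))
```

Keeping both corpora registered in the same session is what makes the comparative phase described below possible without leaving the conversation.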

The working method

The work was organized in four phases with human review between each one. The clear initial definition of objectives, methodology, and expected outputs determined the quality of the entire process.

  • Phase 1: analysis of the first corpus. Claude queried the first notebook, extracting legal acts and identifying recurring patterns in legal reasoning. In this phase, the value of human review emerged: the analysis contained some inaccuracies that were corrected before proceeding. Without the intermediate verification, those errors would have propagated to subsequent phases.

  • Phase 2: analysis of the second corpus. With the method validated in the previous phase, Claude applied the same approach to the second notebook containing the recent rulings. The automatic management of switching between notebooks simplified the process.

  • Phase 3: comparative analysis. Claude integrated the results of the two previous analyses, identifying common patterns and divergences in legal reasoning across the different corpora. In this phase, the ability to work with multiple notebooks in the same conversation proved decisive.

  • Phase 4: output production. The work was synthesized into a structured markdown document and a PowerPoint presentation.
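The four phases can be sketched as a pipeline where a human review callback validates each phase's output before the next phase runs. Everything here is a toy model under stated assumptions: the phase payloads, the typo in phase 1, and the correcting review are invented to show why intermediate validation stops errors from propagating.

```python
# Hypothetical sketch of the phased workflow with human review between
# phases. The review callback can correct a phase's draft before the next
# phase consumes it; the data is invented for illustration.

from typing import Callable

Phase = Callable[[dict], dict]
Review = Callable[[str, dict], dict]

def run_pipeline(phases: list[tuple[str, Phase]], review: Review) -> dict:
    state: dict = {}
    for name, phase in phases:
        draft = phase(state)
        state[name] = review(name, draft)  # human validates before proceeding
    return state

phases = [
    # Phase 1 draft contains an inaccuracy ("patern B") on purpose.
    ("corpus-1", lambda s: {"patterns": ["pattern A", "patern B"]}),
    ("corpus-2", lambda s: {"patterns": ["pattern A", "pattern B", "pattern C"]}),
    ("compare", lambda s: {"common": set(s["corpus-1"]["patterns"])
                                     & set(s["corpus-2"]["patterns"])}),
]

def review(name: str, draft: dict) -> dict:
    # The human catches the phase-1 inaccuracy before it propagates.
    if name == "corpus-1":
        draft["patterns"] = ["pattern A", "pattern B"]
    return draft

result = run_pipeline(phases, review)
print(sorted(result["compare"]["common"]))  # ['pattern A', 'pattern B']
```

Without the correction in phase 1, the misspelled pattern would silently drop out of the comparative phase, which is exactly the propagation risk the chapter describes.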

What the case taught

The most significant element is the importance of human review at every step. The system produces source-grounded results, but interpretation and validation require domain expertise. The human-in-the-loop approach, where the system analyzes and the user validates, prevented errors from early phases from amplifying in later ones.

Another lesson concerns authentication — during the work, the Google authentication session expired. The server detected the expiration and renewed credentials automatically, without interrupting the workflow.
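Transparent renewal of an expired session can be sketched as a retry wrapper: if a call fails with an expiration error, renew the credentials and retry once. `AuthExpired`, `renew`, and the simulated backend are illustrative assumptions, not the server's actual code.

```python
# Hypothetical sketch of transparent credential renewal: on an expired
# session, renew and retry once instead of surfacing the error.
# AuthExpired and the simulated backend are illustrative assumptions.

class AuthExpired(Exception):
    pass

def with_auto_renewal(call, renew):
    """Run `call`; on an expired session, renew credentials and retry once."""
    try:
        return call()
    except AuthExpired:
        renew()
        return call()

# Simulated backend: the first call fails with an expired session.
state = {"valid": False}

def query():
    if not state["valid"]:
        raise AuthExpired("Google session expired")
    return "answer"

def renew():
    state["valid"] = True

print(with_auto_renewal(query, renew))  # prints "answer"
```

The single retry keeps the failure mode bounded: a renewal that does not fix the problem surfaces the error instead of looping.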

The approach is transferable to any context requiring structured document analysis: scientific literature reviews, market analyses, technical documentation audits, comparison of regulation versions.

When not to use the integration

The system is not the best choice in every case:

  • Generic questions: if the answer is general knowledge and doesn't require fidelity to specific sources, Claude alone is more than sufficient
  • Real-time information: for news or up-to-date data, web search is more appropriate
  • Short documents: a single document of just a few pages can be loaded directly into the Claude conversation, without needing to go through NotebookLM
  • Exploratory conversations: when you don't need grounding to specific sources but want to freely explore a topic