
Shinydocs AI Interface Quick Start

This guide covers everything you need to get Ollama and Shinydocs Pro working together. You’ll install the tools, set a couple of key environment variables, enable the right features in the Control Center, and be ready to start testing AI with your documents.

Shinydocs AI Interface is currently in early preview. Functionality and UI are subject to change and may differ slightly from this guide.

Prepare

1. Ollama Installation (Windows)

Make sure you have the latest NVIDIA GPU drivers installed for your graphics card: Download The Official NVIDIA Drivers | NVIDIA

  1. Download Ollama from https://ollama.com/

  2. Install Ollama

  3. Set your context length environment variable for Ollama

    1. In Windows, search for View advanced system settings and open it

    2. Click Environment Variables

    3. In the System variables section, click New

    4. Create a new variable:

      1. Variable name: OLLAMA_CONTEXT_LENGTH

      2. Variable value: 10240

        1. This will set Ollama to have a maximum context window of 10,240 tokens


    5. Click OK

    6. Click OK again

    7. Quit Ollama by right-clicking its icon in the system tray and selecting Quit Ollama

    8. Start Ollama again to pick up the new environment variable

    9. Verify Ollama is running by going to http://localhost:11434 in your browser

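On Linux or macOS installs (a hypothetical equivalent of the Windows dialog steps above, since Ollama reads the same environment variable on those platforms), the setting can be sketched from a shell:

```shell
# Set the maximum context window to 10,240 tokens for this shell session.
# Add this line to your shell profile (e.g. ~/.bashrc) to make it permanent.
export OLLAMA_CONTEXT_LENGTH=10240

# Confirm the value is set before starting Ollama
echo "$OLLAMA_CONTEXT_LENGTH"
```

After restarting Ollama, loading http://localhost:11434 in a browser (or running curl against it) should confirm the service is up.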

2. Shinydocs Pro Installation

  1. Install Shinydocs Pro with the provided Shinydocs Pro Bundle (e.g. shinydocs-pro-25.1.15.exe)

    1. An in-depth guide can be found on our Help Desk: Shinydocs Pro Control Center Guide

  2. Use the default options during the installation unless you need to install it on another drive.

    1. Click the Options button in the installer to change the location

  3. Launch Shinydocs Pro Control Center or go to https://localhost:9701 in your browser

  4. You will be prompted to upload your Shinydocs license. Upload it at this time to activate Shinydocs Pro

    1. AI features are not available on a trial license at this time

  5. Add a content source of your choice to Shinydocs Pro. For trying out the Shinydocs AI Interface, we recommend targeting a data source whose contents you know well and that is smaller than 20 GB. While repository size does not affect the AI features at this time, larger sources take longer to analyze before they are ready for AI interactions.

  6. Wait for the analysis to finish. This could take anywhere from a few minutes to several days, depending on the size of your repository. You will know it is complete when you see Content analysis completed successfully.


3. Pull some models to try with Shinydocs Pro

We highly recommend using nomic-embed-text as your embedding model. You can pull this from Ollama by running the following command:

ollama pull nomic-embed-text:latest

Check out Ollama’s available models (Ollama Models) to find ones you want to try. Different LLMs behave differently, and some are better suited to particular tasks. To get you started, we recommend the models below.

Ollama supported models for different use cases

You will need to make sure the model you download will fit into your GPU’s VRAM.


Gemma3 (Google DeepMind)

  Use cases: document Q&A, creative output, summarization, sentiment

  Pull commands:

  • ollama pull gemma3:4b

  • ollama pull gemma3:12b

  • ollama pull gemma3:27b

  Ollama library page: gemma3

Cogito

  Use cases: text classification, PII and sensitive info detection, pattern parsing, analysis-based questions, sentiment

  Pull commands:

  • ollama pull cogito:8b

  • ollama pull cogito:14b

  • ollama pull cogito:32b

  • ollama pull cogito:70b

  Ollama library page: cogito

Llama3.2 (Meta)

  Use cases: general conversation, document Q&A, summarization, creative non-technical prompts

  Pull commands:

  • ollama pull llama3.2:1b

  • ollama pull llama3.2:3b

  Ollama library page: llama3.2

Llama3.1 (Meta)

  Use cases: high-performance Q&A, structured enrichment, general enterprise analysis

  Pull commands:

  • ollama pull llama3.1:8b

  • ollama pull llama3.1:70b

  • ollama pull llama3.1:405b

  Ollama library page: llama3.1

Mistral (Mistral AI)

  Use cases: efficient instruction following, general enrichment, fast summarization

  Pull commands:

  • ollama pull mistral:7b

  Ollama library page: mistral

Phi-4 (Microsoft)

  Use cases: complex context understanding, compliance QA, technical/scientific summary and analysis

  Pull commands:

  • ollama pull phi4:14b

  Ollama library page: phi4

Phi-3 (Microsoft)

  Use cases: lightweight structured enrichment, low-resource Q&A, efficient summarization

  Pull commands:

  • ollama pull phi3:3.8b

  Ollama library page: phi3

SmolLM2

  Use cases: basic document Q&A, structured extraction on small text bodies, low-resource summarization

  Pull commands:

  • ollama pull smollm2:135m

  • ollama pull smollm2:360m

  • ollama pull smollm2:1.7b

  Ollama library page: smollm2

Qwen 3 (Alibaba)

  Use cases: multi-lingual support, complex text, math/numbers-based insights, code understanding

  Pull commands:

  • ollama pull qwen3:0.6b

  • ollama pull qwen3:1.7b

  • ollama pull qwen3:4b

  • ollama pull qwen3:8b

  • ollama pull qwen3:14b

  • ollama pull qwen3:30b

  • ollama pull qwen3:32b

  • ollama pull qwen3:235b

  Ollama library page: qwen3

IBM Granite 3.3

  Use cases: complex reasoning, compliance QA, larger (128k+ token) context windows

  Pull commands:

  • ollama pull granite3.3:2b

  • ollama pull granite3.3:8b

  Ollama library page: granite3.3

DeepSeek-R1

  Use cases: detailed compliance QA, analytical document workflows

  Notes: thinking model (responds with <think> tags to show its reasoning); responses are slower since they usually contain more text

  Pull commands:

  • ollama pull deepseek-r1:1.5b

  • ollama pull deepseek-r1:7b

  • ollama pull deepseek-r1:8b

  • ollama pull deepseek-r1:14b

  • ollama pull deepseek-r1:32b

  • ollama pull deepseek-r1:70b

  • ollama pull deepseek-r1:671b

  Ollama library page: deepseek-r1
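As a sketch, a starter set of models can be pulled in one loop from your terminal. The model list here is a hypothetical selection (the recommended embedding model plus two small chat models); swap in whatever fits your GPU’s VRAM:

```shell
# Hypothetical starter set; adjust the list to whatever fits your GPU's VRAM
MODELS="nomic-embed-text:latest gemma3:4b llama3.2:3b"

for m in $MODELS; do
  # "|| echo" keeps the loop going if Ollama is not installed or a pull fails
  ollama pull "$m" || echo "could not pull $m (is Ollama installed and running?)"
done

# Show what is now available locally
ollama list || echo "Ollama CLI not found on PATH"
```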

4. Configure Shinydocs Pro AI interface with Ollama

  1. Enable Streamlined Search and AI in /flags

    1. Open Shinydocs Pro Control Center in your web browser (https://localhost:9701)

    2. Once Shinydocs Pro Control Center is visible, change the URL in your browser to https://localhost:9701/flags
      IMPORTANT: The flags page contains settings that could break your Shinydocs Pro install. Only change values on this page when instructed to by Shinydocs Support, like in this guide.

    3. Enable the following flags:

      1. Enable Search App Selection

      2. AI

    4. Disable the following flags:

      1. Streamlined Search Permission Checking

        1. This feature is coming soon!

  2. Switch from Enterprise Search to Streamlined Search

    1. Go to https://localhost:9701/settings/search

      1. Or in Shinydocs Control Center, click Settings > Search

    2. Select Streamlined Search as the search product to use

    3. Click Save changes

  3. Configure the AI Interface in Shinydocs Pro

    1. Go to Shinydocs Pro Control Center > Settings > AI

    2. For AI Engine, select Ollama from the drop-down

    3. Set the following (depending on your environment and which models you downloaded):

      1. API endpoint: http://localhost:11434
        This is the Ollama endpoint that will be used

      2. Text model: yourmodel:parameters
        e.g. gemma3:12b

      3. Embedding model: nomic-embed-text:latest
        We highly recommend using nomic-embed-text

    4. Click Save changes

    5. You can come back here to change your models at any time
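The values entered above can be sanity-checked against Ollama’s REST API. This sketch assumes the default endpoint and example model names, so substitute your own:

```shell
# Values as configured in Settings > AI (examples; use your own)
ENDPOINT="http://localhost:11434"
TEXT_MODEL="gemma3:12b"
EMBED_MODEL="nomic-embed-text:latest"

# /api/tags lists every model Ollama has pulled; both models above
# should appear in its output if the configuration will work.
curl -s "$ENDPOINT/api/tags" || echo "Ollama is not reachable at $ENDPOINT"
```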

5. Try Streamlined Search with AI

Streamlined search is not production-ready and has no permission checking capabilities at this time.

Now that everything is set up, you will be able to use the LLM to converse with your documents in Search.

Once Streamlined Search is enabled (done above), you can open it by clicking the Search button in the bottom left of Shinydocs Control Center, or by going to https://localhost:9701/search

  1. Perform a search for files you know exist in the source you analyzed

  2. Click on a result’s file name to bring up the info panel, and click Ask AI

  3. The chat window will open, and you will be able to ask any questions about the file’s contents

  4. Experiment with different models and prompts to see what works best for your data!

Helpful tips

Ollama commands

The following Ollama commands can be run in your terminal (e.g. cmd) to get helpful information:

  • ollama list

    • Returns all models Ollama has downloaded and available to use

  • ollama ps

    • Returns the models currently loaded in memory and whether they are bound to CPU, GPU, or split across both

    • Ideally, all models should be bound to GPU when running
