
Shinydocs Private AI

Artificial Intelligence is no longer just hype. It’s starting to change how we actually work, especially when it comes to understanding and organizing massive volumes of unstructured content. For most enterprise teams, it’s not about science fiction. It’s about using AI to get real answers from your data, reduce time spent digging through documents, and make better decisions with less manual effort.

The recent breakthroughs in Large Language Models (LLMs) have made this possible. These models don’t just match keywords or apply rules. They actually understand the structure and meaning of language. They can read a paragraph, understand what it’s saying, and respond in a way that makes sense. This is already changing everything from legal review and records management to how we search for and interact with content day to day.

This shift is only going to accelerate. LLMs are now small enough and fast enough to run in-house. That’s where Ollama comes in.

What Is Ollama?

Shinydocs is not affiliated with Ollama. While Shinydocs Pro supports connecting to Ollama for local LLM capabilities, the setup, management, and maintenance of the Ollama environment, including model selection and hardware requirements, are entirely the responsibility of your organization. Shinydocs simply provides a secure way to integrate with the LLM engine you choose, and can assist in selecting the best model and prompt for your use cases.

Ollama is a tool that lets you run open-source LLMs locally on your own hardware. You can use models like LLaMA 3, Mistral, or Phi without sending any data to the cloud. It keeps everything fast, private, and under your control.

Ollama gives you local control over the model and lets you build secure, scalable workflows without handing your documents over to a third party.

We’ve integrated Shinydocs Pro directly with Ollama, so if you’re running a local instance, we can connect to it and start analyzing your documents right away. OpenAI is also supported, if that’s your preferred route.
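To make the connection concrete: a local Ollama instance exposes a small REST API, by default on port 11434, which is what integrations like this talk to. The sketch below is a minimal illustration, assuming a default Ollama install with a model such as llama3 already pulled; the model name and prompt are placeholders, and this is not Shinydocs Pro's actual client code.

```python
import json
import urllib.request

# Default endpoint for a local Ollama install (assumption: standard port)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(model: str, prompt: str) -> str:
    """Send a prompt to a local Ollama instance and return its reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance):
# print(ask_ollama("llama3", "Summarize this document in one sentence: ..."))
```

Because everything stays on localhost, the document text in the prompt never leaves your hardware.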

How Shinydocs Pro Uses LLMs to Work With Your Content

Shinydocs Pro now includes what we call Private AI. It gives you two powerful ways to use LLMs in your environment: document enrichment and chat.

Document Enrichment

During enrichment, we analyze your documents using an LLM and apply structured outputs as tags. This is where the model reads the content and gives us meaningful results that go straight into your index. For example:

  • Assigning a top-level category like HR, Legal, or Finance

  • Identifying retention timelines such as “7 years” or “permanent”

  • Flagging if a document contains sensitive PII

  • Generating a short, plain-English summary of a document (or a summary in another language, with models that support it)

These are applied as tags in your index so they can be searched, filtered, and reported on just like anything else.
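In principle, enrichment like this boils down to asking the model for structured output and mapping the fields onto index tags. The sketch below shows that pattern; the prompt wording, field names, and `parse_enrichment` helper are illustrative assumptions, not Shinydocs Pro's actual implementation.

```python
import json

# Assumed prompt template: ask the model for JSON with fixed keys
ENRICHMENT_PROMPT = (
    "Read the document below and reply with JSON only, using the keys "
    '"category", "retention", "contains_pii", and "summary".\n\n'
    "Document:\n{text}"
)

def build_enrichment_prompt(text: str) -> str:
    """Fill the document text into the enrichment prompt template."""
    return ENRICHMENT_PROMPT.format(text=text)

def parse_enrichment(reply: str) -> dict:
    """Turn the model's JSON reply into flat tags for the index."""
    data = json.loads(reply)
    return {
        "category": data.get("category", "Unknown"),
        "retention": data.get("retention", "Unspecified"),
        "contains_pii": bool(data.get("contains_pii", False)),
        "summary": data.get("summary", ""),
    }

# A model reply like this would become searchable tags on the document:
sample_reply = (
    '{"category": "HR", "retention": "7 years", '
    '"contains_pii": true, "summary": "An offer letter."}'
)
tags = parse_enrichment(sample_reply)
```

Once flattened into tags like these, the results can be filtered and reported on exactly like any other index field.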

Chat With Your Documents

In Shinydocs Streamlined Search, you can now select one or more documents and start a conversation. Ask a question in plain language, and the LLM will read the document and give you a contextual answer. You can compare multiple documents, summarize them, extract decisions or dates, and more.

It doesn’t matter if you’re using Ollama or OpenAI behind the scenes. The experience is the same, and the goal is simple: help you get answers faster without opening five PDFs and reading through a wall of text.
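Under the hood, chatting with selected documents generally amounts to placing their text in the model's context alongside your question. The sketch below, with an invented `build_chat_prompt` helper, shows the general shape of that step; it is not the exact prompt Shinydocs Pro uses.

```python
def build_chat_prompt(documents: dict[str, str], question: str) -> str:
    """Combine the selected documents and a user question into one prompt."""
    parts = ["Answer the question using only the documents below."]
    for name, text in documents.items():
        parts.append(f"--- {name} ---\n{text}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

# The same prompt works whether the backend is Ollama or OpenAI:
prompt = build_chat_prompt(
    {
        "contract.pdf": "Term: 24 months...",
        "renewal.pdf": "Renewal notice due 90 days before expiry...",
    },
    "Which document covers the renewal notice period?",
)
```

Because the backend only ever sees a text prompt, swapping Ollama for OpenAI (or vice versa) leaves this step unchanged.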

Why It Matters

This isn’t just AI for the sake of AI. It’s practical and focused on solving real problems that legal teams and municipalities face every day. Whether you’re classifying records, reviewing contracts, enforcing retention rules, or just trying to get a handle on what’s in your shared drives, this adds a new level of speed and clarity to the process.

It’s fast, private, and flexible. You decide how and where to run it.

We’re excited to bring this to you, and there’s more coming. If you want help setting it up or ideas for how to use it in your environment, reach out to us or check out the configuration guide.
