KEC Mentor Network Guest Blog Post: Marcus Blair

The Knoxville Entrepreneur Center is pleased to introduce you to some of the business experts in our KEC Mentor Network. Each month we will highlight one of our mentors and invite them to share some of their expertise with our wider audience. You can apply to work with our roster of more than 25 mentors HERE.

Why Your Business Should Scrap ChatGPT and Host Your Own LLM

By Marcus Blair, Omega Business Solutions

Yes, yes, I know – more content about generative AI. It’s everywhere right now. But here’s the thing: most discussions focus on just three major Large Language Models (LLMs): ChatGPT, Gemini, and Claude. Everyone has an opinion on how to use them, how to integrate them, and what they mean for business.

Until now, interacting with AI has mostly meant logging into one of these platforms or building a product via their API. For an individual, $20 a month is a great deal. All three are like the Swiss Army knives of Generative AI (Gen AI), offering a little bit of everything. And they easily handle most common tasks.

But businesses don’t need a one-size-fits-all tool. They need AI that understands their industry, workflows, culture, and challenges. Public LLMs lack domain-specific knowledge, and using proprietary data with them raises serious security and compliance risks.

This is why the future belongs to locally hosted, fine-tuned LLMs built for specific businesses and their unique needs. When you own the model, you control it; no more security concerns about uploading sensitive data. It’s as secure as any internal system, whether on your network or in your cloud, meaning it can pass compliance audits. And with more than 1.4 million models available on Hugging Face, there’s an AI solution for nearly every use case (admittedly, there is some redundancy there, but the point stands).
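To make that concrete, here’s a minimal sketch of what running your own model can look like, using Python and Hugging Face’s transformers library. The model name is just an illustrative pick from that catalog, and this is a quick local trial, not a production deployment:

```python
# Minimal local-inference sketch: the weights download once, then every
# prompt and response stays on your own hardware.
from transformers import pipeline

# Illustrative model choice; any open instruct model on Hugging Face works.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

messages = [{"role": "user", "content": "Draft a friendly reminder email about timesheets."}]
result = generator(messages, max_new_tokens=150)
print(result[0]["generated_text"][-1]["content"])
```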

A window of opportunity has opened, and the companies that act now will gain a lasting competitive advantage. The next step? Dump public AI platforms and deploy a private model, one that moves your business forward without creating long-term obstacles. Start with an internal tool, and then, if it makes sense, expand its capabilities for external use.

Here are four reasons your organization should ditch ChatGPT and own your AI instead:

Data Security & Compliance

Data security is a top benefit when considering a locally hosted AI solution. There are multiple ways sensitive information can be compromised when uploaded to a public LLM, but let’s just focus on two of them.

First, public LLMs can memorize and unintentionally expose sensitive data in future outputs. This is particularly concerning for engineering and technical fields, where multiple, unrelated users may be working on similar problems, and for legal applications, where confidentiality is critical. Real-world cases highlight this risk: Samsung employees accidentally leaked confidential documents via ChatGPT. Similarly, Amazon employees discovered ChatGPT responses that closely resembled internal company data. As a result, both companies restricted the use of public AI tools, and Samsung opted to develop internal AI tools – a strategy we strongly recommend.

Second, AI providers retain user data for undisclosed periods and may use it for purposes that aren’t always transparent. Policies can change at any time, increasing the risk of sensitive information being exposed. One example is ChatGPT’s 2023 data leak, where users briefly saw other people’s chat histories. Additionally, reports suggest a potential breach affecting millions of user accounts, and OpenAI is still investigating as of this writing.

All of these risks can be eliminated by deploying a private AI model within your organization’s own network or cloud infrastructure. Businesses that take proactive steps now will not only safeguard their intellectual property but also position themselves for an immediate competitive advantage.

Vendor Lock-In

Gen AI is evolving at a breakneck pace; models that were cutting-edge a year ago are already outdated. This rapid innovation means businesses relying solely on one provider may find themselves slipping behind competitors leveraging more advanced models. With open-source tools, switching or even generating outputs from multiple models at once is as simple as selecting an option from a drop-down menu or clicking a plus sign.
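To show how low the switching cost can be: most open-source serving tools (Ollama, vLLM, and others) expose an OpenAI-compatible API, so changing models is a one-string edit. Here’s a rough sketch that assumes an Ollama server running on its default port, with illustrative model tags:

```python
# Sketch: swapping models behind one OpenAI-compatible local endpoint.
# Assumes a local Ollama server; vLLM and similar tools expose the same
# style of API. The model tags are illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

for model in ["llama3.1", "mistral", "qwen2.5"]:
    reply = client.chat.completions.create(
        model=model,  # the only thing that changes between models
        messages=[{"role": "user", "content": "Draft a two-line status update."}],
    )
    print(model, "->", reply.choices[0].message.content)
```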

Why would you need to change? Because different LLMs excel in different areas. The “Big 3” – ChatGPT, Gemini, and Claude – are highly capable generalists, but specialized models designed for tasks like document processing, image recognition, and audio applications consistently outperform them in those areas. The Big 3 cannot be downloaded, fine-tuned, or customized to fit any specific business needs. The best-case scenario is providing them with external data, but that’s not really an option either (see Point 1).

Maintaining flexibility is a key goal for any organization, and the best way to do this is to avoid becoming overly dependent on a single vendor. The tighter the integration, the harder it will be to break away if and when the market changes. To future-proof your AI strategy, deploy a private model within your own infrastructure using open-source tools that make switching easy.

Scale and Cost

Scale and cost are inextricably linked: the more AI gets used, the more expensive it becomes. AI providers all use some form of per-use charging (per token, per user, or per query), so as headcount or usage grows, costs will quickly rise as well.
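A back-of-the-envelope comparison makes the math visible. Every number below is an assumption for illustration, not a vendor quote:

```python
# Back-of-the-envelope comparison; all numbers are illustrative assumptions.
seats = 50            # employees using AI daily
per_seat = 20.0       # assumed $/user/month on a public LLM plan
gpu_server = 1500.0   # assumed $/month to rent a GPU server for a private model

subscription_cost = seats * per_seat   # grows with every hire
hosted_cost = gpu_server               # roughly flat as usage grows

print(f"Public platform: ${subscription_cost:,.0f}/mo vs. self-hosted: ${hosted_cost:,.0f}/mo")
print(f"Break-even near {gpu_server / per_seat:.0f} seats under these assumptions")
```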

The reality is that OpenAI, Google, and Anthropic are for-profit businesses, and building an LLM isn’t a profitable endeavor (yet). Prices have to increase at some point for the providers to stay financially viable. OpenAI is rumored to be planning to more than double its base-tier subscription cost by 2029, while reserving some features for higher-tier plans. Businesses and developers dependent on these providers will likely face rising costs that become difficult to predict and control.

Locally hosted, open-source models can be deployed without API costs, and they are rapidly closing the capability gap. A great example is Deep Research, which launched as a $20/month product from Google, was followed by OpenAI’s $200/month version, and was then replicated as a free, open-source project by Hugging Face. Clearly, the open-source world can more than keep pace with the proprietary providers.

Organizations that invest in their own private AI today can stay one step ahead of the curve without the spiraling costs that come with proprietary AI.

More Effective Agents

An AI agent is a semi-autonomous system that performs complex tasks by interacting with multiple data sources. Unlike rules-based automation, which follows fixed workflows, AI agents adapt dynamically using LLMs to perceive context and adjust their actions accordingly.

For instance, if a customer requests a refund, an AI agent could collect user details, verify CRM records, check invoice history, review refund policies, and determine eligibility. By combining LLM-driven reasoning with system integrations, AI agents can handle nuanced decision-making beyond traditional automation.
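Here’s a heavily simplified sketch of that refund flow. Every lookup below is a hypothetical stub standing in for a real system integration, and in practice an LLM would weigh the facts instead of the single hard-coded rule:

```python
# Heavily simplified refund-agent sketch. Every lookup is a hypothetical
# stub standing in for a real integration (CRM, billing, policy store).
def lookup_customer(email):
    # Stub: a real agent would query your CRM here.
    return {"email": email, "account_ok": True}

def fetch_invoice(invoice_id):
    # Stub: a real agent would query your billing system here.
    return {"id": invoice_id, "amount": 49.00, "days_since_purchase": 12}

def refund_policy():
    # Stub: a real agent would retrieve the current policy document here.
    return {"max_days": 30}

def handle_refund_request(email, invoice_id):
    customer = lookup_customer(email)
    invoice = fetch_invoice(invoice_id)
    policy = refund_policy()
    # A real agent would have an LLM weigh these facts, plus the customer's
    # free-text message, instead of this single hard-coded rule.
    eligible = customer["account_ok"] and invoice["days_since_purchase"] <= policy["max_days"]
    decision = "approved" if eligible else "denied"
    return f"Refund of ${invoice['amount']:.2f} {decision} for {customer['email']}"

print(handle_refund_request("pat@example.com", "INV-1042"))
```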

While AI agents can function with a proprietary LLM, they are far more effective with a privately hosted, fine-tuned model. A private, tuned LLM understands industry-specific language, retrieves relevant data efficiently, and makes more accurate decisions, reducing errors and improving contextual awareness. Proprietary LLMs have limited ability to develop deep domain expertise compared to privately hosted models. Companies that train and host their own models can achieve greater efficiency, lower error rates, and ultimately more successful projects with higher ROIs.

That’s Great. But Now What?

By now, the case for private AI is clear. Relying on public AI platforms introduces risks: data security concerns, vendor lock-in, unpredictable costs, and limited customization. These challenges don’t just create short-term headaches; they become long-term obstacles that restrict how businesses can use AI effectively.

Moving to a privately hosted AI model isn’t just about security or compliance. It’s about future-proofing your business. AI should be an asset you control, not a service that limits your flexibility. And with the technology evolving so rapidly, businesses that take control today will gain a lasting advantage while their competitors scramble to catch up later.

So, where do you start? The good news is that this shift doesn’t have to be complicated. Implementing private AI isn’t about reinventing the wheel: you don’t have to train your own model, but you will eventually choose one – or more than one – suited to your business needs and integrate it with the right data for maximum effectiveness.

Approach this as you would any other software deployment. First, identify your use case. Do you have a specialized task, or will this be a general-use system for things like a company-wide search engine, data visualization, or content generation? Start by asking your team if they’re already using a proprietary model (hint: they are). What tasks are they using it for, and how does it benefit them? Those will be the first functions to move in-house. Remember, the best projects are targeted at a specific need, not just “we need AI” or FOMO.

Second, decide who will own the implementation. While IT plays a role, the business team should drive AI adoption, ensuring it delivers measurable value, not just technical capabilities. Ultimately, AI must provide business value (read: dollars), and the best people to evaluate that are the business leaders.

Third, assess your data. Private AI is only as good as the data it accesses. Ensure your data is clean and accurate, but don’t let limited data stop you; private AI can still provide significant value even without large datasets. The ability to upload and analyze documents and integrate AI into existing systems is already a huge advantage. And if fine-tuning is needed, publicly available datasets can help fill the gaps.
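For document work specifically, a small retrieval layer is often the first quick win. Here’s a minimal sketch assuming the open sentence-transformers library and a short in-memory document list; in production, a vector database would replace that list:

```python
# Minimal retrieval sketch: embed internal documents, find the best match
# for a question, and hand that snippet to a locally hosted LLM as context.
from sentence_transformers import SentenceTransformer, util

docs = [  # stand-ins for your internal documents
    "Refunds are allowed within 30 days of purchase with a valid invoice.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model
doc_vectors = embedder.encode(docs)

question = "How long do customers have to request a refund?"
scores = util.cos_sim(embedder.encode(question), doc_vectors)[0]
print("Context to pass to the LLM:", docs[int(scores.argmax())])
```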

Fourth, decide where to host it. Whether on-premise or in a private cloud, your AI remains within your secure infrastructure, fully under your control. That means complete autonomy over security, compliance, and performance, and, most importantly, no third-party provider dictating the rules.
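Whichever location you choose, deployment usually means standing up an inference server inside your own perimeter. As one possibility, here’s a rough sketch using the open-source vLLM library; the model name is illustrative, and any open model you have rights to would work:

```python
# Sketch: serving an open model entirely inside your own infrastructure
# with the open-source vLLM library. The model choice is illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3")  # weights download once
params = SamplingParams(max_tokens=128, temperature=0.2)

outputs = llm.generate(["Summarize our onboarding checklist."], params)
print(outputs[0].outputs[0].text)
```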

AI is moving fast, and organizations that act now will seize a competitive advantage for the foreseeable future. AI should be an asset under your control, one that delivers value to your business on your terms, without vendor lock-in, unpredictable costs, or data privacy risks. Now is the time to turn AI into an asset and take control of your own future.

Want to learn more about what Marcus is up to?

Marcus will be joining us at KEC for an event on March 12th. He will be moderating a panel of AI experts as they discuss where AI is headed in the next 18 months—how smaller, specialized models and evolving chip technology will make LLMs more accessible than ever. The panel will also tackle the workforce implications: What new job opportunities will emerge? What skills will be in demand?

You can learn more and register at the link below: 

APPLY TO BE A MENTEE

Mentors can provide strength, guidance, and the wisdom to help early-stage businesses move forward with confidence and purpose. Take the next step forward in your business!