I’m excited to introduce my latest project, which you can find on GitHub here: Prompt Enhancer with Local LLM
While this project involves building a prompt enhancer on top of a local language model (it’s amazing to run one on an older laptop CPU!), the primary goal wasn’t to showcase advanced AI capabilities. Instead, I focused on gaining hands-on experience with Docker and Docker Hub, GitHub Actions, Python testing, and Rust programming. The project was less about AI prowess and more about using these tools to strengthen my development workflow and skill set.
Demonstration Video
Why This Project?
After years of bad press from hacks and data leaks, corporate employees face increased pressure to keep their company’s information safe. Many corporations have even taken the stance that employees must not use online LLMs, such as ChatGPT, for fear of what employees may share (trade secrets, strategies, upcoming plans, etc.). Some employees eager to use the latest AI tools find themselves unable to convince their security and legal teams to accept the risk.
This tool sidesteps those risks by letting users interact with an LLM entirely on their own machine. In addition, it tailors prompt enhancements to your purpose and industry: by answering seven short questions, users guide the tool toward improved outputs from the LLM. This is especially helpful for those less accustomed to LLMs, or with little training in prompt engineering.
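As an illustration of the idea, the seven answers can be folded into a structured prompt before it reaches the local model. This is only a sketch; the project’s actual questions, wording, and template are assumptions here, not the real implementation:

```python
# Hypothetical question set -- the project's real seven questions may differ.
QUESTIONS = [
    "What is your industry?",
    "What is the goal of your prompt?",
    "Who is the intended audience?",
    "What tone should the output have?",
    "What format should the output take?",
    "What background context should the model know?",
    "What should the model avoid?",
]

def enhance_prompt(answers: list[str], raw_prompt: str) -> str:
    """Fold the seven answers into a structured prompt for the local LLM."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("expected exactly one answer per question")
    # Pair each question with its answer as a bulleted profile block.
    context = "\n".join(f"- {q} {a}" for q, a in zip(QUESTIONS, answers))
    return (
        "You are assisting a user with the following profile:\n"
        f"{context}\n\n"
        f"Task: {raw_prompt}"
    )
```

The enhanced prompt simply prepends the user profile to the original request, which is one common way lightweight enhancers improve output quality without any model fine-tuning.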
Special Features
- 100% local, 100% private, and runnable without an internet connection
- Works on Windows, Mac, and Linux machines
- Runs on both:
  - CPU-only machines
  - GPU machines (detects and uses your GPU, whether from Apple, NVIDIA, or AMD)
- Open source
- Free
- Prompt enhancement built naturally into an easy 7-step process
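The CPU/GPU detection mentioned above could be approximated with a small standard-library check. This is a hedged sketch of one possible approach, not the project’s actual detection logic:

```python
import platform
import shutil

def detect_accelerator() -> str:
    """Best-effort guess at an available accelerator.

    Illustrative only: looks for vendor CLI tools on the PATH and
    identifies Apple Silicon by platform, falling back to CPU.
    The real project may detect hardware differently.
    """
    if shutil.which("nvidia-smi"):       # NVIDIA driver utility present
        return "nvidia"
    if shutil.which("rocm-smi"):         # AMD ROCm utility present
        return "amd"
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "apple"                   # Apple Silicon (Metal-capable)
    return "cpu"
```

A check like this lets the application pick a GPU-accelerated backend when one is available and fall back to CPU-only inference otherwise.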
