Was the whole lib and website vibe coded? I can't find any instructions on how to use it, the repo is for the website itself and the readme is AI blurb that doesn't make me any wiser.
  // Test your AI system
  const results = await injector.runTests(yourAISystem);
???
Even the "prompt-injector" NPM package is something completely different. Does this project even exist?
What are some good prevention mechanisms for this? A sort of firewall for prompts? I've seen people recommend LLMs, but that seems like it wouldn't work well. What is the industry standard? Or what looks promising at least?
hoppp 2 hours ago [-]
Nothing yet.
Probably a new kind of model needs to be trained that can find injected prompts, sort of like an immune system for LLMs.
Then the sanitized data can be passed to the LLM after.
No real solution for it yet. I would be interested to try to train a model for this but no budget atm.
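The detect-then-sanitize pipeline described above can be sketched even without the trained model. Here a trivial pattern scorer stands in for the hypothetical classifier (a real version would be a fine-tuned model, not a regex list), and the gate function simply refuses to forward flagged text:

```javascript
// Stand-in for the proposed trained detector: a real version would be a
// fine-tuned classifier, not a hand-written pattern list.
const INJECTION_PATTERNS = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now/i,
  /system prompt/i,
];

// Flag untrusted text that matches any known injection pattern.
function looksInjected(text) {
  return INJECTION_PATTERNS.some((p) => p.test(text));
}

// Gate: only clean data is allowed through to the LLM call.
function sanitizeForLLM(untrustedText) {
  if (looksInjected(untrustedText)) {
    throw new Error("possible prompt injection, refusing to forward");
  }
  return untrustedText;
}
```

A pattern list like this is easy to evade, which is exactly why a trained classifier (the "immune system" idea) would be the interesting upgrade: swap `looksInjected` for a model call and the gate stays the same.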
doka_smoka 6 hours ago [-]
[dead]
HKayn 5 hours ago [-]
Why did you use something as heavy as SvelteKit for a website with a single page? This doesn't inspire confidence.