I'm tired of LLM bullshitting. So I fixed it. (codeberg.org)
from SuspciousCarrot78@lemmy.world to privacy@lemmy.ml on 22 Jan 2026 12:41
https://lemmy.world/post/41992574

[deleted by user]

#privacy


itkovian@lemmy.world on 22 Jan 2026 12:45 next collapse

Based AF. Can anyone more knowledgeable explain how it works? I'm not able to understand it.

[deleted] on 22 Jan 2026 12:48 collapse

.

itkovian@lemmy.world on 22 Jan 2026 12:52 collapse

As I understand it, it corrects the output of LLMs. If so, how does it actually work?

[deleted] on 22 Jan 2026 13:42 collapse

.

itkovian@lemmy.world on 22 Jan 2026 14:16 collapse

That is much clearer. Thank you for making this. It actually makes LLMs useful with far fewer downsides.

[deleted] on 22 Jan 2026 14:22 collapse

.

BaroqueInMind@piefed.social on 22 Jan 2026 12:54 next collapse

I have no remarks, just really amused with your writing in your repo.

Going to build a Docker image and self-host this shit you made and enjoy your hard work.

Thank you for this!

[deleted] on 22 Jan 2026 13:06 next collapse

.

Diurnambule@jlai.lu on 23 Jan 2026 06:09 collapse

Same sentiment. Tonight it runs on my systems XD.

[deleted] on 23 Jan 2026 07:02 collapse

.

Terces@lemmy.world on 22 Jan 2026 13:01 next collapse

Fuck yeah…good job. This is how I would like to see “AI” implemented. Is there some way to attach other data sources? Something like a local hosted wiki?

[deleted] on 22 Jan 2026 13:05 collapse

.

SpaceNoodle@lemmy.world on 22 Jan 2026 13:30 collapse

I wanna just plug Wikipedia into this and see if it turns an LLM into something useful for the general case.

[deleted] on 22 Jan 2026 13:50 collapse

.

MNByChoice@midwest.social on 22 Jan 2026 13:59 next collapse

Not OP, but random human.

Glad you tried the “YOLO Wikipedia” approach, and are sharing that fact, as it saves the rest of us time. :)

[deleted] on 22 Jan 2026 14:15 collapse

.

SpaceNoodle@lemmy.world on 22 Jan 2026 14:07 collapse

Yes please

[deleted] on 22 Jan 2026 14:32 collapse

.

angelmountain@feddit.nl on 22 Jan 2026 13:28 next collapse

Super interesting build

And if programming doesn’t pan out please start writing for a magazine, love your style (or was this written with your AI?)

[deleted] on 22 Jan 2026 13:51 collapse

.

Karkitoo@lemmy.ml on 22 Jan 2026 14:42 collapse

meat popsicle

( ͡° ͜ʖ ͡°)

Anyway, the other person is right. Your writing style is great !

I successfully read your whole post and even the README. Probably the random outbursts grabbed my attention back to the text.

Anyway (version 2), this is a very cool idea! I can’t wait to either:

  • incorporate it to my workflows
  • let it sit in a tab to never be touched ever again
  • theorycraft, run tests, and request features so much as to burn out

Last but not least, thank you for not using GitHub as your primary repo

[deleted] on 22 Jan 2026 14:56 collapse

.

Karkitoo@lemmy.ml on 22 Jan 2026 15:34 collapse

Don’t spam my GitHub inbox plz

I can spam your codeberg’s then ? :)

About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC… …sorry, sorry…

Understandable, have a great day.

[deleted] on 22 Jan 2026 15:41 collapse

.

als@lemmy.blahaj.zone on 22 Jan 2026 13:26 next collapse

Neat, but is this privacy related?

[deleted] on 22 Jan 2026 14:04 next collapse

.

FrankLaskey@lemmy.ml on 22 Jan 2026 14:06 collapse

Yes, because making locally hosted LLMs actually useful means you don’t need to use cloud-based, often proprietary models like ChatGPT or Gemini, which hoover up all of your data.

[deleted] on 22 Jan 2026 14:24 collapse

.

rollin@piefed.social on 22 Jan 2026 13:16 next collapse

At first blush, this looks great to me. Are there limitations on what models it will work with? In particular, can you use this on a lightweight model that will run in 16 GB of RAM to prevent it hallucinating? I’ve experimented a little with running ollama as an NPC AI for Skyrim; I’d love to be able to ask random passers-by if they know where the nearest blacksmith is, for instance. It was just far too unreliable, and worse, it was always confidently unreliable.

This sounds like it could really help these kinds of uses. Sadly I’m away from home for a while so I don’t know when I’ll get a chance to get back on my home rig.

[deleted] on 22 Jan 2026 14:09 collapse

.

rollin@piefed.social on 23 Jan 2026 00:57 collapse

I never knew LLMs could run on such low-spec machines now! That’s amazing. You said elsewhere you’re using Qwen3-4B (abliterated), and I found a page saying that there are Qwen3 models that will run on "Virtually any modern PC or Mac; integrated graphics are sufficient. Mobile phones"

Is there still a big advantage to using Nvidia GPUs? Is your card Nvidia?

My home machine that I’ve installed ollama on (and which I can’t access in the immediate future) has an AMD card, but I’m now toying with putting it on my laptop, which is very midrange and has Intel Arc graphics (which performs a whole lot better than I was expecting in games)

[deleted] on 23 Jan 2026 02:02 collapse

.

CIA_chatbot@lemmy.world on 22 Jan 2026 14:00 next collapse

Doing gods work

[deleted] on 22 Jan 2026 14:12 collapse

.

CIA_chatbot@lemmy.world on 22 Jan 2026 14:35 collapse

Friendship drive activated.

[deleted] on 22 Jan 2026 14:47 collapse

.

db0@lemmy.dbzer0.com on 22 Jan 2026 13:15 next collapse

Any chance you can also make it compatible with AI Horde?

[deleted] on 22 Jan 2026 14:09 collapse

.

db0@lemmy.dbzer0.com on 22 Jan 2026 14:24 collapse

In a nutshell: local LLMs, crowdsourced at scale.

FauxLiving@lemmy.world on 22 Jan 2026 19:50 collapse

AI Horde has an OpenAI-compatible REST API (oai.aihorde.net). They say it doesn’t support the full feature set of their native API, but it will almost assuredly work with this.

OP manually builds the OpenAI-API JSON payload and then uses the Python requests library to handle the request.

The fields they’re using match the documentation on oai.aihorde.net/docs

You would need to add a header with your AI Horde API key. It looks like that would only need to be done in router_fastapi.py, in call_model_prompt() (line 269) and call_model_messages() (line 303), and then everything else is set up according to the documentation
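A rough sketch of that header change, assuming the usual OpenAI-style conventions (this is not OP's actual code; the endpoint path, header name, and function names here are guesses to verify against oai.aihorde.net/docs):

```python
import os

def build_horde_headers(api_key: str) -> dict:
    # OpenAI-compatible APIs conventionally take "Authorization: Bearer <key>".
    # Check oai.aihorde.net/docs in case Horde's native "apikey" header is
    # expected instead (assumption).
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def call_horde(messages: list, model: str) -> str:
    """Send a chat completion to the OpenAI-compatible Horde endpoint."""
    import requests  # OP uses the requests library, per this thread

    resp = requests.post(
        "https://oai.aihorde.net/v1/chat/completions",
        json={"model": model, "messages": messages},
        # "0000000000" is AI Horde's anonymous key, used as a fallback
        headers=build_horde_headers(os.environ.get("AIHORDE_API_KEY", "0000000000")),
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

So the change really is just a headers dict passed into the two existing requests calls; nothing else in the payload should need to move.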

[deleted] on 23 Jan 2026 08:40 collapse

.

db0@lemmy.dbzer0.com on 23 Jan 2026 10:50 collapse

Very impressive. The only mistake on the third one is that the kudos are actually transferable (i.e. “tradable”), but we forbid exchanges for monetary rewards.

Disclaimer: I’m the lead developer of the AI Horde. I also like what you’ve achieved here, and I would be interested in whether we can promote this usage via the AI Horde in some way. If you can think of some integration or collaboration we could do, hit me up!

PS: While the OpenAI API is technically working, we still prefer people to use our own API, as it’s much more powerful (allowing people to use multiple models, filter workers, tweak more vars, and so on). If you would support our native API, I’d be happy to add a link to your software on our front page in the integrations area for LLMs.

[deleted] on 23 Jan 2026 11:14 collapse

.

SlimePirate@lemmy.dbzer0.com on 22 Jan 2026 13:08 next collapse

Voodoo is not magic btw, it was sullied by colonists

[deleted] on 22 Jan 2026 14:11 collapse

.

SlimePirate@lemmy.dbzer0.com on 22 Jan 2026 17:29 collapse

I think this one was done by France; not better, though

[deleted] on 22 Jan 2026 18:00 collapse

.

FrankLaskey@lemmy.ml on 22 Jan 2026 12:53 next collapse

This is very cool. Will dig into it a bit more later but do you have any data on how much it reduces hallucinations or mistakes? I’m sure that’s not easy to come by but figured I would ask. And would this prevent you from still using the built-in web search in OWUI to augment the context if desired?

[deleted] on 22 Jan 2026 16:19 next collapse

.

7toed@midwest.social on 23 Jan 2026 05:56 collapse

abliterated one

Please elaborate, that alone piqued my curiosity. Pardon me if I could’ve searched

[deleted] on 23 Jan 2026 06:34 collapse

.

SuspciousCarrot78@lemmy.world on 22 Jan 2026 13:13 collapse

[deleted by user]

Alvaro@lemmy.blahaj.zone on 22 Jan 2026 15:43 next collapse

I don’t see how it addresses hallucinations. It’s really cool! But seems to still be inherently unreliable (because LLMs are)

[deleted] on 22 Jan 2026 16:08 collapse

.

wolfrasin@lemmy.today on 22 Jan 2026 16:09 next collapse

Hey Human,

Thank you!

[deleted] on 22 Jan 2026 17:57 collapse

.

bilouba@jlai.lu on 22 Jan 2026 15:48 next collapse

Very impressive! Do you have benchmarks to test the reliability? A paper would be awesome, to contribute to the science.

[deleted] on 22 Jan 2026 15:58 collapse

.

sp3ctr4l@lemmy.dbzer0.com on 22 Jan 2026 20:09 next collapse

This seems astonishingly more useful than the current paradigm, this is genuinely incredible!

I mean, fellow Autist here, so I guess I am also… biased towards… facts…

But anyway, … I am currently uh, running on Bazzite.

I have been using Alpaca so far, and have been successfully running Qwen3 8B through it… your system would address a lot of problems I have had to figure out my own workarounds for.

I am guessing this is not available as a flatpak, lol.

I would feel terrible to ask you to do anything more after all of this work, but if anyone does actually set up a podman installable container for this that actually properly grabs all required dependencies, please let me know!

[deleted] on 23 Jan 2026 01:33 next collapse

.

sp3ctr4l@lemmy.dbzer0.com on 23 Jan 2026 02:29 collapse

Oh I entirely believe you.

Hell hath no wrath like an annoyed high-functioning autist.

I’ve … had my own 6 month black out periods where I came up with something extremely comprehensive and ‘neat’ before.

Seriously, bootstrapping all this is incredibly impressive.

I would… hope that you can find collaborators, to keep this thing alive in the event you get into a car accident (metaphorical or literal), or, you know, are completely burnt out after this.

… but yeah, it is… yet another immensely ironic aspect of being autistic that we’ve been treated and maligned as robots our whole lives, and then when the normies think they’ve actually built the AI from sci fi, no, turns out it’s basically extremely talented at making up bullshit and fudging the details and being a hypocrite, which… appalls the normies when they have to look into a hyperpowered mirror of themselves.

And then, of course, to actually fix this, it’s some random autist no one has ever heard of (apologies if you are famous and I am unaware of this), who is putting in an enormous amount of effort that… most likely, will not be widely recognized.

… fucking normies man.

[deleted] on 23 Jan 2026 03:02 collapse

.

Fmstrat@lemmy.world on 25 Jan 2026 02:51 collapse

No promises, but if I end up running this it will be by putting it in a container. If I do, then I’ll put a PR on Codeberg with a Docker Compose file (compatible with Podman on Bazzite).

@SuspciousCarrot78@lemmy.world

SuspciousCarrot78@lemmy.world on 25 Jan 2026 05:34 collapse

[deleted by user]

UNY0N@lemmy.wtf on 22 Jan 2026 20:51 next collapse

THIS IS AWESOME!!! I’ve been working on using an Obsidian vault and a Podman ollama container to do something similar, with VSCodium + Continue as middleware. But this! This looks to me like it is far superior to what I have cobbled together.

I will study your codeberg repo, and see if I can use your conductor with my ollama instance and vault program. I just registered at codeberg, if I make any progress I will contact you there, and you can do with it what you like.

On an unrelated note, you can download Wikipedia. It might work well in conjunction with your conductor.

en.wikipedia.org/…/Wikipedia:Database_download

[deleted] on 23 Jan 2026 01:22 collapse

.

Murdoc@sh.itjust.works on 23 Jan 2026 03:33 next collapse

I wouldn’t know how to get this going, but I very much enjoyed reading it and your comments and think that it looks like a great project. 👍

(I mean, as a fellow autist I might be able to hyperfocus on it for a while, but I’m sure that the ADHD would keep me from finishing to go work on something else. 🙃)

[deleted] on 23 Jan 2026 04:34 collapse

.

WolfLink@sh.itjust.works on 22 Jan 2026 20:22 next collapse

I’m probably going to give this a try, but I think you should make it clearer for those who aren’t going to dig through the code that it’s still LLMs all the way down and can still have issues - it’s just there are LLMs double-checking other LLMs work to try to find those issues. There are still no guarantees since it’s still all LLMs.

[deleted] on 23 Jan 2026 05:18 next collapse

.

skisnow@lemmy.ca on 23 Jan 2026 06:04 collapse

I haven’t tried this tool specifically, but I do on occasion ask both Gemini and ChatGPT’s search-connected models to cite sources when claiming stuff and it doesn’t seem to even slightly stop them bullshitting and claiming a source says something that it doesn’t.

[deleted] on 23 Jan 2026 06:59 collapse

.

7toed@midwest.social on 23 Jan 2026 06:00 next collapse

I really need this. Each time I try messing with GPT4All’s “reasoning” model, it pisses me off. I’m selective on my inputs, low temperature, local docs, and it’ll tell me things like tension matters for a coil’s magnetic field. Oh, and it spits out what I assume is unformatted LaTeX, so if anyone has an interface/stack recommendation, please let me know

[deleted] on 23 Jan 2026 06:02 collapse

.

7toed@midwest.social on 23 Jan 2026 06:05 next collapse

Okay, pardon the double comment, but I now have no choice but to set this up after reading your explanations. Doing what TRILLIONS of dollars hasn’t cooked up yet… I hope you’re ready, by whatever means you deem fit, for when someone else “invents” this

[deleted] on 23 Jan 2026 06:11 collapse

.

[deleted] on 23 Jan 2026 08:26 next collapse

.

Pudutr0n@lemmy.world on 23 Jan 2026 10:01 next collapse

re: the KB tool, why not just skip the LLM and do two chained fuzzy finds? (first on which knowledge base, then on the question keywords)
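Something like this with stdlib difflib, just to make the idea concrete (the KB layout below is a made-up stand-in, not the project's actual format):

```python
from difflib import get_close_matches

# Toy knowledge base: entry title -> entry text. Purely illustrative.
KB = {
    "coil winding": "Turns and current set a coil's field.\nWire tension does not.",
    "blacksmith locations": "The nearest blacksmith is in Whiterun.",
}

def fuzzy_lookup(query: str, cutoff: float = 0.3):
    # First fuzzy find: which KB entry does the query look like?
    titles = get_close_matches(query, list(KB), n=1, cutoff=cutoff)
    if not titles:
        return None  # refuse rather than guess
    # Second fuzzy find: which line inside that entry best matches?
    lines = KB[titles[0]].splitlines()
    hits = get_close_matches(query, lines, n=1, cutoff=0.0)
    return hits[0] if hits else None
```

No LLM involved, so it can't hallucinate; the trade-off is that it only retrieves literal text and can't rephrase or combine entries.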

[deleted] on 23 Jan 2026 11:26 collapse

.

ThirdConsul@lemmy.zip on 23 Jan 2026 09:38 next collapse

I want to believe you, but that would mean you solved hallucination.

Either:

A) you’re lying

B) you’re wrong

C) KB is very small

[deleted] on 23 Jan 2026 11:33 next collapse

.

Kobuster@feddit.dk on 23 Jan 2026 10:15 collapse

Hallucination isn’t nearly as big a problem as it used to be. Newer models aren’t perfect, but they’re better.

The problem addressed by this isn’t hallucination, it’s the training to avoid failure states. Instead of guessing (different from hallucination), the system forces a negative response. That’s easy, and any company, big or small, could do it; big companies just like the bullshit
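The "force a negative instead of guessing" gate can be caricatured in a few lines. This is a deliberately crude sketch: the substring check is a toy stand-in for whatever real grounding check the project does, and the function name and refusal string are made up.

```python
REFUSAL = "I don't know: nothing in the knowledge base supports that."

def grounded_answer(answer: str, sources: list) -> str:
    # An answer only passes through if some retrieved source contains it;
    # otherwise the gate forces a negative response instead of a guess.
    supported = any(answer.lower() in s.lower() for s in sources)
    return answer if supported else REFUSAL
```

The point is that the refusal path is ordinary deterministic code wrapped around the model, not a property of the model itself.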

SuspciousCarrot78@lemmy.world on 23 Jan 2026 12:07 collapse

[deleted by user]

rozodru@piefed.social on 26 Jan 2026 11:49 next collapse

soooo… if it doesn’t know something it won’t say anything, and if it does know something it’ll show sources… so essentially, if you plug this into Claude, it’s just never going to say anything to you ever again?

neat.

SuspciousCarrot78@lemmy.world on 26 Jan 2026 14:02 collapse

[deleted by user]

recklessengagement@lemmy.world on 24 Jan 2026 04:22 next collapse

I strongly feel that the best way to improve the useability of LLMs is through better human-written tooling/software. Unfortunately most of the people promoting LLMs are tools themselves and all their software is vibe-coded.

Thank you for this. I will test it on my local install this weekend.

SuspciousCarrot78@lemmy.world on 24 Jan 2026 08:45 collapse

[deleted by user]

termaxima@slrpnk.net on 23 Jan 2026 17:48 next collapse

Hallucination is mathematically proven to be unsolvable with LLMs. I don’t deny this may have drastically reduced it (or not; I have no idea).

But hallucinations will just always be there as long as we use LLMs.

SuspciousCarrot78@lemmy.world on 24 Jan 2026 02:32 collapse

[deleted by user]

floquant@lemmy.dbzer0.com on 23 Jan 2026 21:32 next collapse

Holy shit I’m glad to be on the autistic side of the internet.

Thank you for proving that fucking JSON text files are all you need and not “just a couple billion more parameters bro”

Awesome work, all the kudos.

SuspciousCarrot78@lemmy.world on 24 Jan 2026 02:17 collapse

[deleted by user]

PolarKraken@lemmy.dbzer0.com on 23 Jan 2026 21:25 next collapse

This sounds really interesting, I’m looking forward to reading the comments here in detail and looking at the project, might even end up incorporating it into my own!

I’m working on something that addresses the same problem in a different way, the problem of constraining or delineating the specifically non-deterministic behavior one wants to involve in a complex workflow. Your approach is interesting and has a lot of conceptual overlap with mine, regarding things like strictly defining compliance criteria and rejecting noncompliant outputs, and chaining discrete steps into a packaged kind of “super step” that integrates non-deterministic substeps into a somewhat more deterministic output, etc.

How involved was it to build it to comply with the OpenAI API format? I haven’t looked into that myself but may.

SuspciousCarrot78@lemmy.world on 24 Jan 2026 02:26 collapse

[deleted by user]

nagaram@startrek.website on 23 Jan 2026 12:33 next collapse

This + Local Wikipedia + My own writings would be sick

SuspciousCarrot78@lemmy.world on 23 Jan 2026 13:16 collapse

[deleted by user]

cypherpunks@lemmy.ml on 23 Jan 2026 14:05 next collapse

<img alt="Ron Burgundy (Will Ferrell) “I don’t believe you” gif meme" src="https://lemmy.ml/api/v3/image_proxy?url=https%3A%2F%2Fmedia1.tenor.com%2Fm%2FsKbXk3XNHksAAAAd%2Fi-dont-believe-you.gif">

SuspciousCarrot78@lemmy.world on 23 Jan 2026 14:53 collapse

[deleted by user]

Zexks@lemmy.world on 23 Jan 2026 13:35 next collapse

This is awesome. I’ve been working on something similar. You’re not likely to get much useful feedback from here, though; anything AI is bad by default here

SuspciousCarrot78@lemmy.world on 23 Jan 2026 13:39 collapse

[deleted by user]

pineapple@lemmy.ml on 23 Jan 2026 11:08 next collapse

This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.

Likely-accurate jokes aside, this will be a perfect match for my Obsidian vault, as well as for researching things much more quickly.

SuspciousCarrot78@lemmy.world on 23 Jan 2026 11:38 collapse

[deleted by user]

domi@lemmy.secnd.me on 23 Jan 2026 09:26 collapse

I have a Strix Halo machine with 128GB VRAM so I’m definitely going to give this a try with gpt-oss-120b this weekend.

SuspciousCarrot78@lemmy.world on 23 Jan 2026 11:34 collapse

[deleted by user]