i was initially hostile to the use of LLMs in programming, partly because of past experience with precursor text recomposition algorithms like markov chains and the absurd rubbish they generate. early and low-parameter models still do this, and it has become known as "hallucinations"
but a threshold has been reached with models that need near 1tb of fast memory to run, like most of the well known models. they can really do a lot of thinking now, and only need a little vigilance to catch the hallucinations that crop up and mess up the code. it's got to the point where i think it's fair to say that where 36% of output used to be hallucinations, it's probably down to around 20% or less now.
so, i'm gonna use it, and use it avidly: to implement things i have wished i could implement, to fork things and improve them, and to take old work i have done and complete it (indranet)
