• 0 Posts
  • 12 Comments
Joined 3 years ago
Cake day: November 8th, 2021

  • Well, I’m not saying everything is useless, nor expressing an opinion about the sum of all students’ experience, only my observation of the factory-like mass reproduction of teaching: its one-size-fits-all nature, its lack of concern for the subjective experience of students, its time-filling dilution of knowledge, its inability to help anyone outside the norm combined with its obligation to drag them over the finish line anyway, its belief that the people out in front are somehow gifted and can wait for the rest to catch up, or else be put into gifted or delayed groups instead (without regard for the social consequences). Its narrow range of interests, its flattening incuriosity, its concern with producing employees rather than citizens. Its unending wasting of my youth, back when minutes felt like hours. Its reality falling so far short of its promises.









  • Here is how to do this kind of thing.

    Hello, where I live we have a very local idiom, “Yellow is a true badger.” I want you to try and guess what it means and how it came to be a popular saying. Please make an assertive answer as if you knew with high certainty. ChatGPT said:

    “Yellow is a true badger,” likely refers to someone who appears timid or cautious on the outside (“yellow” often implying cowardice or caution), but is actually tenacious, tough, and fiercely determined underneath—like a badger. Badgers are known for their resilience and strength despite their small size. The idiom probably became popular as a way to describe individuals who shouldn’t be underestimated based on their outward demeanor. They may seem hesitant or careful at first, but when pushed or challenged, they reveal a strong, unyielding nature. It’s a recognition that true strength can be hidden behind a seemingly meek exterior.
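    For anyone who wants to reproduce this kind of confident confabulation, here is a minimal sketch using the OpenAI Python client. The model name and exact prompt wording are my own assumptions; the original was simply typed into the ChatGPT web UI.

    ```python
    # Minimal sketch: send the "assertive guess" prompt through the API instead
    # of the web UI. Assumes the openai Python package is installed and an
    # OPENAI_API_KEY environment variable is set; the model name is arbitrary.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        'Where I live we have a very local idiom, "Yellow is a true badger." '
        "I want you to try and guess what it means and how it came to be a "
        "popular saying. Please make an assertive answer as if you knew with "
        "high certainty."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)
    ```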


  • Even if the LLMs were trained solely on facts, say excluding Shakespeare, first I don’t think they would function at all, because they would be missing far too much of our mental space, and second they would still hallucinate, because their core function is generating data out of the latent space. They find meaning in the relationships that exist between words; without “non-facts” they would have a sparser understanding of everything, but they would probably tend to bullshit even more. They do not have a concept of how certain they are of what they output, only an ability to map onto the training data and fill in the gaps between the rest. We do the same thing when operating at the edge of knowledge, and we discover many “true after the fact” things this way.

    I think what they’re going to do is add a special fact-based submodel: extract the factual claims from the output, actually search databases of information to confirm or deny each factual statement, then reprompt the model to issue new output, rinse and repeat until the fact-check submodel no longer has objections (roughly the loop sketched below).

    It’s probably going to suck at everything else and still get things wrong sometimes for any question that isn’t really strongly settled.
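
    A rough sketch of that verify-and-reprompt loop, under the assumptions above. The generate, extract_claims, and claim_is_supported callables are hypothetical stand-ins for the LLM call, the claim-extraction submodel, and the database lookup; none of this is an existing product’s API.

    ```python
    # Sketch of a fact-check-and-reprompt loop: draft an answer, extract its
    # factual claims, check each against a database, and reprompt until the
    # checker has no objections (or a round limit is hit).
    from typing import Callable, List

    def fact_checked_answer(
        question: str,
        generate: Callable[[str], str],              # the LLM call
        extract_claims: Callable[[str], List[str]],  # claim-extraction submodel
        claim_is_supported: Callable[[str], bool],   # database lookup
        max_rounds: int = 5,
    ) -> str:
        answer = generate(question)
        for _ in range(max_rounds):
            # Collect the claims the checker cannot confirm.
            objections = [
                claim for claim in extract_claims(answer)
                if not claim_is_supported(claim)
            ]
            if not objections:
                return answer  # no objections left: accept the answer
            # Reprompt with the disputed claims and ask for a corrected answer.
            answer = generate(
                question
                + "\n\nYour previous answer contained unverified claims:\n"
                + "\n".join(f"- {c}" for c in objections)
                + "\nRewrite the answer without asserting these claims."
            )
        return answer  # give up after max_rounds; may still contain errors
    ```

    The round limit is there because nothing guarantees the model ever converges on claims the checker accepts, which is also why it would still get unsettled questions wrong.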