  • Uh, no. But thanks for guessing. It’s frivolous because it violates several principles of responsible disclosure. Yes, the scope of impact is relevant; the availability of remediation is relevant; and the development/patch lifecycle is relevant. That the feature is off by default and labeled experimental speaks indirectly to the scope of impact and the availability of remediation, and the “experimental” label also speaks to the state of the development lifecycle. Per the developers’ own words, this is a bug that carried limited risk and was scheduled to be fixed as part of the normal development cycle. Escalating every such bug, the vast majority of which go without a CVE, would quickly drown out the notices people actually care about. A CVE is not a bug report.

  • The “op” you are referring to is… well… myself. Since you didn’t comprehend that from the posts above, my reading comprehension might not be the issue here.

    I don’t care. It doesn’t matter, so I didn’t check. Your reading comprehension is still, in fact, the issue, since you didn’t understand that the “learned” vs. “programmed” distinction I referred to is directly relevant to your post.

    It’s whether it can grasp concepts from the content of the words it absorbs as its learning data.

    That’s what learning is. The fact that it can construct syntactically and semantically correct, relevant responses in perfect English means that it has a highly developed inner model of many things we would consider to be abstract concepts (like the syntax of the English language).

    If it could grasp concepts (like rules in algebra), it could reproduce them every time it is confronted with a similar problem.

    This is wrong. It is obvious and irrefutable that it models sophisticated approximations of abstract concepts. Humans are literally no different: humans who consider themselves to understand a concept can obviously misunderstand some aspect of it in some contexts. The fact that these models are not as robust as a human’s doesn’t mean what you’re saying it means.

    the only thing it does is chain words together by stochastic calculation.

    This is a meaningless point; you’re thinking at the wrong level of abstraction. This argument is equivalent to saying “a computer cannot convey meaningful information to a human because it simply activates and deactivates bits according to simple rules.” Your statement about an implementation detail says literally nothing about the emergent behavior we’re talking about (see the toy sketch below).
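
    To make the level-of-abstraction point concrete, here is a minimal toy sketch in Python, purely illustrative: the corpus and names are made up, and this resembles a real LLM about as much as a light switch resembles a CPU. It shows what “chaining words together by stochastic calculation” looks like at its most naive:

        import random
        from collections import Counter, defaultdict

        # Hypothetical toy corpus; a real model trains on vastly more data.
        corpus = "the cat sat on the mat because the cat was tired".split()

        # "Training": learn, not program, which word tends to follow which.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def sample_next(word):
            # Pick the next word in proportion to how often it followed `word`.
            words, counts = zip(*follows[word].items())
            return random.choices(words, weights=counts)[0]

        # Generation: repeatedly sample the next word from the learned counts.
        out = ["the"]
        for _ in range(8):
            if out[-1] not in follows:
                break
            out.append(sample_next(out[-1]))
        print(" ".join(out))

    Note that the follow-counts are learned from the data rather than programmed in, and that the description “stochastic word chaining” applies equally to this toy and to a model with billions of learned parameters, which is exactly why the description tells you nothing about the sophistication of what was learned.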