• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • Sonori@beehaw.org to Technology@lemmy.ml · Today's AI is unreasonable · 8 months ago

    OpenAI’s algorithm, like all LLMs, is designed to give you the next most likely word in a sentence based on what most frequently came next in its training data. Their main strategy has actually been to use an older and simpler transformer algorithm, and to just vastly increase the scraped text content, and more recently the biasing, with each new release.

    I would argue that any system that works by stringing pseudorandom words together based on how often they appear in its input sources is not going to be able to do anything but generate bullshit, albeit bullshit that may happen to be correct by pure accident when it’s near-directly quoting said input sources.
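
    To make that mechanism concrete, here is a toy sketch of frequency-based next-word prediction. Real LLMs use transformer networks over subword tokens rather than literal bigram counts, so this illustrates only the principle, not OpenAI’s actual method:

    ```python
    import random
    from collections import Counter, defaultdict

    # Toy illustration only: count how often each word follows each other
    # word in a tiny "training corpus", then sample the next word in
    # proportion to those counts.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(prev):
        """Sample the next word in proportion to how often it followed `prev`."""
        words, weights = zip(*following[prev].items())
        return random.choices(words, weights=weights)[0]

    # Generate a short "sentence" by repeatedly picking a likely next word.
    word, output = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))
    ```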


  • I feel like the primary problem here is just that detecting pedestrians and figuring out where they are going to move is actually one of those problems that is just very hard for computers. Obviously it’s not impossible to do at all, but it is difficult to do reliably, especially once one considers the risks of false positives as well as negatives; the back-of-the-envelope sketch below shows why.

    It is definitely concerning, though, that these systems are being pushed and marketed beyond their actual capabilities. Some proper truth-in-advertising law might actually help here, and of course in an ideal world this would be an open source project with all the major automakers and academics contributing.
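
    Here is that back-of-the-envelope sketch of the false-positive side, with made-up but plausible numbers (the frame rate and specificity below are assumptions, not figures from any real system):

    ```python
    # All numbers are assumptions for illustration, not real system specs.
    frames_per_second = 10        # assumed camera/classifier rate
    seconds_per_hour = 3600
    specificity = 0.999           # assumed: 99.9% of empty frames correctly rejected

    frames_per_hour = frames_per_second * seconds_per_hour
    false_positives_per_hour = frames_per_hour * (1 - specificity)

    print(f"{frames_per_hour} frames/hour -> "
          f"~{false_positives_per_hour:.0f} phantom pedestrians/hour")
    ```

    Even at 99.9% specificity that is roughly 36 phantom pedestrians an hour, each a potential unnecessary emergency brake, and pushing false positives down tends to push false negatives, the missed pedestrians, up.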


  • It’s unfortunately not certain that they will take such measures with their patients, even though most try, and indeed ethnic discrepancies are one of the things likely to be made worse by machine learning, given that there is often little thought or training data given to them, but the age of the hospital’s machine is not a good proxy for risk factors. It might be statistically correlated with risk, but the actual patient’s risk isn’t determined by it: less at-risk people may go to a cheaper hospital, and more at-risk people might live in a city which also has a very up-to-date hospital.


  • I believe it was from a study on detecting tuberculosis, but unfortunately Google hasn’t been very helpful for me.

    The problem with that is that “people in poorer areas are more at risk from TB” is not a new discovery, and a model which is intended and billed as detecting TB from a scan should ideally not be using a factor like “this hospital is old and poor” to determine whether a scan shows diseased tissue, given that this intrinsically means the model is more likely to miss the disease in patients at better hospitals while over-diagnosing it in poorer ones, and that at-risk people can of course still go to newer hospitals.

    A doctor will take risk factors into consideration, but would also know that just because their hospital got a new machine doesn’t mean their patients are now less likely to have a potentially fatal disease. Leaning on that shortcut results in worse diagnoses in practice, even if the model technically scores better on the training set.
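
    A minimal synthetic sketch of that failure mode (all of the data and the scanner_age feature are made up for illustration; this is not the study’s model, and it needs numpy and scikit-learn):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    def make_cohort(n, p_tb, scanner_age):
        """One hospital's patients: a genuine (noisy) image signal plus the
        hospital's scanner age attached to every scan."""
        tb = rng.random(n) < p_tb
        lesion_score = tb * 1.0 + rng.normal(0.0, 1.0, n)  # real but noisy signal
        age = np.full(n, scanner_age) + rng.normal(0.0, 0.1, n)
        return np.column_stack([lesion_score, age]), tb.astype(int)

    # Training data: poor hospital (old scanner, high TB prevalence) pooled
    # with rich hospital (new scanner, low prevalence), so scanner age ends
    # up statistically correlated with the label.
    X_poor, y_poor = make_cohort(n, p_tb=0.30, scanner_age=20.0)
    X_rich, y_rich = make_cohort(n, p_tb=0.02, scanner_age=2.0)
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_poor, X_rich]), np.concatenate([y_poor, y_rich]))
    print("learned weights [lesion_score, scanner_age]:", model.coef_[0])

    # Deployment: an equally at-risk population scanned on a brand-new
    # machine. The shortcut now points the wrong way and TB cases get missed.
    X_new, y_new = make_cohort(n, p_tb=0.30, scanner_age=2.0)
    X_old, y_old = make_cohort(n, p_tb=0.30, scanner_age=20.0)
    print("recall, at-risk patients on old machine:",
          round(model.predict(X_old)[y_old == 1].mean(), 2))
    print("recall, same population on new machine:",
          round(model.predict(X_new)[y_new == 1].mean(), 2))
    ```

    The model “technically scores better” on data that looks like its training set precisely because it leans on the scanner-age shortcut, and that is exactly what makes it miss disease once the shortcut stops lining up with reality.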