• orca@orcas.enjoying.yachts · 22 hours ago

    No, it was Israel. They can use this as cover. Now suddenly OpenAI and Claude are at fault. Oops! Israel excels at bombing civilian targets and schools. Biggest bunch of fucking cowards I’ve ever seen.

    • quick_snail@feddit.nl · 5 hours ago

      Reuters said it was the US, not Israel.

      Both are capable of committing genocide, with or without AI. See history.

  • cøre@leminal.space · 1 day ago

    No it didn’t. If there was oversight, they saw the target and didn’t care. AI is not the problem, it’s the scapegoat. They want to be able to shrug and point at AI, saying there was a misclassification and that it wasn’t their fault. Meanwhile they ignore the fact that a human could have stopped the attack or double-checked the target at any point. They chose not to because they don’t care. Collateral damage, wanton destruction, and civilian casualties are the goal.

    • Kichae@lemmy.ca · 1 day ago

      This is the WHOLE point of why these generative models have been pushed so hard over the past couple of years. They tested the waters to see if people would accept “it’s the computer’s fault” as an excuse, and then slammed on the gas.

      Accountability sinks, as Dan Davies has named them, are the whole point. It’s everything a slimy corporate CEO or government official has ever wanted.

    • No_Money_Just_Change@feddit.org · 1 day ago

      It can be a hundred percent their fault for not caring and still be a target selected by AI. Bombing innocents is never justified and never just a “technological error,” but the fact that they are combining faulty, non-transparent software with unsupervised weapon attacks is dangerous and newsworthy.

  • Tomtits@lemmy.dbzer0.com · 1 day ago

    So we’re just sailing into a time where AI will get blamed for anything and everything, and no one will be held accountable or punished?

    • freagle@lemmy.ml · 1 day ago

      It’s not like the US was held responsible for killing hundreds of thousands of children before AI, either.

    • calliope@piefed.blahaj.zone · 1 day ago

      Oh for sure, they’re taking big tech’s lead.

      Tech has been blaming AI for layoffs for a couple of years now, when the real cause is that they hired an insane number of people during and after COVID. They literally hired to lay off. I found this graph illustrative of the boom and bust.

      The people in this administration love when tech can get away with something (see the Cambridge Analytica scandal around 2016) because they will too.

    • kibiz0r@midwest.social · 1 day ago

      Yes. AI allows the user to separate output from understanding, accountability, and obligation. It can launder intention just as well as inattention. AI is the ultimate tool of fascism.

      Edit: But I should mention, this is not new. Institutions have been pursuing techniques like this long before AI. Everything Was Already AI

  • UnspecificGravity@piefed.social · 1 day ago

    What about the other schools that the US and Israel have hit? What about the hospitals and civilian residential areas? What about the fact that the US military seems to want credit for the bombing?

  • bilouba@jlai.lu · 24 hours ago

    Destroying schools is part of the Israel blueprint. Destroy culture and education as soon as you can.

    • quick_snail@feddit.nl · 5 hours ago

      The WSJ was first to report that Anthropic’s Claude AI was used to determine targets in Iran, but it’s paywalled.

      Futurism published an article asking the Pentagon about it, and the Pentagon refused to answer questions.