For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.

  • hansolo@lemm.ee · 2 days ago

    It’s just a custom LLM for records management and regulatory compliance. Literally just for paperwork, one of the few things that LLMs are actually good at.

    Does anyone read more than the headline? OP even said this in the summary.

    • null_dot@lemmy.dbzer0.com · 1 day ago

      It depends what purpose that paperwork is intended for.

      If the regulatory paperwork it’s managing is designed to influence behaviour, perhaps having an LLM do the work will make it less effective in that regard.

      Learning and understanding is hard work. An LLM can’t do that for you.

      Sure it can summarise instructions for you to show you what’s more pertinent in a given instance, but is that the same as someone who knows what to do because they’ve been wading around in the logs and regs for the last decade?

      It seems like, whether you’re using an LLM to write a business report, a legal submission, or an SOP for running a nuclear reactor, it can be a great tool, but it requires high-level knowledge on the part of the user to review the output.

      As always, there’s a risk that a user just won’t identify a problem in the information produced.

      I don’t think this means LLMs should not be used in high risk roles, it just demonstrates the importance of robust policies surrounding their use.

    • cyrano@lemmy.dbzer0.com (OP) · 2 days ago

      I agree with you, but you can see the slippery slope, with the LLM returning incorrect/hallucinated data the same way it’s happening in the public space. It might seem trivial for documentation, until you realize the documentation could be critical for some processes.

      • hansolo@lemm.ee · 2 days ago

        If you’ve never used one: with a custom LLM or a wrapper around regular ol’ ChatGPT, a lot of what it can hallucinate gets stripped out, and the entire corpus of data it draws on is your data. Even then, the risk is pretty low here. Do you honestly think that a human has never made an error on paperwork?

        • cyrano@lemmy.dbzer0.com (OP) · 1 day ago

          I do, and even contained ones return hallucinations or incorrect data. So it depends on the application you use it for. For a quick summary or data search, why not? But for some operational process, that might be problematic.

    • technocrit@lemmy.dbzer0.com · 1 day ago

      Don’t blame the people who just read the headline.

      Blame the people who constantly write misleading headlines.

      There is literally no “artificial intelligence” here either.

  • dumbass@leminal.space · 2 days ago

    Huh, it’s really Russian roulette with how we’re all gonna die: could be WW3, could be another pandemic, or could be a bunch of AIs hallucinating and causing multiple nuclear meltdowns.

    • scarabic@lemmy.world · 1 day ago (edited)

      It’s literally just a document search for their internal employees to use.

      Those employees are fallible humans trying to navigate tens of thousands of byzantine technical and regulatory documents all published on various dinosaur platforms.

      AI hallucination is a very popular thing to get outraged about right now, but don’t forget about good old-fashioned bureaucratic error.

      My employer implemented AI search/summarization of our docs/wiki/intranet/JIRA systems over a year ago and it has been very effective in my experience. It always links to the source docs, but it permits natural language queries and can do some reasoning about the contents of the documents to pull together information across a sea of text.
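      The setup described above is essentially retrieval plus summarization: rank internal documents against a natural-language query, then hand the top hits (with their source links) to the model. A minimal sketch of just the retrieval step, with purely illustrative document IDs and a toy overlap-based relevance score rather than any real system’s internals:

      ```python
      # Toy retrieval step behind a "doc search with source links" assistant.
      # Document IDs and contents are hypothetical; the scoring is a simple
      # IDF-weighted term overlap, not what any production system actually uses.
      from collections import Counter
      import math

      DOCS = {
          "NRC-0042": "procedure for control rod inspection and maintenance scheduling",
          "WIKI-17":  "jira workflow for filing a license renewal with the regulator",
          "SOP-003":  "summary of coolant pump maintenance logs and inspection records",
      }

      def tokenize(text):
          return text.lower().split()

      def score(query, doc):
          # Count query terms present in the document, down-weighting terms
          # that appear in many documents (inverse document frequency).
          doc_terms = Counter(tokenize(doc))
          n_docs = len(DOCS)
          total = 0.0
          for term in tokenize(query):
              df = sum(1 for text in DOCS.values() if term in tokenize(text))
              if doc_terms[term]:
                  total += math.log((n_docs + 1) / (df + 1)) + 1
          return total

      def search(query, k=1):
          # Return the top-k document IDs so every answer can cite its source.
          ranked = sorted(DOCS, key=lambda doc_id: score(query, DOCS[doc_id]), reverse=True)
          return ranked[:k]

      print(search("control rod inspection"))  # -> ['NRC-0042']
      ```

      The point of returning IDs rather than generated text is exactly the "always links to the source docs" property: the summarizer can only be quoted against documents a human can open and verify.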

      Nothing that is mission critical enough to lead to a reactor meltdown should ever be blindly trusted to these tools.

      But nothing like that should ever be trusted to the whims of one fallible human, either. This is why systems have protocols, checks and balances, quality controls, and failsafes.

      Giving employees a more powerful document search doesn’t somehow sweep all that aside.

      But hey, don’t let a rational, down-to-earth argument stand in the way of freaking out about a sci-fi dystopia.

  • besselj@lemmy.ca · 2 days ago

    The LLM told me that control rods were not necessary, so it must be true

    • twice_hatch@midwest.social · 2 days ago

      The chatbot said 3.6 Roentgen is just fine and the core cannot have exploded, maybe we heard a truck driving by

  • Sterile_Technique@lemmy.world · 2 days ago

    Diablo Canyon

    The nuclear power plant run by AI slop is located in a region called “Diablo Canyon”.

    Right. We sure this isn’t an Onion article? …actually no, it couldn’t be, The Onion’s writers aren’t that lazy.

    Fuckin whatever, I’m done for the night. Gonna head over to Mr. Sandman’s squishy rectangle. …bet you’ll never guess what I’m gonna do there!!

    • pyre@lemmy.world · 2 days ago

      using AI in a nuclear plant at Diablo Canyon… it’s so on the nose you’d call it lazy writing if it were part of the backstory of some sci-fi novel.

    • hansolo@lemm.ee · 2 days ago

      Well, considering it’s exclusively for paperwork and compliance, the worst that can happen is someone might rely on it too much, file an incorrect, I dunno, license renewal with the DOE, and be asked to do it again.

      Ah. The horror.

      • pivot_root@lemmy.world · 2 days ago

        When it comes to compliance and regulations, anything with the literal blast radius of a nuclear reactor should not be trusted to an LLM unless double- or triple-checked by another party familiar with said regulations. Regulations were written in blood, and an LLM hallucinating a safety procedure or operating protocol is a disaster waiting to happen.

        I have less qualms about using it for menial paperwork, but if the LLM adds an extra round-trip to a form, it’s not just wasting the submitter’s time, but other people’s as well.

        • hansolo@lemm.ee · 2 days ago

          All the errors you know about in the nuclear power industry are human-caused.

          Is this an industry with a 100% successful operation rate? Not at all.

          But have you ever heard of a piece of paperwork with an error submitted to regulatory officials and lawyers outside the plant causing a critical issue inside the plant? I sure haven’t. Please feel free to let me know if you are aware of such an incident.

          I would encourage you to learn more about how LLM and SLM structures work. This article is more nothingburger superlative clickbait, IMO. And at least the system appears to be air-gapped if it’s running locally, which is nice.

          I would bet money that this will be entirely managed by the most junior compliance person who is not 120 years old, with more senior folks cross checking it with more suspicion than they would a new hire.

          • gedhrel@lemmy.world · 1 day ago

            I’m not sure if that opening sentence is fatuous or not. What errors in any industrial enterprise are not human in origin?

  • pyre@lemmy.world · 2 days ago (edited)

    to people who say it’s just paperwork or whatever and it doesn’t matter: this is how it begins. they’ll save a couple cents here and there, and they’ll want to expand this.

      • pyre@lemmy.world · 1 day ago (edited)

        it’s not actually. there’s barely an intermediate step between what’s happening now and what I’m suggesting it will lead to.

        this is not “if we allow gay marriage people will start marrying goats”. it’s “if this company is allowed to cut corners here they’ll be cutting corners in other places”. that’s not a slope; it’s literally the next step.

        slippery slope fallacy doesn’t mean you’re not allowed to connect A to B.

        • scarabic@lemmy.world · 1 day ago

          You may find it as plausible as you like. Obviously you do, or you wouldn’t have said it. It’s still, by definition, a slippery slope logical fallacy: “a little will always lead to more, therefore a little is a lot.” This is textbook. It has nothing to do with companies, computers, or goats.

      • TheOakTree@lemm.ee · 1 day ago

        True, but if you change the argument from “this will happen” to “this will happen more frequently,” then it’s still a very reasonable observation.

        • scarabic@lemmy.world · 1 day ago

          All predictions in this vein are invalid.

          If you want to say “even this little bit is unsettling and we should be on guard for more,” fine.

          That’s different from “if you think this is only a small amount you are wrong because a small amount will become a large amount.”