• ☂️-@lemmy.ml · ↑68 · 7 days ago

    that's a golden opportunity for some sweet malicious compliance.

    let ai fuck their codebase, then get paid for the long time you'd need to fix it. punish their wallet for being dumb, and do it by giving them exactly what they want.

    • Lka1988@lemmy.dbzer0.com · ↑5 · 6 days ago

      More like they’ll fire you for not babysitting it, then hire some “techy” dudebro at half the wage to keep babysitting it until they get the prompts right (by sheer dumb luck), then fire the dudebro.

      • Echo Dot@feddit.uk · ↑1 · 5 days ago

        The dudebro doesn’t know how to program, they’ll just vibe code all over the place and it won’t be any better.

      • ☂️-@lemmy.ml · ↑1 · 6 days ago

        no such thing as getting the prompts right.

        ai can’t write good code, and they will sooner or later need actual coders back.

  • disgrunty@feddit.uk · ↑10 · 6 days ago

    I cannot wait for Shopify to go away. Yet another company that feels like an infestation.

    • Lka1988@lemmy.dbzer0.com · ↑3 · 6 days ago

      Right?

      “Oh you typed in a phone number/email address in a required field? Here’s some spam you never asked for that we want you to confirm so we can continue spamming you, please bro just confirm it bro, just type in the code we sent you bro”

  • SabinStargem@lemmy.today · ↑21 · 7 days ago

    I like AI, but we are still in the biplane era of development. It will take a long time before it can handle most things, let alone handle them unsupervised.

    If Shopify follows through with imitating Musk's stupidity, I expect the company to end up as a case study.

    • MangoCats@feddit.it · ↑14 · 7 days ago

      Well, first the CEO is asking for proof of a negative, so anyone with a logical brain cell just has to shake their head and repeat “it’s for the paycheck.”

      We can assume the CEO means "show me you tried to use AI and it's not working well enough," which isn't all that bad a directive, but it leaves two huge gaps: "do your people really know how to use AI?" and "are they using the correct, latest versions of AI for the task they are attempting?" Still, it may stand up a few use cases for AI that would otherwise have used expensive meat sacks for what must be fairly boring rote recitation work, if they can be adequately replaced by AI.

      The problem comes when senseless metrics get pushed down that amount to: a certain number of AI projects must be greenlighted, regardless of how dreadful they are in practice.

      AI is a tool: it can save labor and relieve human employees of tedious work, but it can't do everything. All this "big personality" top-level management of large and very large organizations by broad-stroke metrics leads to mass stupidity when the underlings blindly follow orders, and I suspect that, within its limitations, AI will always follow orders, so getting AI into middle management will only magnify the idiocracy.

  • ImmersiveMatthew@sh.itjust.works · ↑7 · 6 days ago

    I love AI and use it every day, but right now it absolutely lacks logic, even in the reasoning models, and thus it really cannot replace a whole person beyond what one prompt can give you, which is not a career.

  • futatorius@lemm.ee · ↑7 · 6 days ago

    So is Lütke going to fund the resources needed to validate whether AI will work or not?

  • FriendBesto@lemmy.ml · ↑5 · 7 days ago

    These weird, creepy attempts at onboarding everyone onto AI sound like projecting FOMO onto people, for profit, of course.

  • friend_of_satan@lemmy.world · ↑2 · 6 days ago

    My company just did the same thing. I wonder if it's the VCs dictating that down through the board of directors, or if this is just a viral leadership trend.

  • Keener@lemm.ee · ↑232 · 8 days ago

    Former shopify employee here. Tobi is scum, and surrounds himself with scum. He looks up to Elon and genuinely admires him.

    • Paradox@lemdro.id · ↑27 · 8 days ago

      Shame, because I used to actually admire how he handled layoffs. It was a far sight better (from the outside looking in) than the "thanks, here's one extra paycheck, send your laptop back at your expense please" I'd experienced.

  • besselj@lemmy.ca · ↑87 ↓1 · 8 days ago

    What these CEOs don't understand is that even an error rate as low as 1% for LLMs is unacceptable at scale. Fully automating without humans somewhere in the loop will lead to major legal liabilities down the line, especially if mistakes can't be fixed fast.

    • CosmoNova@lemmy.world · ↑30 · 8 days ago

      Yup. If 1% of all requests result in failures, and some even cause damage, you'll quickly lose 99% of your customers.
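
      A quick back-of-the-envelope sketch of that arithmetic (mine, not from the thread), assuming failures are independent across requests:

      ```python
      # How a 1% per-request failure rate compounds per customer:
      # probability of at least one failure across n requests.
      p_fail = 0.01

      for n in (10, 100, 459):
          p_any = 1 - (1 - p_fail) ** n
          print(f"{n:4d} requests -> {p_any:.0%} chance of hitting a failure")

      # 10 requests -> 10%, 100 -> 63%, ~459 -> 99%:
      # a steady customer is all but guaranteed to get burned.
      ```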

      • VanillaFrosty@lemmy.world · ↑12 · 8 days ago

        It’s starting to look like the oligarchs are going to replace every position they can with AI everywhere so we have no choice but to deal with its shit.

    • wagesj45@fedia.io · ↑11 ↓1 · 8 days ago

      I suspect everyone is just going to be a manager from now on, managing AIs instead of people.

      • vinnymac@lemmy.world · ↑3 · 7 days ago

        Building AI tools will also require very few of the skills of a manager from our generation. It's better to be a prompt engineer, building evals and agentic AI, than it is to actually manage. Management will be replaced by AI; it's turtles all the way down. Going forward, they're going to expect you to be both a project manager and an engineer at the same time, especially at less enterprising organizations with lower compliance and security bars to jump over. If you think of an organization as a tree structure, imagine the tree pruned, with fewer branches near the top: that's what I imagine their end goal is.

    • NuXCOM_90Percent@lemmy.zip · ↑7 ↓21 · 8 days ago

      What error rate do you think humans have? Because it sure as hell ain’t as low as 1%.

      But yeah, it is like the other person said: this gets rid of most employees but still leaves managers. And for a manager, dealing with an idiot who went off script versus an AI that hallucinated something is the same problem. If it's small? Just leave it. If it's big? Cancel the order.

          • ebolapie@lemmy.world · ↑16 · 8 days ago

            What would happen to such a human? Do you suppose that we would try to give them every job on the planet? Or would they just get fired?

      • FourWaveforms@lemm.ee · ↑6 · 8 days ago

        Error rate for good, disciplined developers is easily below 1%. That’s what tests are for.
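
        As a toy illustration (mine, not the commenter's) of the kind of cheap, disciplined regression test that keeps human error rates low:

        ```python
        # pytest-style test: catches an off-by-one in paginate()
        # before it ever ships.
        def paginate(items, page_size):
            """Split items into consecutive pages of at most page_size."""
            return [items[i:i + page_size] for i in range(0, len(items), page_size)]

        def test_paginate_covers_every_item_exactly_once():
            items = list(range(10))
            pages = paginate(items, page_size=3)
            assert [x for page in pages for x in page] == items
            assert all(len(page) <= 3 for page in pages)
        ```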

      • taladar@sh.itjust.works · ↑5 · 7 days ago

        The error rate for human employees for the kind of errors AI makes is much, much lower. Humans make mistakes that are close to the intended task and have very little chance of being completely different. AI does the latter all the time.

      • oxysis@lemm.ee · ↑6 ↓2 · 8 days ago

        I mean, it is also generous to the Artificial Idiot to say it only has a 1% error rate; it's probably closer to 10% on the low end. Humans can be far better than that at just directly following the assigned task, and that doesn't even factor in how people adapt and problem-solve. Most minor issues real people hit can be resolved without much fuss because of that. Meanwhile, the Artificial Idiot can't even draw a full wine glass, so good luck getting it to fix its own mistake on something important.

        • NuXCOM_90Percent@lemmy.zip · ↑3 ↓10 · 8 days ago

          Humans can be far better than that at just directly following the assigned task, and that doesn't even factor in how people adapt and problem-solve.

          How’s that annoying meme go? Tell me that you’ve never been a middle manager without telling me that you’ve never been a middle manager?

          You can keep pulling numbers out of your bum to argue that AI is worse. That just sets a simple bar to clear, because… most workers REALLY are incompetent (now, how much of that has to do with being overworked and underpaid during late-stage capitalism is a related discussion…). So all the "AI companies" have to do is beat ridiculously low metrics.

          Or we can acknowledge the real problem: "AI" is already a "better worker" than the vast majority of entry-level positions (and that includes title inflation). We can either choose not to use it (fat chance) or acknowledge that we are looking at a fundamental shift in what employment is. And we can also realize that not hiring and training those entry-level goobers is how you end up with no one who can actually "manage" the AI workers.

          • WanderingThoughts@europe.pub · ↑2 · 7 days ago

            how you end up with no one who can actually "manage" the AI workers.

            You just use other AI to manage those worker AIs. Experiments do show that separate instances of an AI/LLM, each with an assigned role like manager, designer, coder, or quality checker, perform pretty well working together. But that was with small stuff; I haven't seen anyone willing to test it on complex products.
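
            A minimal sketch of that role-per-instance setup (my illustration, not any specific experiment's code; call_llm() is a hypothetical stand-in for whatever chat-completion API you use):

            ```python
            # Toy multi-agent pipeline: one LLM "instance" per assigned role,
            # each seeded with its own system prompt, handing work down the chain.
            def call_llm(system_prompt: str, user_message: str) -> str:
                raise NotImplementedError("wire this to a real model API")

            ROLES = {
                "manager": "Break the feature request into small, concrete tasks.",
                "coder": "Write Python code implementing the tasks you are given.",
                "qa": "Review the code; reply APPROVED or list the defects.",
            }

            def run_pipeline(feature_request: str) -> str:
                tasks = call_llm(ROLES["manager"], feature_request)
                code = call_llm(ROLES["coder"], tasks)
                review = call_llm(ROLES["qa"], code)
                # A real setup would loop coder <-> qa until the reviewer approves.
                return code if review.strip().startswith("APPROVED") else review
            ```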

            • NuXCOM_90Percent@lemmy.zip · ↑1 ↓3 · 7 days ago

              I’ve seen those demos and they are very much staged publicity.

              The reality is that the vast majority of those roles would be baked into the initial request. And the reality of THAT is the same as managing a team of newbies and "rock star" developers with title inflation: on paper your SDLC says you totally trust your team, but in practice you spend most of your day monitoring them, ready to "ask a stupid question" when you figure out they broke main while you were skimming the MRs between meetings, or to "just check in to let you know this guy is the best" when your sales team has a tendency to say complete and utter nonsense for a commission.

              Design gets weird. Generally speaking, you can tell a team to "give me a mock-up of a modern shopping cart interface". That is true whether your team is one LLM or ten people under a UI/UX engineer. And the reality is that you then need to actually look at the result and possibly consult your SMEs to see whether it is a good design or the kind of nonsense the vast majority of UX engineers produce (some are amazing and ground their work in usability studies and scholarly articles; most just rock vibes and copy Amazon…). Which, again, is not that different from an "AI".

              So, for the foreseeable future, "management" and designers are still needed. "AI" is ridiculously good at doing the entry-level jobs (and Reddit will never acknowledge that "just give me a bunch of Jira tickets with properly defined requirements and test cases" means they have an entry-level job after 20 years of software engineering…). It isn't going to design a product or prioritize which features to work on. Over time, said prioritizing will likely become less "Okay ChatGPT, implement smart scrolling" and more akin to labeling, where people say "that is a good priority" or "that is a bad priority". But we are a long way off from that.

              But… that is why it is important to stop with the bullshit "AI can't draw feet, ha ha ha" and focus more on the reality of what is going to happen to labor, both short and long term.

  • darkpanda@lemmy.ca · ↑82 · 8 days ago

    Dev: “Boss, we need additional storage on the database cluster to handle the latest clients we signed up.”

    Boss: “First see if AI can do it.”

    • NuXCOM_90Percent@lemmy.zip · ↑30 ↓1 · 8 days ago

      Currently the answer would be "Have you tried compressing the data?" and "Do we really need all that data per client?". Both of which boil down to "ask the engineers to fix it for you, then come back to me if you are a failure".

    • ramielrowe@lemmy.world · ↑24 · 8 days ago

      A coworker of mine built an LLM-powered FUSE filesystem as a very tongue-in-cheek response to the concept of letting AI do everything. It let the LLM generate the responses for listing directories and reading file contents.
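
      For the curious, a minimal sketch of how such a joke filesystem could be wired up with the fusepy library (my reconstruction of the idea, not the coworker's actual code; ask_llm() is a hypothetical model call, and the mount point is arbitrary):

      ```python
      # An LLM-backed FUSE filesystem: every directory listing and every
      # file's contents are hallucinated on demand.
      import stat
      from fuse import FUSE, Operations  # fusepy

      def ask_llm(prompt: str) -> str:
          raise NotImplementedError("wire this to an LLM of your choice")

      class HallucinatedFS(Operations):
          def getattr(self, path, fh=None):
              # The root is a directory; everything else is presented as a
              # read-only regular file with a made-up size.
              if path == "/":
                  return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
              return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                      "st_size": 4096}

          def readdir(self, path, fh):
              names = ask_llm(f"Invent plausible file names for {path}, one per line.")
              return [".", ".."] + [n.strip() for n in names.splitlines() if n.strip()]

          def read(self, path, size, offset, fh):
              text = ask_llm(f"Invent plausible contents for the file {path}.")
              return text.encode()[offset:offset + size]

      if __name__ == "__main__":
          # ro=True keeps the joke read-only.
          FUSE(HallucinatedFS(), "/mnt/hallucinated", foreground=True, ro=True)
      ```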