I want to let people know why I’m strictly against using AI in everything I do without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and follow the same.

Any sources I find to cite for my viewpoint are either so mild they could pass for AI-generated themselves, or filled with the author's extremist views. I want to explain the situation objectively, in a way that is simple to understand but also alarming enough for them to take action.

  • NoSpotOfGround@lemmy.world · 3 months ago

    What are some good reasons why AI is bad?

    There are legitimate reasons people worry about AI. Here are some of the strongest, clearly framed concerns:

    1. Bias and unfair decisions

    AI systems often learn from biased data and can unintentionally discriminate—against certain races, genders, ages, or socioeconomic groups—in hiring, lending, housing, policing, and more.

    2. Lack of transparency

    Many AI models act as “black boxes,” making decisions that are hard to explain. This creates problems when the stakes are high (medical diagnosis, legal decisions, etc.).

    3. Privacy risks

    AI can analyze huge amounts of personal data, track behavior, or identify people through facial recognition—often without explicit consent.

    4. Job displacement

    Automation threatens certain categories of work, particularly routine or repetitive jobs. Without proper planning, this can increase inequality and unemployment.

    5. Misinformation and deepfakes

    AI makes it easier to create convincing fake audio, video, or text. This can undermine trust in media, fuel propaganda, and destabilize democratic processes.

    6. Weaponization

    AI can be used in autonomous weapons, cyberattacks, targeted surveillance, or manipulation—raising serious security and ethical issues.

    7. Overreliance and loss of human skills

    As AI does more tasks, people may become too dependent, reducing critical thinking, creativity, or expertise in certain fields.

    8. Concentration of power

    Powerful AI tools tend to be controlled by a few big companies or governments, potentially leading to monopolies, inequality, and reduced individual autonomy.

    9. Alignment and control risks

    Advanced AI systems may behave in unexpected or harmful ways if their goals aren’t perfectly aligned with human values—even without malicious intent.

    10. Environmental impact

    Training large AI models consumes significant energy and resources, contributing to carbon emissions.


    If you want, I can also provide reasons why AI is good, help you construct an argument for a debate, or analyze specific risks more deeply.

    Were you looking for this kind of reply? If you can’t express why you hold an opinion, maybe that opinion is not well founded in the first place. (Not saying it’s wrong, just that it might not be justified/objective.)

  • canofcam@lemmy.world · 3 months ago

    A discussion in good faith means treating the person you are speaking to with respect. It means not having ulterior motives. If you are having the discussion with the explicit purpose of changing their minds or, in your words, making it “alarming enough for them to take action”, then that is by default a bad-faith discussion.

    If you want to discuss with a pro-AI person in good faith, you HAVE to be open to changing your own mind. That is the whole point of a good-faith discussion. Instead, you already believe you are correct, and you want to enter these discussions with objective ammunition to defeat somebody.

    How do you actually discuss in good faith? You ask for their opinions and are open to them, then you share your own in a respectful manner. You aren’t trying to ‘win’; you are just trying to understand and, in turn, help others understand your own POV.

    • krooklochurm@lemmy.ca · 3 months ago

      Chiming in here:

      Most of the arguments against ai - the most common being plagiarism and the ecological impact - are not things the people making the arguments give a flying fuck about in any other area.

      Having issues with the material the model is trained on isn’t an issue with ai - it’s an issue with unethical training practices, copyright law, capitalism. These are all valid complaints, by the way, but they have nothing to do with the underlying technology. Merely with the way it’s been developed.

      For the ecological side of things, sure, ai uses a lot of power. Lots of data centers. So does the internet. Do you use that? So does the stock market. Do you use that? So do cars. Do you drive?

      I’ve never heard anyone say “we need less data centers” until ai came along. What, all the other data centers are totally fine but the ones being used for ai are evil? If you have an issue with the drastically increased power consumption for ai you should be able to argue a stance that is inclusive of all data centers - assuming it’s something you give a fuck about. Which you don’t.

      If a model, once trained, is being used entirely locally on someone’s personal pc - do you have an issue with the ecological footprint of that? The power has been used. The model is trained.

      It’s absolutely valid to take issue with the increased power consumption used to train ai models and everything else, but these are all issues with the HOW, not the ontological arguments against the tech that people think they are.

      It doesn’t make any of these criticisms invalid, but if you refuse to understand the nuance at work then you aren’t arguing in good faith.

      If you enslave children to build a house, the issue isn’t that you’re building a house, and it doesn’t mean houses are evil; the issue is that YOU’RE ENSLAVING CHILDREN.

      Like any complicated topic, there’s nuance to it, and anyone who refuses to engage with that and instead relies on dogmatic thinking isn’t being intellectually honest.

      • Frezik@lemmy.blahaj.zone · 3 months ago

        I’ve never heard anyone say “we need less data centers” until ai came along. What, all the other data centers are totally fine but the ones being used for ai are evil? If you have an issue with the drastically increased power consumption for ai you should be able to argue a stance that is inclusive of all data centers - assuming it’s something you give a fuck about. Which you don’t.

        AI data centers draw substantially more power than regular ones. Nobody was talking about spinning up nuclear reactors or buying out the next several years of turbine manufacturing for non-AI datacenters. Hell, Microsoft gave money to a fusion startup to build a reactor - they’ve already broken ground - but it’s far from proven that they can actually make net power with fusion. They actually think they can supply power by 2028. This is delusion driven by the impossible goal of reaching AGI with current models.

        Your whole post misses the difference in scale involved. GPU power consumption isn’t comparable to standard web servers at all.