• 1 Post
  • 78 Comments
Joined 2 years ago
Cake day: June 11th, 2023


  • From the article (emphasis mine):

    Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

    /…/

    “It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says.

    From elsewhere:

    Sycophancy in GPT-4o: What happened and what we’re doing about it

    We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

    I don’t know which large language model these people used, but evidence of some language models exhibiting response patterns that people interpret as sycophantic (needlessly praising or encouraging the user) is not new. Neither is hallucinatory behaviour.

    Apparently, people who are susceptible and already close to the edge may end up pushing themselves over it with AI assistance.

    What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If the AI is suitably prompted, it can re-enact such scenarios in text, while adapting the experience to the user at least somewhat. To a person susceptible to religious illusions (and let’s not deny it, people are susceptible to finding deep meaning and purpose on shallow evidence), an LLM can apparently play the role of an indoctrinating co-believer, a prophet, or a supportive follower.


  • The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

    This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:

    • accept that negative publicity will result
    • accept that people may stop cooperating with them on this work
    • accept that their reputation will suffer as a result
    • ensure that they won’t do anything illegal

    After that, if they still feel their study is necessary, maybe they should run it and publish the results.

    If some eager redditors then start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

    As for the question of whether a tailor-made response that considers someone’s background can sway opinions better - that has been obvious through the ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways they might perceive the proposal, and advance your explanation in a way that relates to their viewpoint.)

    AI bots that take a person’s background into consideration will - if implemented right - indeed be more effective at swaying opinions.

    As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception or secrecy. So maybe it wasn’t needed after all.



    A well-placed encrypted backup on two separately located microSD cards (in case mice eat one of them), kept within a few hundred meters of your actual residence, should be beyond the ability of common goons (ICE, cops, impatient FBI agents) to find. They’d have to engage in long-term surveillance.

    If curious kids find one, it’s still encrypted and you still have the other, and curious kids won’t take your primary data carrier by raiding your house either. You then just replace the backup and put it elsewhere.
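
    To illustrate, a minimal sketch of producing such a backup in Python, assuming the cryptography package is available; every path and file name below is a placeholder:

    ```python
    # Minimal sketch: encrypt a backup archive, then copy the ciphertext to both cards.
    # Assumes `pip install cryptography`; paths and names are placeholders.
    from cryptography.fernet import Fernet

    # Generate a key once; store it separately from the cards (e.g. on your
    # primary machine). Anyone who finds only a card sees only ciphertext.
    key = Fernet.generate_key()
    print("Store this key somewhere safe, away from the cards:", key.decode())

    fernet = Fernet(key)

    with open("backup.tar", "rb") as f:  # archive created beforehand, e.g. with tar
        ciphertext = fernet.encrypt(f.read())

    # The same ciphertext goes to both hiding spots; if mice eat one card,
    # the other still works.
    for card_path in ("/media/sd1/backup.enc", "/media/sd2/backup.enc"):
        with open(card_path, "wb") as f:
            f.write(ciphertext)
    ```

    Restoring is the mirror image (fernet.decrypt); in practice one would rather derive the key from a passphrase than keep it lying around, but that doesn’t change the scheme.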




Trump can issue an executive order and have them executed.

    An “insane” ex-employee of a three-letter agency can fire a wire-guided missile at his helicopter too, or someone may leave a drop of nerve agent on his door knob. I mean to say: the possibilities for violence are endless. They’re an entirely different dish from legal possibilities, which are limited. With violence, imagination is the limit.

    The Supreme Court, however - I think they’re not bound by their previous rulings. If the court sees a justification, they can rule differently next time.








  • A note about dating apps: most of them aren’t better than this. Their interest is in keeping the user clicking, paying for services, and coming back. If you find the right person for yourself, you will do none of that. So they:

    • build awful card stack systems with no search function
    • build superficial profile systems with no metadata about personality, habits or world views

    …and of course, with such systems, people fail to find suitable partners. They come back and pay, but society suffers, because someone needs to make money.

    I would vote for a politician who promised that the ministry of health and social security would commission a publicly funded dating site built by scientists, with data privacy managed by the country’s leading university.



  • I’m not from the US, but at this point I outright recommend quickly educating oneself about military matters - about fiber-guided drones (here in Eastern Europe we like them) and remote weapons stations (we like those too). Because the US is heading somewhere at a rapid pace. Let’s hope it won’t get there (the simplest and most civil obstacles would be lots of court cases and Trumpists losing the midterm elections), but if it does, then strongly worded letters will not suffice.

    Trump’s administration:

    “Agency,” unless otherwise indicated, means any authority of the United States that is an “agency” under 44 U.S.C. 3502(1), and shall also include the Federal Election Commission.

    Vance, in his old interviews:

    “I think that what Trump should do, if I was giving him one piece of advice: Fire every single midlevel bureaucrat, every civil servant in the administrative state, replace them with our people.”

    Also Vance:

    “We are in a late republican period,” Vance said later, evoking the common New Right view of America as Rome awaiting its Caesar. “If we’re going to push back against it, we’re going to have to get pretty wild, and pretty far out there, and go in directions that a lot of conservatives right now are uncomfortable with.”

    Googling “how to remove a dictator?” when you already have one is doing it too late. On the day the self-admitted wannabe Caesar crosses his Rubicon, it had better be the case that some people already know what to aim at him.

    Tesla dealerships… nah. I would not advise spending energy on them. But people, being only people, get emotional and do that kind of thing.


  • As an exception to most regulations that we hear about from China, this approach actually seems well considered - something that might benefit people and work.

    Similar regulations should be considered by other countries. Labeling generated content at the source, hopefully without the metadata being too extensive (this is where China might go overboard), would help avoid at least two things (a sketch of such labeling follows the list):

    • casual deception
    • training AI with material generated by another AI, which degrades its ability to generate realistic content
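
    To make the labeling idea concrete, here’s a minimal sketch that stamps a provenance flag into a PNG’s metadata with Python and Pillow; the tag names are invented for illustration, not taken from any real standard:

    ```python
    # Minimal sketch: write a machine-readable "AI-generated" label into a PNG
    # at generation time, then read it back. The tag names are hypothetical.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_as_generated(src_path: str, dst_path: str, generator: str) -> None:
        """Re-save the image with provenance text chunks in the PNG metadata."""
        info = PngInfo()
        info.add_text("ai_generated", "true")  # hypothetical tag name
        info.add_text("generator", generator)  # keep it minimal: no user data
        with Image.open(src_path) as img:
            img.save(dst_path, pnginfo=info)

    def is_labeled_generated(path: str) -> bool:
        """True if the label is present; stripping the metadata defeats the check."""
        with Image.open(path) as img:
            return getattr(img, "text", {}).get("ai_generated") == "true"

    label_as_generated("output.png", "output_labeled.png", "some-model-v1")
    print(is_labeled_generated("output_labeled.png"))  # True
    ```

    The obvious weakness is that plain metadata is trivial to strip, which is presumably why regulations of this kind also talk about watermarks embedded in the content itself.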


  • In my experience, the API has iteratively made it ever harder for applications to automatically perform previously easy jobs, including jobs which are trivial under ordinary Linux (e.g. become an access point, set the SSID, set the IP address, set the PSK, start a VPN connection, go into monitor / inject mode, access a USB device, write files to a directory of your choice, install an APK). Now there’s a veritable thicket of API calls and declarations to make before you can do some of these things (and some are forever gone).
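
    For contrast, a rough sketch of the Linux side - becoming an access point with stock tools (assumes root, hostapd installed, and an AP-capable wlan0; every name here is a placeholder):

    ```python
    # Rough sketch: turn an ordinary Linux box into a Wi-Fi access point.
    # Assumes root privileges, hostapd installed, and an AP-capable wlan0.
    import subprocess
    import textwrap

    IFACE = "wlan0"  # placeholder interface name

    # Give the AP interface a static address and bring it up.
    subprocess.run(["ip", "addr", "add", "192.168.50.1/24", "dev", IFACE], check=True)
    subprocess.run(["ip", "link", "set", IFACE, "up"], check=True)

    # SSID, PSK and channel are a handful of config lines: no manifest,
    # no permission prompts, no review process.
    hostapd_conf = textwrap.dedent(f"""\
        interface={IFACE}
        ssid=MyTestAP
        hw_mode=g
        channel=6
        wpa=2
        wpa_passphrase=correct-horse-battery
        wpa_key_mgmt=WPA-PSK
        rsn_pairwise=CCMP
    """)
    with open("/tmp/hostapd.conf", "w") as f:
        f.write(hostapd_conf)

    # Runs in the foreground; Ctrl-C to stop.
    subprocess.run(["hostapd", "/tmp/hostapd.conf"], check=True)
    ```

    On Android, the comparable capability now sits behind manifest declarations and runtime permission prompts - and some of it, as said, is gone for ordinary apps entirely.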

    The obvious reason is that Google tries to protect a billion inexperienced people from scammers and malware.

    But it kills the ability to do non-standard things, and the concept of your device being your own.

    And a big problem is that so many apps rely on advertising for their income stream. Spying a little has been legitimized and turned into a business under Android. To maintain control, the operating system then has to be restrictive of apps. Which pisses off developers who have a trusting relationship with their customers and want their apps to have the freedom to operate.