• 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: July 1st, 2023


  • The idea is that the string of lights has a male end and a female end. That way you can have several daisy chained and just plug the one with the male end into the outlet. But if you plan it wrong then you may end up with the wrong end in the wrong place, in which case yeah, use an extension cord or hang the lights all over again.

    Oh and it’s actually relatively safe this way… Each string of lights normally has a fuse in it, so it prevents the cords from carrying more current than they are designed for.
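    To make that concrete, here’s a back-of-envelope sketch of why the fuse matters for daisy chaining. All numbers are assumed example values (a common in-line fuse rating and a typical 100-bulb incandescent string), not specs from any particular product:

```python
# How many light strings can be daisy-chained before the first in-line
# fuse should blow? Assumed example values: a 3 A fuse per string, and
# one string drawing roughly 0.34 A at 120 V (~40 W).
fuse_rating_a = 3.0      # assumed fuse rating, in amps
string_draw_a = 0.34     # assumed current draw of one string, in amps

max_strings = int(fuse_rating_a // string_draw_a)
print(max_strings)  # strings you can safely chain off one outlet
```

    With these assumed numbers you get 8 strings; LED strings draw far less, so their limit is much higher.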



  • The closest analogy is specific tech skills, like say DBs: for a small firm it’s just something one backend dude knows decently; at a large firm there are several DBAs who help teams tackle complex DB questions. Same with, say, search: first Solr, and nowadays Elastic.

    Yeah I mean I guess we’re saying the same thing then :)

    I don’t think prompt engineering could be somebody’s only job, just a skill they bring to the job, like the examples you give. In those cases, they’d still need to be a good DBA, or whatever the specific role is. They’re a DBA who knows prompt engineering, etc.


  • I’m totally willing to accept “the world is changing and new skills are necessary” but at the same time, are a prompt engineer’s skills transferable across subject domains?

    It feels to me like “prompt engineering” skills are just skills to complement the expertise you already have. Like the skill of Google searching. Or learning to use a word processor. These are skills necessary in the world today, but almost nobody’s job is exclusively to Google, or use a word processor. In reality, you need to get something done with your tool, and you need to know shit about the domain you’re applying that tool to. You can be an excellent prompt engineer, and I guess an LLM will allow you to BS really well, but subject matter experts will see through the BS.

    I know I’m not really strongly disagreeing, but I’m just pushing back on the idea of prompt engineer as a job (without any other expertise).



  • Not a “hater” in terms of trying/wanting to be mean, but I do disagree. I think a lot of people downvoting are frustrated because this attitude takes an issue in one application (yay), for one distro, and says “this is why Linux sucks / can’t be used by normies”. Clearly that’s not true of this specific instance, especially given that yay is basically a developer tool. At best, “this is why yay sucks”. (yay is an AUR helper - a tool to help you compile and install software that’s completely unvetted - see the big red banner. Using the AUR is definitely one of those things that puts you well outside the realm of the “common person” already.)

    Maybe the more charitable interpretation is “these kinds of issues are what common users face”, and that’s a better argument (setting aside the fact that this specific instance isn’t really part of that group). I think most people agree that there are stumbling blocks, and they want things to be easier for new users. But doom-y language like this, without concrete steps or ideas, doesn’t feel particularly helpful. And it can be frustrating – thus the downvotes.


  • 100% monitoring and control doesn’t exist. Your children will find a loophole to access unrestricted internet, it’s what they do.

    Similarly, children will play in the street sometimes despite their parents’ best efforts to keep them in. (And yes, I would penalize Ford for building the trucks that have exploded in size and are more likely to kill children, but that’s a separate discussion.)

    I get what you’re saying, I just think it’s wrong to say “parental responsibility” and dust off your hands like you solved the problem. A parent cannot exert their influence 24/7, they cannot be protecting their child 24/7. And that means that we need to rely on society to establish safer norms, safer streets, etc, so that there’s a “soft landing” when kids inevitably rebel, or when the parent is in the shower for 15 minutes.





  • the_sisko@startrek.website to Memes@lemmy.ml · EVs · edited · 1 year ago
    Yeah, but they require somewhere in the neighborhood of a thousand pounds of batteries to do so. Some of the more egregious ones need multiple thousands, e.g. the electric hummer whose battery alone is heavier than an ICE Honda Civic. Whereas a dozen gallons of gasoline (roughly 72lbs at 6lb/gal) can power that same ICE Civic for a nearly equivalent range, while causing much less wear & tear on the roads, and likely releasing less tire particulates due to the reduced weight. Of course it still releases CO2 and other nasties…

    But yeah, the energy density of EV batteries is still super bad. It’s just “good enough” that we’re making it work.
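    Here’s a rough sketch of where the “thousand pounds of batteries” figure comes from. All figures are approximate assumptions (gasoline ~12.9 kWh/kg of chemical energy, an ICE converting ~25% of it to motion, a lithium-ion pack storing ~0.18 kWh/kg and delivering ~90% to the wheels), so treat it as an order-of-magnitude estimate, not a spec:

```python
# Battery mass needed to match the usable energy of a dozen gallons
# of gasoline. All constants are assumed ballpark figures.
GAS_KWH_PER_KG = 12.9    # chemical energy of gasoline
ICE_EFFICIENCY = 0.25    # fraction an ICE turns into motion
PACK_KWH_PER_KG = 0.18   # pack-level Li-ion specific energy
EV_EFFICIENCY = 0.90     # battery-to-wheels efficiency
LB_PER_KG = 2.2046

gas_mass_kg = 72 / LB_PER_KG                      # 12 gal at ~6 lb/gal
energy_at_wheels = gas_mass_kg * GAS_KWH_PER_KG * ICE_EFFICIENCY
pack_mass_kg = energy_at_wheels / EV_EFFICIENCY / PACK_KWH_PER_KG
print(round(pack_mass_kg * LB_PER_KG))            # equivalent battery, in lb
```

    With these assumptions it lands around 1,400 lb of battery to replace 72 lb of gasoline, which is the gist of the comparison above.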


  • the_sisko@startrek.website to Memes@lemmy.ml · EVs · edited · 1 year ago

    As I understand it, the big issue is energy density? A tank of gasoline takes you quite far compared to an equivalent tank of hydrogen.

    And don’t get me wrong, lithium batteries are super bad at this too, but I do think that has been a limiting factor for H cars.

    And then there’s the whole tire dust issue which is definitely a conversation worth having.
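    For the hydrogen side, the problem is mostly volumetric. A quick sketch with assumed ballpark figures (gasoline ~9.5 kWh/L; hydrogen compressed to 700 bar ~1.4 kWh/L; a fuel cell converting ~55% to motion vs ~25% for an ICE):

```python
# Litres of fuel needed to deliver 100 kWh at the wheels.
# All constants are assumed ballpark figures, not measured values.
GAS_KWH_PER_L = 9.5   # volumetric energy density of gasoline
H2_KWH_PER_L = 1.4    # hydrogen gas at 700 bar
FC_EFF = 0.55         # fuel cell drivetrain efficiency
ICE_EFF = 0.25        # internal combustion efficiency

gas_litres = 100 / (GAS_KWH_PER_L * ICE_EFF)
h2_litres = 100 / (H2_KWH_PER_L * FC_EFF)
print(round(gas_litres), round(h2_litres))
```

    Even with the fuel cell’s efficiency advantage, the hydrogen tank comes out roughly 3x the volume of the gasoline tank under these assumptions, and that’s before counting the heavy pressure vessel around it.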






  • someone playing music on their phone through the car audio (super common now): tapping the phone to ignore a call is just as much a crime as texting a novel to an ex.

    They are all crimes. Set up your music before you go, or use voice command. Ignore the call with voice command or just let it go to voicemail. Lol. It’s not hard.

    And you are kidding yourself if you think almost every person driving for a living is not at some level forced to use their phone by their company (I was)

    This is a great example of the strength of this system: this company will find its drivers and vehicles getting ticketed a lot, and they’ll have to come up with a way to allow drivers to do their jobs without interacting with their phones while moving at high speeds.

    I would much rather have someone pulled over when driving erratically than the person getting an automated ticket 3 weeks after mowing down a pedestrian.

    The camera doesn’t magically remove traffic enforcement humans from the road. They can still pull over the obviously drunk/erratic driver.


  • I literally watched cops driving while on their phones every day after it was made illegal. Nothing was done, nothing changed; they hand out tickets while breaking the same rules.

    I mean yeah, fuck the police :) Seems like we’re in agreement here.

    “Might kill someone” is a precrime; an issue with these tickets in this case is that without the AI camera nothing would have been seen (literally victimless). If someone crashes into anything while on their phone, the chance it will be used in prosecution is low.

    Using your fucking phone while driving is the crime. This isn’t some “thought police” situation. Put the phone away, and you won’t get the ticket. It’s that simple. We don’t need to wait for a person to mow down a pedestrian in order to punish them for driving irresponsibly.

    In the same spirit, if a person gets drunk and drives home, and they don’t kill somebody – well that’s a crime and they should be punished for it.

    And if you can’t handle driving responsibly, then the privilege of driving on public roads should be revoked.

    I don’t think texting while driving is a good idea, like not wearing a seatbelt. However, this offloads a lot to AI; distracted driving is not well defined, and considering the nuances I don’t want to leave any part to AI. Here is an example: eating a bowl of soup while operating a vehicle would be distracted, right? What if the soup was in a cup? What if the soup was made of coffee beans?

    This is such a weird ad absurdum argument. Nobody is telling some ML system “make a judgment call on whether the coffee bean soup is a distraction.” The system is identifying people violating a cut-and-dried law: using their phone while driving, or not wearing a seatbelt. Assuming it can do it in an unbiased way (which is a huge if, to be fair), then there’s no slippery slope here.

    For what it’s worth, I do worry about ML system bias, and I do think the seatbelt enforcement is a bit silly: I personally don’t mind if a person makes a decision that will only impact their own safety. I care about the irresponsible decisions that people make affecting my safety, and I’d be glad for some unbiased enforcement of the traffic rules that protect us all.


  • I’m definitely a fan of better enforcement of traffic rules to improve safety, but using ML* systems here is fraught with issues. ML systems tend to learn the human biases that were present in their training data and continue to perpetuate them. I wouldn’t be shocked if these traffic systems, for example, disproportionately impact some racial groups. And if the ML system identifies those groups more frequently, even if the human review were unbiased (unlikely), the outcome would still be biased.

    It’s important to see good data showing these systems are fair, before they are used in the wild. I wouldn’t support a system doing this until I was confident it was unbiased.
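    The kind of fairness check I mean is simple to state: compare how often the system wrongly flags people from each group. A minimal sketch, with entirely made-up data and a hypothetical “flagged by the model” field:

```python
# Compare false-positive rates per group for a hypothetical enforcement
# model. Records are fabricated examples: (group, actually_violating,
# flagged_by_model). A fair system shows similar rates across groups.
from collections import defaultdict

records = [
    ("A", False, False), ("A", False, True), ("A", True, True), ("A", False, False),
    ("B", False, True), ("B", False, True), ("B", True, True), ("B", False, False),
]

fp = defaultdict(int)          # innocent drivers flagged, per group
n_innocent = defaultdict(int)  # innocent drivers observed, per group
for group, violating, flagged in records:
    if not violating:
        n_innocent[group] += 1
        fp[group] += flagged   # True counts as 1

for group in sorted(n_innocent):
    print(group, fp[group] / n_innocent[group])
```

    In this toy data, group B’s innocent drivers get flagged twice as often as group A’s — exactly the disparity you’d want ruled out with real data before deploying such a system.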

    * it’s all machine learning - NOT artificial intelligence. No intelligence involved, just mathematical parameters “learned” by an algorithm and applied to new data.