Inside of you there are two wolves. One looks like a fox with owl makeup and the other like a pug-nosed shepherd dog with strabismus.
Which fucker moved the garlic rope and opened the coffin of Nazi eugenics theory?
Not to mention that the “more and better teachers” mantra should be applied all the way down to primary education.
Unfortunately our societies prioritise these things differently.
I’m not excluding hiring good teachers and TAs from the picture. I’m not excluding paying them a good enough wage to attract talent either. But that’s another conversation.
In my university days, lectures were paired with seminars. Those had a max size of about 30 and a TA who would explain and help apply the lecture material. The lecturer would visit seminars on rotation to ensure the quality of the TAs. And the kicker? The whole gang would be there for the (free-form) exam, including the grading.
In short: it can be done; that’s where we come from, after all.
And personally, I hate multiple-choice tests: there’s no opportunity to see a student’s thought process, or to find and be lenient towards those who got the theory but forgot to carry a 1 somewhere. They simplified the grading, sure, now a machine can do it, but that’s about it.
Here’s a novel idea: maybe it needs fewer students per teacher. Or more teachers per student, however you want to put it.
I would laugh too if this wasn’t going to be a major influence on US policy towards Ukraine in the coming months.
The best thing you can be while on the road is predictable. Applies to everyone in traffic too.
Huh, I was spot on with martial arts 😂
I speak three languages and I can count in ten.
Not a hard guess, to be honest, lots of people pick up numbers from popular culture (Spanish songs are big on counting, but weirdly, German ones as well). And if you study an Eastern martial art, chances are you’ll learn to count to ten in the corresponding language from your instructor.
Or I don’t know, maybe my brain is weird and I’m collecting numbers, that’s a non-zero possibility.
You seem to mistake popularity for acceptance. Warhol was hugely controversial in his day, especially at the beginning of his career (the Campbell’s Soup Cans show?). Most great artists are controversial, because they tend to push the status quo until it shatters.
And tone down the ad hominem, this isn’t reddit, we’re just having a conversation about art.
Hey, I have that banana duct taped in my living room! 🤣
Art is subjective, always has been. I remember visiting a modern art museum in Germany years ago, and seeing a weed growing at the base of the wall in one of the rooms. Looking closer, I could see the weed was a very lifelike bronze cast, but in that moment the juxtaposition was jarring enough to make me question what art really is. I doubt it will have the same effect on everyone, but for me that was significant. And memorable, as you can see.
Of course not, and I was not implying that either. I was merely illustrating the influence of technology on artistic expression.
Case in point: silkscreen collages, to stay in the analog domain. Andy Warhol is widely recognised as an artistic genius these days. That wasn’t the case back then.
No judgement, mate, art is a matter of taste. Always has been.
My point was more along these lines: every single piece of AI imagery in the public space has been selected and put there by a human. We are the feedback loop in this space. And if the vast majority of it sucks, well, that says something about the people doing the selection, doesn’t it?
I read an article recently about the difficulties artists in animation studios face when using AI, which partly inspired my original reply. Sure, AI is great at, say, generating a magical fairy forest. But if it’s almost good enough and you want small, incremental improvements to an existing image, that’s where it fails. Sure, it will generate another magical forest, but even a nearly identical prompt can lead to wildly different results.
To wit: for you and me, almost is probably good enough. But that’s not the case for a professional.
The man is a genius, no doubt about it. I didn’t know about the mathematical analysis of his paintings though, that’s really cool. Thanks for the link.
I do, but not for the reasons you think.
What makes a Jackson Pollock painting so valuable? I’ve heard time and again people saying “I could do that too”, “it’s just paint thrown at canvas” etc. But it’s not the actual paint on the canvas that makes the painting. It’s Pollock’s aesthetic sense that chose that color, that pattern, and that’s what you get to see when you look at his paintings. It’s an image that said something to him, and we have decided to put value on that.
The vast majority of AI-generated imagery is not art, just like the vast majority of people throwing paint at canvas won’t produce a Jackson Pollock painting. It might become art if used by an artist with purpose and intention. Which at the moment is pretty hard, given that small, iterative adjustments are really hard to do with AI. But in the end, AI is yet another tool that could allow humans a bit more freedom of expression.
It used to be that a painter literally had to prepare his paints from raw pigments. Then he could buy them pre-made. When digital art came along, we gave up paints entirely. Now we skip the painting part. The one common thread, though, is the honest expression of intent, and the feedback loop of the artist’s aesthetic sense. If either is missing, you get kitschy garbage. And that’s most AI-generated imagery these days.
I think you nailed it. In the grand scheme of things, critical thinking is always required.
The problem is that, when it comes to LLMs, people seem to use magical thinking instead. I’m not an artist, so I oohed and aahed at some of the AI art I got to see, especially in the early days, when we weren’t flooded with all this AI slop. But when I saw the coding shit it spewed? Thanks, I’ll pass.
The only legit use of AI in my field that I know of is a unit test generator, where tests were measured for stability and code coverage increase before being submitted for dev approval. But actual non-trivial production-grade code? Hell no.
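For the curious, here’s roughly what such a gate looks like, a minimal sketch assuming pytest and coverage.py. The five-run stability check, the paths, and the overall flow are my own illustration, not the actual pipeline that team used:

```python
# Hypothetical gate for machine-generated tests: keep a candidate only if it
# passes repeatedly (no flakiness) and raises total line coverage.
import json
import subprocess
import sys

RUNS = 5  # re-run each candidate a few times to weed out flaky tests


def passes_consistently(candidate: str) -> bool:
    """A generated test must pass on every run; one failure disqualifies it."""
    for _ in range(RUNS):
        if subprocess.run(["pytest", "-q", candidate],
                          capture_output=True).returncode != 0:
            return False
    return True


def line_coverage(*pytest_args: str) -> float:
    """Run pytest under coverage.py and return the total percent covered."""
    subprocess.run(["coverage", "run", "-m", "pytest", "-q", *pytest_args],
                   capture_output=True)
    subprocess.run(["coverage", "json", "-o", "cov.json"],
                   capture_output=True)
    with open("cov.json") as f:
        return json.load(f)["totals"]["percent_covered"]


if __name__ == "__main__":
    candidate = sys.argv[1]  # generated test file, kept outside tests/ for now
    if not passes_consistently(candidate):
        sys.exit("rejected: flaky or failing")
    baseline = line_coverage("tests/")               # existing suite alone
    with_candidate = line_coverage("tests/", candidate)
    if with_candidate <= baseline:
        sys.exit("rejected: no coverage gain")
    print("forwarded for dev approval")  # a human still signs off
```

Note the last line: even the tests that clear both bars only get *submitted* to a developer, not merged.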
You know, I was happy to dig through 9yo StackOverflow posts and adapt answers to my needs, because at least those examples did work for somebody. LLMs for me are just glorified autocorrect functions, and I treat them as such.
A colleague of mine had a recent experience with Copilot hallucinating a few Python functions that looked legit, ran without issue and did fuck all. We caught it in testing, but boy, was that a wake-up call (the colleague in question has what you might call an early-adopter mindset).
A 100% accurate AI would be useful. A 99.999% accurate AI is in fact useless, because of the damage that one miss might do.
It’s like the French say: Add one drop of wine in a barrel of sewage and you get sewage. Add one drop of sewage in a barrel of wine and you get sewage.
Pros and cons of breadbox? Any paladins out there willing to enlighten us?