This is correct, but there's also a meta comment to be made that research has an "Instagram problem". What he's talking about isn't really even AI so much as deep learning: there are lots of other branches that now get virtually no play relative to deep learning. And then there are all sorts of adjacent CS fields: image processing (when was the last time you saw someone talk about wavelets?), language analysis, and other research avenues, often better suited to many problems, that get buried under the hype.
This is why academic research is so important. All the corporate money is flowing to deep learning; grant agencies should be sustaining the other, more fundamental areas.
Academic research has the same problem. When I studied neuroscience, flashy fMRI studies received substantially more funding than fundamental research. Without understanding how small neural nets work, it's difficult to construct bottom-up theories of the brain.
This is the bane of my existence. I work in a lab that uses as close to industry-standard coding practices as we can manage with our current skeleton-crew numbers (unit tests, CI, version control, code quality standards and norms). We lost half our senior student programmers in the last two years, replacing them with first years. You know what happened? Nothing. Our development continued as normal. I'm leaving in the next six months, and my contributions to our lab's projects will continue to be maintained. I will receive no emails at 2 am if somebody finds a bug.
In the average academic lab, the GUI they just published had one maintainer, and it’s a grad student that’s been gone for a year already. Good luck getting an email about support, even though it doesn’t work out of the box and the code is unreadable.
The problem is academia just doesn't pay a livable salary these days. When I'm getting hit with thousands of dollars of medical bills on a regular basis, despite having insurance, having a corporate salary cushion helps for peace of mind. A lot.
The goal of academic funding is to help the military and also/later the economy.
To think otherwise is naive. The whole point of research is to have an advantage. Unless that money has no strings attached, you're beholden to someone. Every historical scientist had a major donor, whether it was some kingdom or nation.
Call me naive then. While I agree that a fair amount of academic research funding is defense/military related, it's certainly not all of it. If said research is funded by organizations like DARPA, then yes, absolutely. There are other sources of funding out there, though, that aren't defense-backed: NSF and NIH funding come to mind, among others, as does corporate-backed academic research. When I was in academia, I worked on research funded by grants from the NSF, Intel, and Microsoft, to name a few.
I'm just saying that the end goal of your research is return on investment.
So many folks get no funding, even if their field of work is important to society at large.
Intel and whoever else funded you because they saw some long-term value in your work.
I wish research was done with "purer" intentions, but hey... I mean, we are having this conversation based on a technology invented by the government/military (ARPA). On my drive home I used GPS. And at home I'm using a microwave to make some popcorn lol.
>"What he's talking about isn't really even AI so much as deep learning: there are lots of other branches that now get virtually no play relative to deep learning."
Can you say what some of the non-deep-learning branches in AI are that aren't getting play relative to machine learning/deep learning?
Andrew Ng in the linked message said himself: "Maybe you’re training XGBoost on a structured dataset and wondering if you’re missing out on ChatGPT. You may well be onto something even if XGBoost isn’t in the news."
Boosting generally is very competitive with (or better than) neural networks on small to midsized datasets.
Bayesian techniques are always interesting.
Personally I think there are great opportunities linking knowledge engineering approaches (which are very manual-labor heavy) with neural network approaches so we can bridge the taxonomy/unstructured data divide.
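To make the XGBoost point concrete, here's a minimal sketch (the dataset and hyperparameters are purely illustrative; assumes xgboost and scikit-learn are installed):

```python
# Illustrative only: a boosted-tree baseline on a small tabular dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A few hundred shallow trees is often a strong baseline at this scale.
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

On data of that size, a baseline like this is often hard to beat with a neural network, and it trains in seconds on a laptop.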
All AI projects are valuable, but it's annoying and demotivating to work hard on a unique and useful project and have no one read/use it because it doesn't stand out in the extremely rapidly-evolving ecosystem.
Most of my AI text-generation and image-generation projects and tools have already been made obsolete by technology released in the past few months, and I've almost given up competing.
One piece of advice I'll never forget (from an interview with the late mathematician Maryam Mirzakhani) is that you don't have to race faster than everyone else (although if you can, that's great!). You can just run in a direction no one else is running toward (and hopefully be persistent). Then you're only racing yourself, and speed is largely irrelevant.
I myself know I'm very slow, especially at developing projects. So I don't try to race anyone; I just contribute to things I know are neglected. There are so many neglected problems that you don't need to work on the most coveted fields.
“Run in the other direction” is so generic that it isn’t helpful. No one else going in that direction is, at best, unrelated to the quality of the idea and, at worst, a heuristic that the idea is bad.
It's not the other direction, it's an other direction. The distinction is important.
If there are only two directions and everybody runs in one of them, there's probably a good reason. If there are virtually infinite directions, trusting your gut about an unexplored direction that you find interesting is a much better bet. See also: startup advice.
When asked how to keep up with all of this AI/ML stuff when starting your own AI business, Jeremy Howard (general genius and co-creator of fast.ai) replied that you don't need to know everything and keep up with everything. Pick a niche and focus just on that. And if that keeps changing too fast or is too much, pick a niche inside that niche.
don't give up max, you're tremendously inspirational to the rest of us who are even further behind in trying to keep up with what's happening. you're by far one of the best simplifiers/explainers of what's going on who actually regularly builds useful stuff. it's just a matter of time until you hit the thing that goes vertical, and all this work to date will have been practice. you can only connect the dots looking back.
I've called this the rapidly shrinking innovation-disruption-adaptation cycle. It quickly becomes the new rat race.
"It will be unlikely we will be able to plan for the disruption as it will simply happen before anyone can reason about what is likely to happen next or what to do about it. Any such planning efforts will be obsolete before they begin"
The problem is that the dynamic of AI is leveraging massive computing power into something flashy; OpenAI and DeepMind have been going back and forth on this for a while, with OpenAI also leveraging a lot of human labor in ChatGPT.
It's a bit like the original dot-com bubble, when people were throwing what were then vast amounts of money around just to be in the space, without knowing what the business model was going to be. The business model then turned out to be advertising, but it's far from certain that advertising is well suited to a chatbot or other new AI products, which are computationally (and dollar-wise) expensive.
AI projects are often valuable if given away. I assume people will ask how valuable per watt they are after a while.
And remember, we are 6-12 months away from having good, open-source ChatGPT-class models and the software support to make it possible to run them at home.
I've almost given up just trying to keep up with the field... let alone competing. I'm not sure how people are doing it (keeping up) at this point given the pace of change.
I've definitely been feeling deflated. Every angle I've come up with, some project has narrowly shipped before me. And they gain so much popularity so quickly that mindshare per project becomes a power law overnight. It's like the first person to release is at the top of the App Store, and there's no unseating them; the feedback loop is already too powerful. It very much feels like first to release wins. And I swear, the quality of the code I've seen in some of these projects that are getting thousands of stars is abysmal. It's making me think that releasing vaporware to capture mindshare, then actually building something valuable, is the only way to compete.
It is not enough to be the first one to ship. I shipped a Mac app that uses Whisper to transcribe audio before anyone else did; I even implemented a dictation algorithm on top of it. It was my first app; I am not an iOS developer. However, after a while, someone with a better track record in app development came along and made something similar. He knows how to build an audience, and he knows how to market an app. As a result, he smashed the app I developed out of the market.
So I don't think being the first one is that important, nor is taking care over clean coding and so on. Someone with better marketing skills will probably wipe out all the work you did.
I may be out of the loop, but I have seen very few apps with new AI tech get any traction. Or, I've seen them, tried them for a few minutes, and then went on with my day. Most of these are just a flash in the pan: a cool tech demo or some initial hype from an over-promising landing page, but not many viable businesses. Just someone downloading a model, fine-tuning it on some dataset, and trying to pawn it off.
And if some project "narrowly shipped before you", how much value or moat is there really if we're talking about multiple projects that are so quick and easy to do? Sorry to be a bit harsh; I just don't get feeling deflated by others making the same MVP as you if it's just a natural idea on top of some existing tech. Your focus on code quality is also moot: it's delivering value that counts, not the tech stack beneath.
Isn't this extremely common advice? All the way down to "just do it all manually then build the software" where applicable (obviously not in AI). No one cares if the code is crap if it does what they want and you can always iterate on code after securing a revenue/funding stream.
I would not put so much stock in the first mover effect. I can't bring it to mind immediately, but there was an excellent podcast that brought up how the second movers oftentimes do better in the end, at least as a company.
Case in point: a little while ago there was a big race, run by Stanford University, called DAWNBench, for training CIFAR10 to 94% accuracy in the shortest amount of time. During that time there was a lot of cool movement, and there were a few notable people who really moved the bar (Chen Wang is underrecognized for their contributions, while David Page is relatively well known for his, which truly are excellent).
I remember reading Page's notes on it and thinking that I could never come up with the caliber of ideas that he brought to the training table for these networks, and plus, 24 seconds on a V100?!?! Crazy.
That was years ago that I saw it. I didn't touch it at all; no one really did, since transformers were sorta the big thing by then, and still are. And the one or two times I did try to do anything with it, anything I tried made it worse, and I really struggled with his code (it's very functionally written, stylistically; very cool, but it didn't jibe with my rapid experimentation style).
In any case, I thought maybe I could do better if I really and truly took a crack at it. And even if I didn't, I sorta needed a good living resume to prove that I could make a good software project. So I reimplemented it in a more hackable (to me, at least) kind of way, a la Karpathy's nanoGPT (and was almost way too meticulous with writing, organizing, and documenting my code), reorganized and streamlined a few things, and moved it to a more-accessible-to-me GPU, an A100. ~18.1 seconds or so (17.2 with some other open-source code). So that was the baseline.
Since then, every single time it feels like I've found all that I can find, there's something else (eventually, at least) waiting behind that wall for me. 18.1 seconds turned to 12.7, which I thought was about as far as I could go. Then 12.7 turned to 12.3, which turned to ~9.91 seconds. Then ~9.91 seconds became, incredibly, ~7.7 seconds or so.
Earlier this week I released an update that brought it to roughly ~6.97-6.99 seconds or so. That is unreal, to me. At first I was numb to how much things could improve; now I'm sorta in denial. The throughput is totally insane, roughly 88,389 training images through the GPU _every second_. This also means that our time per image is roughly ~11.35 microseconds, which is...blistering, to say the least. It's hard for me to wrap my own head around it.
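(If you want to sanity-check that arithmetic yourself, it's one line; the throughput figure is the only input:)

```python
# Back-of-the-envelope check: at ~88,389 images/second, each image
# takes about 1/88389 seconds of GPU time.
throughput = 88_389                    # training images per second
per_image_us = 1 / throughput * 1e6
print(f"~{per_image_us:.2f} microseconds per image")  # ~11.31, close to the quoted ~11.35
```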
From the experience I've had, I'd say I've felt feelings similar to what you've talked about here, especially when someone who already has a lot of followers, coming from a more hype-oriented point of view, does something like glue huggingface code together, make a fancy, well-stylized GIF, and get a ton of adoration for it.
But that said, the market for quality software is small, and the market for hype is large. Not that the above project doesn't have hype, but it's meant to be more valuable as a researcher's workbench than as a toy. It did thankfully get a huge boost early on because Karpathy tweeted it out, but even the last release, for example, maybe got 10 likes on Twitter, and an additional 10-20 (or 30) stars on GitHub from the sum total of interactions (including a Reddit post), if even that.
But! The good thing in some senses is that the people I get to talk to if I'm proactive, the ones who like this software, are often people who are known or are skilled in their field of work. And I honestly can't say I have many warm fuzzies about that from lived experience yet, as it's new to me. But I can say that I appreciate the opportunity.
Every time I've thought about going down the hype/vaporware road just to get eyes on the project(s) I do, I have to ask myself: "Do I want these eyes on the project? Do I want this kind of attention from this kind of person to make up most of my interactions and what I am building?"
Sure, if you have to feed a family, that sort of makes sense. And we have to feed ourselves and our emotional needs too. But maybe we can be okay with being content with the smaller audience, as it is. At least, that's what I'm working towards, though I do fear that I'll stumble and give in to the allure of chasing the hype every now and again. And if I do, I'm sure that particular extreme emptiness (of a sort) will help pull me back towards just working on being content with the little things I have.
I want to close with a video that feels like it was made almost exclusively for you, and I would like to ask you to watch it in its entirety if you have the time. It talks about content creation (which is what we do, in a sense), but is taught in a way that is very general, and I think it's the best take I've ever heard on this topic in a condensed/beginner-friendly way, that I can remember at least.
It should not only help alleviate some of your concerns or negative feelings about the shipping arms race, it'll also give you clarity on good next steps that will hopefully help contextualize things and give a good 'path forward' to making software that people like. I really cannot recommend this video enough; the wisdom is simple, practical, distilled, and hard-won (and has certainly helped me, I am glad I got to learn this earlier rather than later): https://youtu.be/lNzWsp5UUPA
Happy to discuss or offer any thoughts on any questions. I do recommend the video first; I often enjoy talking about this particular kind of topic.
This is so true, but following this advice takes a respectable amount of personal discipline, combined with picking a very specific niche to focus on. A business with an R&D core such as AI should only consider generalizing (if at all) after developing concrete confidence that you are on, and will stay on, the bleeding edge of your initial niche. It's easy to feel ashamed when you tell someone you're doing X, and enough lay (tech) persons ask if it uses the trending research technology, and you're not using it. It may also make you feel like you should use that tech, but it's likely a costly distraction more often than not.
Our company [0] developed a cutting-edge computer vision system focused on detecting cardio machine exercise cadence (hyper specific!) and became the only reliable camera-based solution to do so. We then tried to generalize to all exercise motion (rep tracking, a still-unsolved problem), achieved mediocre success, and later put the exploration to sleep because we think waiting for other technologies to mature would be easier and faster (better 3D cameras, AI pose models, etc.). On the other hand, we've picked other niches that meet our business needs to expand our CV R&D into, with pretty good success, though mostly just for internal use (video content creation tools). More importantly, we're still the best camera-based indoor cardio detection tech out there, and that's a big part of why we're still alive as a bootstrapped business founded in 2010.
Idk about AI having an Instagram problem, but I know Instagram has an AI problem.
So many fake accounts with AI actors that send you chat messages pretending to be real people.
They even react to comments and can discern good from bad comments.
At first it was interesting. Now it's just annoying.
Then there's the Instagram algorithm: if you comment on coffee ads that you drink tea, you'll get tea ads.
It will show you progressively more naked women, cameltoes, and similar softcore erotica.
Sometimes even real porn. When you report those porn accounts, your report will be denied without even being reviewed.
But God forbid you post the word tits. Instant harassment automoderation.
All the women that follow you instantly message you saying hi, how are you, how old are you, what is your name, where are you from etc.
I think they're infobots trying to build profiles and sell your data.
Some play the old "I'm a hot girl but can't post my picture, please buy me an iTunes gift card" routine, like I was born yesterday. And their game is so dumb.
They see you commenting on a picture you liked, take that picture, claim to be that person (only on a private account), and say they picked you, their loyal fan.
Many of them are cheap porn actresses, or accounts built from the pictures those porn actresses post, only modified by AI.
I am so tired of that platform. And then they serve you small one-minute clips of stand-up comedy, cats or dogs, or some person having a stupid chat with themselves. Let's not forget those "how do I say it at my workplace" "expert" vids.
And lots of ads in between.
Why are you still using it? (Not trying to troll you, I'm legitimately curious -- it really doesn't sound like it's offering you anything positive at this point)
As someone who uses Instagram but has a todo-list item to close my FB/Instagram/Meta account, the reason I still use it (and only Instagram) is to follow artists: a hyper-curated list of individuals that post interesting things that I can check up on. It's what I lost when I shuttered my Twitter account. I use it as a glorified RSS feed rather than as a way to interact socially.
But yeah, the bad outweighs the good, and other than spending the time to archive data, I do plan to shut it down.
No, I follow athletes related to cycling, swimming, skiing, etc. What content does Instagram promote to me? Female models dressed in cycling clothes with their zippers down. Maybe because their algorithm says that it's the "most trending cycle-tagged content at the moment". But it's not related to the professional cyclists I'm following at all. Same goes for almost all content/tags: there will be some half-nude version of it that gets promoted.
I follow art-related stuff and nature photography, mainly digital illustration and I do NOT get any soft-erotica stuff. I only get what I'm interested in.
I will say that I prefer going to the Explore tab rather than the Reels tab, because the former starts by showing relevant stuff and then, as I scroll down, it turns into TikTok skits.
Not the OP, but it seems like the actual usage of the platform caused the most trouble for them. From my own anecdotal experience of using modern day social media, it seems who you follow establishes the baseline of what garbage they send your way. But any sort of engagement, even down to which posts you hover over the longest, quickly influences your feed.
I almost never use Instagram, mostly follow military-related fitness [1] and car stuff... and I still get an endless feed of curvaceous women thrust in my face.
That viewpoint baffles me because it has never been easier to make text, image and other classifiers. I think though that ChatGPT has attracted people who were into NFTs two months ago and people like that find downloading a model from huggingface about as hard as Medium users seem to find blogging.
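To make that concrete, the whole download-a-classifier flow is a few lines these days. A minimal sketch, assuming the transformers library with a torch backend (the model is just whatever default the pipeline ships with):

```python
# Illustrative: a ready-made text classifier in two lines via transformers.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model on first use
print(classifier("Making classifiers has never been easier."))
# -> [{'label': 'POSITIVE', 'score': ...}]
```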
The viewpoint described in the article, particularly the emotion of envy.
I have been working on this stuff since 2005 or so, and I can do more than I ever could. If there is one thing I am certain of, it is that other people are going to research things and publish even better models in the future, which I'm going to download from huggingface or whatever comes next.
(On the other hand the fact that I know how to pose well-defined problems and I am willing to work hard to make training sets really does put me in the top 5%. If I am kicking myself it is for the times when I didn’t have the courage to ask clients to put the resources in to make training sets.)
These people wanted an audiovisual matching engine a bit beyond the state of the art. The only way they were going to get that then (or now for that matter) is hiring people to label the data. They were funded and had a lot of traction and I think I could have talked them into it if I’d been thinking a little bit.
The article is correct, but it comes from someone who is already in the AI field. LLMs will peak and then mature; that's the high-frequency signal. But there's also a low-frequency component, a more fundamental signal if you like, that tells us that AI is definitely out of academia; it has been for some years, and it's only reaching the general public now. It cannot be ignored.
It has already reached the public a while ago, it just wasn't announced or marketed so heavily. Google for example has been using BERT to improve search results since 2019. If you did slightly more sophisticated english language queries back then, you've already been benefiting from LLMs directly.
How do DDG and Kagi compete with ChatGPT? In my understanding, their "responses" above search results work like Google's knowledge graph, but I'll be happy to be corrected.
There is a pretty big gap between what's talked about at conferences and in blogs vs. the actual, not very sexy, hard data science work that goes into doing "machine learning" at scale.
Where can I get more content of the flavor of the hard data science work that goes into doing "machine learning" at scale? I'm tired of indefensible projects built on OpenAI's APIs.
Well, some conferences tend to reward more concrete/practical work. I helped found Haystack (http://haystackconf.com) as one such idea. But still, people who propose talks aren't going to talk about the mundane work. In the same way, people who submit talks about software aren't going to talk as much about the mundane, everyday, uninteresting (but essential) unit tests they wrote.
Aren't we maybe jumping the gun here a little? Human creations are far better still than AI ones.
People seem to (very understandably) conflate "impressive because of the technology" with "good," but either way that's not going to last forever, even for the most passionate AI-heads. We will take this stuff for granted pretty soon here, and then inevitably remember again what actually enjoying and connecting with a work of art or song or film is.
I think it's probably impossible to actually appreciate an AI work on its own terms right now, simply because it's too loaded with external significance. You can't take it in without considering what it means to like it or not. In this, we seem collectively to be erring on the side of saying it's positive or worthwhile, if only because of the promise each AI work carries with it.
This makes the fervor and proselytizing of some make sense to me at least. Most people have better taste than enthusiasm for AI art would seem to show.
> Human creations are far better still than AI ones.
I have seen many examples of AI-generated art where it's beyond my taste level to see how it could be improved on by human hands. And stuff that I genuinely appreciated for what it is, divorced from how it was made.
I am reminded that very early on with Midjourney, before this wave of AI gen art was wider known, there was a story of someone winning an art competition with a piece generated with it. [1]
But, aside from disagreement on the dimension of "good", the one area AI gen seems clearly unparalleled in is "fast". That's likely enough to disrupt.
> I have seen many examples of AI-generated art where it's beyond my taste level to see how it could be improved on by human hands
This is my overriding point, beyond matters of taste. Everyone who talks about this stuff can't seem to talk about its value without reference to the hypothetical human artist or writer or whoever who didn't make it. The awe of just the fact that it was made by AI can't be separated from one's appreciation of it, especially if you are someone who is incredibly passionate about AI. You say you could appreciate it without reference to the origin, but how can any of us claim that? In the same way one's thoughts about George W Bush's paintings are necessarily influenced one way or another by the artist himself, I just don't think we have the ability to separate these things. But that's not like a failing or anything; it's just how it is. The total, almost dizzying excitement of this AI moment/new future can only reinforce this. The more you are excited about AI art, the less you are probably really looking at it.
As for the competition thing, that looks like a fine illustration for some random fantasy novel or something! It has, to me, so many interesting but superficial markers of artists who use AI: somewhat baroque but stunning complexity, figures and faces of slender and/or beautiful women, a vague sense of "humans right beside a large and great thing." But otherwise, can't you feel a little emptiness in it? If you really try to take anything from it at all other than a fleeting aesthetic experience?
Maybe so, maybe not, but either way, can we even speak at all to what the painting itself is trying to get us to feel or think? The artist states that they were making a statement about AI artwork, but what is the statement in reference to the work? I can't tell, other than, of course, what I am saying: it is nothing about what we see represented in the painting, but simply the painting itself as a painting.
The article goes on indeed to say that the judges knew AI made it, with only speculation that maybe "they didn't understand," but I suspect that's just coming from the vocal anti-AI crowd on Twitter, such that they can say it's unfair or a trick or whatever. Either way, the artist's point wasn't in anything they drew, but simply seeing if they would win, and then telling everyone they did. "..and I'm not sorry." It's all quite telling!
The painting, and all the others like it, can only speak one thing: themselves as technically novel works. Which is why all the pro-AI people find themselves emphasizing this bare aspect of a work. Is a sea sponge in the bona fide style of Francis Bacon a novel work? Definitely. An amazing achievement for a computer? Certainly. But what is it after that?
After the dust settles with all this, we will remember what we like and don't like again. This stuff won't go away at all, and people will continue to make interesting things! But I have a feeling eventually 95% of it will be considered in the same cultural breath as kitsch art pieces and elevator music. But this is a stronger claim I could definitely be wrong about. The main thing is, right now, it's just too early to tell.
Taste is always sticky. I don't really believe in "levels," just differences. But one thing I know for sure is AI art to me is not great; even if it has masterful technique and style, it's just got no inspiration!
> You say you could appreciate it without reference to the origin, but how can any of us claim that?
Easy: you tap the like button without wondering if it was made by/with AI or not. I'm sure many people that don't really care about the tech and just want an endless feed of cool pictures do that.
Yes, and I bet if you were keeping score of someone liking an image or not on their feed, and it was truly blind, and there was even distribution, humans would still win.
> Everyone who talks about this stuff can't seem to talk about its value without reference to the hypothetical human artist or writer or whoever who didn't make it.
Consider that it's because those are the interesting areas of conversation to be had. The overwhelming majority of art I see all day does not warrant comment, and so it goes with AI art that is of a quality to replace that everyday art. The AI art that I do share and comment on tends to be works that are hilariously broken in some bizarre way. And I agree that that's possibly a novelty that will pass, but that would be due in part to AI gen improving to the level where it's consistently indistinguishable from artist-made art.
> I just don't think we have the ability to separate these things.
Perhaps not entirely, but it seems to be an overreach to claim the impact is necessarily debilitating. With my very modest amount of formal arts training, I approach analysis piece by piece. Composition, color, shape. Line and form, light and shadow, anatomy, texture. Subject. Abstraction and realism, style and tone. While not objective, it's easy to clinically assess the subjective quality achieved with an individual element, up to one's limit of taste.
Further, when I am knowingly looking at AI art I'm not doing it from a place of enthusiasm, but criticism. I am aware of a set of flaws that current art gen models are prone to include, and I actively look for them. Each one found speaks to the limitations of the technology, and the patterns suggest challenges that may be un-resolvable with consistency.
So yes, when I say that I genuinely appreciate an AI-made work for itself, I am indeed confident in the truth of that.
> The painting, and all the others like it, can only speak one thing: themselves as novel works.
Your concern with "statements" suggests that the area of art you are talking about is narrow, indeed. To clarify, I speak of the wide world of works made for primarily aesthetic goals. Well no, that's not entirely true. For the purposes of this discussion I've been speaking of static visual arts. But that ranges from museum oil paintings to product labeling, from the stunning to the mundane. That's all preamble to this blunt point: being visually pleasing is sufficient to provide worth. AI gen can provide that.
I am sympathetic however to what may be your position: what meaning can be learned from a work if no mind was behind it to intend meaning? If your appreciation of art requires the belief that you are part of communication between souls, then clearly knowing a work is authored by AI would render that work worthless in your eyes. And I can understand that, or some version of it. It does not apply to me, however. In a death-of-the-author way I care much more about the meaning I can find myself from a work, considering the meaning intended as optional spice.
Lastly, no: I'm aware of what I like and do not, thanks.
> And I agree that that's possibly a novelty that will pass, but that would be due in part to AI gen improving to the level where it's consistently indistinguishable from artist-made art.
Why must it be indistinguishable?? Why not good, but distinct? This is our narrowness of mind right now.
Consider, let's say, 20th century art and music. Abstract expressionism, minimalism. Things like Pollock or Rothko, or Reich.
It probably takes much less than a huge array of GPUs and such to recreate something "indistinguishable" from a Pollock, granted you know your colors and are really trying. And it's, like you say, not art that necessarily "says" anything in the form of a statement.
But regardless, the thing is that while it's not imbued with a statement or meaning that can be told as a story, it can't help but be full of something like "meaning". Post-WWII 20th century art like Pollock's, with its turn to deep abstraction or otherwise nonrepresentational things, came from a shared belief about what art should become at that point. That is, now that they had all seen how broadly shared cultural narratives were so easily co-opted by fascism, the very idea of a work that "speaks" something was suspect.
The fact that Pollock made that kind of art when he did, and made precisely the art he did, is all a part of it. And the content of the work couldn't be any different, or he wouldn't have made it. Unconsciously or not, these decisions enter our minds when we view it; they can't help but to, even if you don't have the training, I'd argue. But either way, its meaning or "worth", as you say, can't be tied to any kind of direct communication.
You could make a model do a million Pollocks or Rothkos, but nobody is really going to care, because the moment for that kind of art has kind of passed, right? What should we fill our canvases with now? Nobody is going to have the same answer, but the point is that artists want to answer that question: not see how quickly or how perfectly they can fill a canvas with whatever, but simply find what should be painted, what is called for. I'll remind you of the specific article I am talking about, where the fear is that humans will be demoralized by AI with respect to what they are passionate about. Not, like, cereal boxes or whatever. Simply pleasing generative works are somewhat a solved art anyway, aren't they?
> I am sympathetic however to what may be your position: what meaning can be learned from a work if no mind was behind it to intend meaning?
I feel like that's not giving the models enough credit, honestly! Isn't it rather filled with a million intentions, which can come together in new wholes infinitely? It is in fact, I think, almost like a strange but also beautiful literalization of Plato's dialogue Meno, if that's what you are referencing with the communication-between-souls bit.
I'm not sure I ever implied you didn't know what you liked. How was I even supposed to know you indeed like that piece that won the state fair? Why, just in general, be so defensive? Am I really saying anything at all that calls for it? I just don't understand the stakes here, I guess.
How can so many people both feel that this is a paradigm-shifting moment in art and be utterly sure of how it's all going to play out, much less how they will feel about it? Is this not the time for some humility? Won't that allow us to navigate this soberly and fruitfully?
I've spent a lot of time thinking about this as well. I think you have some valid points. For better discussions, I'm thinking we really need to begin to differentiate AI art on the values of utility, creativity, meaning, craft and visual appeal as I think very different answers are appropriate for each.
I've written a fairly long piece with some related content, the most relevant portion is the section "AI owns creativity". Would be interested in your point of view.
Maybe, if you have the time, do me a favor: read the (very short) piece I am responding to; it's the link at the very top of the page. If you still have a thoughtful or even snarky comment to make to me after that, then please go ahead. But you're not really giving me anything to work with here, and around here we do strive to have real conversations. I just know you have some good points to make to me!
OK, so let's split the conversation into fine art versus the generic commercial art and design we see everywhere.
Because as I see it, this is how the future is splitting out. This goes back to the blacksmith producing unique items. People still buy things like that to this day and pay a healthy sum of money for them. But this is not the bulk of the economy; the vast majority of consumption is items that are mass-produced with as little human interaction as possible. I don't see it as in any way controversial that large chunks of this market will be automated away.
An example of this is game asset design. There is still a lot of manual human work here, but the workflows are being highly automated by 'smart' tools. I can promise you that my friends in this industry don't want to color in 50 bajillion pieces in an armor set. The tools for transferring things like style across multiple pieces massively reduce the workload. And as time goes on, I expect we'll see these tools affect other areas of art. Another good example is music. A single artist can produce and distribute their own digital music easily. Entire portions of the music can be handed off to tools to put in things like drums and such.
Now when you're playing a game, or in the elevator listening to whatever's being piped out of the speaker, do you think or care about whether it's made by a human or not? It won't stop you from buying a human-painted mural for your wall. You'll still listen to your favorite human artist. But expect huge swathes of the ambient art/design/possibly music around you to be automated in one form or another.
This doesn't make sense to me as a response. I think it surely goes without saying the models are good for things like game asset design or ambient music; I say exactly that in a sibling comment. (Although the ambient music one is a little silly to me personally. Brian Eno already invented procedures and ideas for generative ambient works without a single GPU, like 50 years ago. Not to mention Terry Riley, John Cage, Steve Reich. Consider also the algorithmic nature of classical Indonesian gamelan music. It would be an accomplishment if anything from a statistical model could match the beautiful, simple, almost dumb elegance of such works. It feels like such overkill. But I digress...)
The article which I'm responding to seems to me to be about not being demoralized as an artist in the face of advances in AI, presumably to artists who aren't just trying to get a commission check for some elevator music, but care about what they are making, hence the possibility of being demoralized. And my point is that they don't need to worry, and that it only seems this way right now because of enthusiasm for the idea of AI art itself, not any one work.
In a way, by eliding this crucial context, intentionally or not, you are helping me prove my point! We can't speak to the value/worth/goodness of an AI work without reaching outside it to talk about what it means: its reference to a coming future, its bare ability to be "human-like", or even just matching the spec, as it were.
People are just so caught up in justifying this stuff right now, they don't have even the ability to consider it in itself. This is all I'm saying.
I think if you read what I'm saying you could agree with me without giving up any of your commitments here; just don't be quick to jump on something that seems wrong without (1) extending charity to what the other person is saying and (2) understanding the context in which it is said, in this case the article I was responding to.
got to love how consistently grounded and reasonable and accessible Andrew Ng always seems to be. i think and hope he's a really good role model for the industry, even if most of us will never see anything like his career success.
Mostly it's not inspiration. Just chewing on what is trendy, to fit the narrative and conform. To be liked. To have more attention, more time glued to the screen...
This article is NOT entirely correct.
From a personal perspective, if you do your art to improve yourself, knowing that no one will ever see it and that you're doing it only for your own sake, then it is true: other people's art does not matter.
But it is completely the opposite if your livelihood depends on it, your income or your business. All trades depend on money, and there is a limited amount of human attention out there, equal to the number of people online multiplied by the number of minutes they spend.
So attention is spread unevenly, and that is the reason why some artists thrive and many starve. Same for writers, and many other arts and sports.
If you take tennis as an example, you will see that only maybe 100 or so tennis players earn an amount of money that supports a comfortable lifestyle. Others are struggling to get by.
Same in any art or creative work: only a very small number will actually manage to make a living from their art.
Especially as most projects are 'join the waiting list' when you go there, which I assume means 'waiting for the ChatGPT API to become available', or 'waiting for a self-hosted cheap alternative so I don't have to charge for tokens', or, and this is probably actually the reason for most, 'we have nothing, but Figma works'.
I would really like a different category in ‘show HN’ which excludes vapourware.
> AI develops so quickly that waves of new ideas keep coming: quantum AI, self-supervised learning, transformers, diffusion models, large language models, and on and on.
I wonder what's wrong with us as a species. We love to run in one direction all together, chasing the next big thing, just until the next big trend comes along.
We ask for faster horses, when automobiles would get us where we want to go. It is a lack of imagination for what is possible in the future that keeps us risk-averse and focused instead on a quest for incremental improvements to the status quo.
This was the focus of the idealistic vision of the whole "disruption" meta-meme as it spread through Silicon Valley.
Not to say that maintenance and refinement aren't important, but innovation doesn't look like just adding more parameters and ways to scale massive computations.
The main concept that AI laypeople (even those who burn OpenAI credits on a hobbyist basis) are missing to really grasp the possibilities of these kinds of statistical analysis techniques is a deep understanding of the notion of inductive bias. For instance, Transformers are so powerful precisely because they have generalized that idea of inductive bias one level up by using multi-head attention to project onto many different linear spaces and attend to them differently in different contexts, rather than just filtering through effectively one linear perspective at a time as with your traditional dense NNs.
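To illustrate that "many linear perspectives" point, here is a stripped-down sketch of multi-head attention; the shapes and names are mine and purely illustrative, not any particular library's API:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    # x: (seq, d_model). Each head projects the input onto its own linear
    # subspace, attends there, then the heads are concatenated and mixed by Wo.
    seq, d_model = x.shape
    d_head = d_model // n_heads
    q = (x @ Wq).reshape(seq, n_heads, d_head)
    k = (x @ Wk).reshape(seq, n_heads, d_head)
    v = (x @ Wv).reshape(seq, n_heads, d_head)
    heads = []
    for h in range(n_heads):
        scores = q[:, h] @ k[:, h].T / np.sqrt(d_head)  # (seq, seq)
        heads.append(softmax(scores) @ v[:, h])         # (seq, d_head)
    return np.concatenate(heads, axis=-1) @ Wo          # (seq, d_model)

# Illustrative usage with random weights.
rng = np.random.default_rng(0)
d_model, n_heads = 64, 4
x = rng.normal(size=(10, d_model))
Wq, Wk, Wv, Wo = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4)]
out = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads)  # shape (10, 64)
```

A dense layer, by contrast, amounts to a single fixed projection; the heads are what let the model weigh several projections differently depending on context.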
The kinds of innovations we have witnessed most recently are actually pretty small steps in the space of possible architectures, not to mention training methods, etc. For instance, predictive coding approximates gradient descent[1] which opens up all sorts of new architectures (feedback arch, modular/federated/local compositions, etc.) that are intractable with traditional backprop-based techniques unless you can manage the infrastructure around periodic global parameter/state syncs.
That's definitely not "us as a species". I would say this is a specific effect of post-industrial, market-driven economy - and even there, probably restricted to specific communities such as entrepreneurs.
By and large, I believe, people are conservative and would prefer for things to stay as they were, unless the change solves a specific problem.
In general, your most staunch conservative doesn't mind things changing if it personally benefits them greatly (especially if they don't bear most of the burden of its effects).
Your most avid progressive commonly wants the things that bring them comfort to stay the same, even if it's potentially harmful to someone else.
I believe the issue here is we're playing "everything everywhere all at once". Everywhere on the globe is pretty much instantly connected to everything else via digital communications. There is no more sit-on-the-sidelines zone that gets to avoid this connectivity. You may not want it, but you would have to stop everyone else from bringing that connectivity in too, and somehow maintain enough connectivity to function economically, unless you want to play caveman.
New technology = race to figure out how to make the most money off of it. Not that complicated, nor does it have anything to do with "us as a species".
Are these really separate phenomena though? I'd strongly argue that between self-supervised learning, transformers, diffusion models, and LLMs we're looking at phases of the same phenomenon, each building on top of the knowledge that came before.
Like vaccines: inactivated viruses aren't a different "wave" than mRNA; they are logical successors within the same intellectual pursuit.
Or Vulkan vs. OpenGL. They aren't separate "fads" that gained traction.
Or ARM vs. x86. They aren't "fads" so much as entirely reasonable evolutions of technologies based on the evolving knowledge of the field and its needs.