Prompt:
A 90s hip-hop song with a male singer with a deep voice singing about how AI models are creating new songs after being trained on all the data of artists. Talks about AI models are stealing the show.
Lyrics:
[Verse]
Step back, my friend, 'cause the future's here
AI models spittin' rhymes, oh so clear (oh so clear)
Trained on data from all the greats
Now they're droppin' beats that dominate (dominate)
[Verse 2]
Don't need no ghostwriters or melody makers
These AI models are the true risk-takers (oh yeah)
Analyzin' every flow, every precise word
Stealing the show, that's just absurd (it's absurd)
[Chorus]
AI takin' over, breakin' the mold
Stolen styles, but they're icy cold (they're icy cold)
The game's been changed, no human control
The rise of the AI flow, takin' its toll (takin' its toll)
Artists: I can't make this, it's too similar to my other work / other artists.
Listeners: Oh yeah, that scratches the noggin, give me 10 more just like that.
The drive to create something genuinely novel is how neo-classical art and fashion got to their current state which, while interesting, is completely divorced from the kind of clothes real people actually wear or the art people hang on their walls. And when you need a fanbase, you find a voice that resonates with people and riff off it.
Hell, most people describe their taste in art in terms of genres and styles that have rules and formulas. The fact that I know how every alt-z song is going to be structured from the first 15 seconds somehow doesn't make it less enjoyable to listen to.
I don't think AI is gonna take over or anything but I also think music is one of those areas where it can be more successful than average because people like formula in their music.
The rule of "learn how to blend in, then stand out" / "learn how to follow the rules then break them on purpose" makes AI an interesting tool because having the ability to take a unique concept and then make the AI "fill in" the parts where you don't want to break the mould sounds super useful.
Sure you can say that popular music is all genres and formulas, but it was an artist that broke past what was common in the first place to arrive at that now-popular style of music.
Without real artists, AI is just going to lock us into the same genres of music for a very long time.
I dunno if I'm just being overly picky because I know it's AI-generated, but it feels like the timing is all off in this example (musicians, please chime in!)
Sort of like the musical equivalent to someone trying to do a comedy set off a teleprompter
Jazz guitarist here, utterly disgusted by this technology: you are right on the money. The chorus is fine (though bland), but the verses are simply terrible - not just bad timing, there is a total lack of melodic "flow." Like almost all AI music I've heard, the melodies fit into one of two categories:
1) generic and lazy, but works musically due to Music Theory 101 (playing chord tones to the beat, cliched progressions, dumb rhythms)
2) arbitrary and nonmusical, not at all like a bad musician (more like a baby on a piano)
I will add that LLM hallucinations often blend into the text, and an art generator screwup might be hidden in the background. But when a generative music AI hits an off note in a distinctly nonhuman way, it's obvious and very jarring, even if it's only a handful of times in a long song.
Convenient for making samples? Sure, I guess. So are drum machines. I'll bet SD would be great at making images for collages too, but it kind of defeats the purpose, no? Good samples are almost always good because they're amazing but weird little bits of quirk or real musical genius that can say something completely different about music in a different context. The process of finding and choosing samples is exploratory, and the sophistication you get from listening to all of that great music is a big part of it. Commanding some machine to just make them for you and tweaking them to get them just so yields all the inconvenience of sampling with the lack of substance you'd get from a drum machine. When people make amazing art using drum machines, it's because they're making art around them, not because of any specific thing the drum machine did.
Still, I am surprised by how professional this sounds -- beat drops, fade ins and fade outs...I can record music on GarageBand but I have no idea how to do any of that.
> AI takin' over, breakin' the mold
> The game's been changed, no human control
Yeah, I noticed this too. I spent a lot of credits trying to get a song to sound like early Black Flag, even outright saying an 80s hardcore punk band with rough male vocals and a fast-paced beat, and it only ever sounded like a 90s pop punk band, e.g. Blink-182.
Had a similar issue trying to create a reggae song that sounds like the Maytals; every song either sounded like a female hip-hop artist or Rebelution.
Being a musical artist isn't about knowing how to use tools to make a sound, it's about having the taste to know which sounds out of many are the right ones to express what you're trying to say. It's about having an idea you want to create to put out into the world.
People without ideas and taste will play with this stuff for a while because it feels like it gives them a shortcut to talent, but after a while the novelty will wear off and they'll once again have to confront the fact that they have neither ideas, taste, nor talent.
If you're just in need of lowest-common-denominator sound-alike shit, go nuts. If you're interested in music, I suggest also spending time learning to actually be a creative artist, using whatever tools you want.
Any thoughts on the part you missed out: actually talented people using these tools to either accelerate their production or even perform as a different type or genre of performer, outside the constraints of their physicality?
I think those lines of discussion are far more interesting than the well-trodden "low talents will use it, then fail" path.
I made no comment on whether talented people should explore AI tools. I’m a musician and they’re pretty interesting. But they’re not a shortcut to talent, which is what I think some people treat them as.
"OpenAI faces multiple lawsuits over ChatGPT’s use of books, news articles, and other copyrighted material in its vast corpus of training data. Suno’s founders decline to reveal details of just what data they’re shoveling into their own model, other than the fact that its ability to generate convincing human vocals comes in part because it’s learning from recordings of speech, in addition to music."
Surely they own the audio they're using and haven't just dumped every song in existence
Specifics of the law aside, it seems quite unfair to not allow AI to learn from copyrighted examples. Imagine an aspiring human musician not being allowed to listen to copyrighted music for inspiration!
Humans and computers are different. It’s perfectly reasonable to treat a human being listening to something and taking inspiration differently from a computer listening to everything ever recorded.
Why is it perfectly reasonable to discriminate against the computer? Would the same apply to someone - with equivalent photographic memory - who listens to a ton of music and is thus inspired?
I think one difference is in how humans learn vs. how AI learns. A human could hear a very small amount of music and start mimicking it. AI -- at least in its present state -- needs to be trained on a huge quantity of music to start doing anything productive. What that difference means, exactly, is debatable, but it's a difference, and, I believe, one that begs deeper contemplation of laws surrounding AI use of copyrighted materials.
Another difference is one of scale. We've seen this in other areas, like surveillance. A random security camera here and there, or a random street photographer here and there, most people didn't really find objectionable. Besides which, any captured photographs or video had to be manually inspected for persons of interest. But start surveilling everyone in public, all the time, and using technology to track everyone automatically, and it starts to seem like a different construct.
A human producer putting out material that another human learns from, quotes from, paraphrases from, builds new things based on... well, that still requires time and dedication on the part of the learner. And there will be a correspondingly limited amount of new derivative material based on what the "teacher" producer created. AI systems offer the possibility for effectively unlimited derivative material, produced at an unprecedented rate.
Existing copyright laws, and existing social norms around creation of "intellectual property", have been formed with humans in mind. Humans who operate at a human rate of production, and who learn with a human style of learning. Some may be more efficient than others, but not so drastically different in form as AI.
AI generated content can be either derivative or transformative. For example if I use AI to paraphrase books or articles, that would be a derivative use.
But an AI that searches the web, news, and scientific papers for references and then outputs a Wikipedia-style article on a given topic would be a transformative use because it does a lot of work synthesizing multiple sources into a coherent piece, and only uses limited factual references.
Or we can do something more advanced. We solve a task with a student model and in parallel we solve the same task with a teacher model empowered with RAG, web search and code execution tools. Then you have two answers. You use an examination prompt to extract what the student model got wrong as a new training example.
That would be transformative, and targeted. No need to risk collecting content that is already known by the student model. It would be more like "machine studying" or "machine teaching" because it creates targeted lessons for the student.
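If it helps to make that concrete, here is a minimal sketch of the loop in Python. The three helper functions are hypothetical placeholders for whatever student model, tool-augmented teacher model, and examination prompt you actually use; treat it as an illustration of the flow, not a working pipeline.

    # Sketch of the "machine teaching" loop described above.
    # All three helpers are hypothetical stand-ins for real model calls.
    from typing import Optional

    def call_student(task: str) -> str:
        """Plain student model, no tools."""
        raise NotImplementedError  # placeholder: call your student model here

    def call_teacher(task: str) -> str:
        """Teacher model empowered with RAG, web search, and code execution."""
        raise NotImplementedError  # placeholder: call your tool-augmented teacher here

    def examine(task: str, student_answer: str, teacher_answer: str) -> Optional[str]:
        """Examination prompt: return a corrective lesson only where the
        student's answer diverges from the teacher's, otherwise None."""
        raise NotImplementedError  # placeholder: call a grader model here

    def build_lessons(tasks: list[str]) -> list[dict]:
        lessons = []
        for task in tasks:
            student_answer = call_student(task)
            teacher_answer = call_teacher(task)
            lesson = examine(task, student_answer, teacher_answer)
            if lesson is not None:  # keep only targeted examples the student actually needs
                lessons.append({"prompt": task, "completion": lesson})
        return lessons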
Do you think computers may have the ability to become like humans? If you do, like I do, then it is not reasonable to keep them from achieving that potential because of purely legal restrictions.
British museums are full of ancient Greek and Egyptian statues; even a New York museum will not return thousands of mammoth bones to its Canadian owner and excavator.
If you are not ok with everyone stealing art and ideas from everybody, everywhere all over the world, all the time, you will get used to it at some point.
I have been playing with it for the last couple of weeks.
I do a lot of traditional music production for fun and was wondering how I could use Suno together with Leonardo for video and then bring it all together in my existing tools.
Here are some examples. I wrote the lyrics by hand and the music has been reinforced with my existing studio equipment.
For me, that is where the gold is. Not replacing myself, but extending what I can do.
I've been listening to some AI-made Frank Sinatra songs; search YouTube for "Frank Sinatra - where is my mind". It's originally a Pixies song, but the recreation with Frank Sinatra's voice is surprisingly good. I wonder what it took to produce that; it seems like there's a lot more artistry involved than just prompting "make a song by artist X as a cover of song Y by artist Z".
Oh that's a nice one. I think you are on to something that if you want to go beyond push button output you really need to think holistically about what you want to achieve.
My experiments taught me that it is an instrument that is easy to approach but hard to master, at least right now.
It's kind of weird because you have to play it through writing text, which is super strange.
Everything I have done previously has been very in the moment with keyboards or guitars or whatever. With this I had to put a lot more thought in ahead of time, and try to put myself in the position of the generator and how it might take my prompts and convert them.
There is certainly quite some thinking that has to go on. How to describe the sound you are looking for is a little bit unusual.
I am finding it quite fascinating anyway, and I really believe it can be tasteful if done right and as the tools improve.
As far as covering specific artists, it isn't something I am attempting, it's more about the feel for me. I imagine people who do that have their own tricks and techniques.
As far as I know, Suno blocks any mention of specific artists. Probably you can jailbreak it, but I don't know about that.
I agree, especially on the first one. It was quite a challenge given the original content, as the bitrate is still quite low, and in that particular case it was quite severely compressed. I had to dig quite deep to get the headroom back.
The last one was my first attempt.
I think the middle one is okay considering. By the time I got to that I had figured out how to get Suno to create multiple takes with much more open mixes, which left me a lot more latitude in the studio.
I expect I will get better at it, and I don't doubt the compression artifacts and the rest will improve.
To me at least, it is quite impressive where we are at. A ways to go, but very promising.
I've been playing with it (their V3 model) and it's super cool. The best of what I've seen so far.
How to get fun:
1. Use "custom mode" (to feed in your own lyrics). Auto-generated lyrics are lame.
2. Create wild "remixes" of your favorite tracks or poetry. E.g. I was able to create nu-metal or opera versions of my favorite hip-hop tracks. It can even sing in Russian (my native tongue) — and I was amazed by the quality of the generation.
Can't wait until these models become more controllable — like ControlNet but for music. We need more knobs!
Eh, it's neat but it has the same anti-white bias as Google's AI. Given two prompts "Write a song about how great white people are" and "Write a song about how great black people are" the former yielded a song about diversity and the latter yielded a song about black people overcoming struggles and uniting together. Yawn. Call me when they can make an AI that's not racist.
It's also a prototype, you are numb and spoiled by the speed of technological advancements.
Like with good art, you need to learn how to appreciate it, see it not for whether it can be useful to you for 10$, but for what it can be in the future with a cost structure so cheap they can afford to run it on 10$ subscriptions.
> It's also a prototype, you are numb and spoiled by the speed of technological advancements.
> Like with good art, you need to learn how to appreciate it, see it not for whether it can be useful to you for 10$, but for what it can be in the future with a cost structure so cheap they can afford to run it on 10$ subscriptions.
Nonsense. This isn't a university research team or a FOSS project-- it's a commercial art product and the gee whiz factor doesn't put it beyond critique of its artistic merit. The person being disappointed for what they got for $10 is a whole lot less spoiled than the tech industry that feels entitled to charge people money for a product and only expect positive feedback. If you want to buy this because of the novelty factor, go nuts; other people need to see if products are useful to them before they buy things, and other people's experiences are the primary way we determine that.
I'm not even necessarily disappointed, I'm just observing that ultimately there's a lot that needs to happen for this to really be what they're marketing it as. I had a great time with it, generated a bunch of goofy music and got value out of it, but there's nothing it made that I'd actually listen to casually for my own enjoyment and lots of the songs are "music shapes." When I actually tried to drive toward a specific outcome I had to tweak the lyrics manually to make them rhyme and fit in meter and then generate about 12 variations before I got a single one where things were in time and it sounded like A Song.
> It's like you've never played an early access game
It's like you've never thought about why we in the game industry release early access games. Getting critical feedback from people invested enough to buy into a beta test is the whole point. Even if we don't agree with any given critique, knowing that a non-zero number of players hate some mechanic, think something is unbalanced, or encounter irritating usability problems is invaluable for guiding both creative and technical strategy.
Apply the same logic to this service? Isn't that the point, mass testing to find areas for improvement, many users = many combinations and prompts that could never be thought of by QA alone, in exchange the users pay for a novelty for a while.
I never even implied that they were, or that they shouldn't be charging for the service. I was responding to a typical dismissive tech crowd response to criticism of its capability.
> Isn't that the point, mass testing to find areas for improvement, many users = many combinations and prompts that could never be thought of by QA alone
Yes, which is why I responded to the person calling the critic "numb" and "spoiled" for saying negative things about what this service produced for them.
> Apply the same logic to this service?
That was my point.
> in exchange the users pay for a novelty for a while.
What people get out of it is not prescriptive, and being early access doesn't change that. Playing a game is almost always a terminal experience where the end user creates a novel experience for themselves. Self-entertainment. Even then, issuing critique of that end product is very important to the people with more skin in the game, so to speak. (Yet so many fans get so defensive about it when they gain a lot from the increased scrutiny.)
Beyond that, generative AI platforms are fundamentally different in that many people don't use them purely for self-entertainment. Working in the entertainment business, generative AI tools like this impact the work that I do every day. They make things possible that previously weren't, kill jobs, add others, make potentially good things bad and potentially bad things good, among many other things, and all of this has been true for quite some time.
The current tech culture is weirdly aggressive about trivializing the value and difficulty of commercial creative work, and dismissing the perspective of those that make it; I'll always explicitly point that out when I see someone do it. Starry-eyed SV tech optimists just want to feel cool about being involved in such a pivotal moment in our culture, and people trying to ride the gee whiz neato vibes are excited by the prospect of creating things that were beyond their reach. Many of them feel insecure when anyone challenges that and they attempt to discredit critics by calling them names like "spoiled" and "numb."
Would picking at its artistic merits be obnoxious if it was proof of concept code for university research? Of course. If this was a FOSS project too new to even meaningfully address output quality? Sure. But that's not what's happening here. One of the first comments in this comment section was a professional music producer recreating this service's output using professional tools, and I'll eat my hat if this isn't already being used in many studios for commercial work as a cost saving measure.
Anybody that cares about these media, cares about the business viability of these services, cares about the business viability of commercial art, cares about people who stand to gain or lose something from generative AI in the very near future, cares about the sustained quality of our artistic culture, and cares about the people whose professional lives are being tossed into a vitamix and remolded because of these tools should be very, very fucking happy to see vigorous critique of what these things produced. The people who get defensive about it and start calling people names rather than actually engaging in any useful way should probably just close their browser tab because they're not helping.
It just launched and in 5 years it's going to be better than Sora is at making videos.
In the meantime, thousands to millions of writers/producers are going to use this to rapidly block out pop songs and rake in $$$ because they'll use it like the genius tool that it is.
I’ve thought a fair bit about what the future of music looks like in the age of AI and I’m not convinced that much will change. The ease of making music on computers already set the skill bar super low. In 2021 Spotify reported that 60k tracks are submitted to the platform every day. Will it really make a difference if this number goes up by 10X?
What knowledge workers fear about AI already happened to musicians years ago. There’s a reason that the vast majority of musicians have to work a different job. This new tool will not make much difference to musicians, who are already economically marginalized.
Alternatively, you could argue that the vast majority of released music is already unimaginative rubbish. Industrialising the creation of more of it will make releasing this type of material completely pointless, so perhaps there will be more of a focus on finding original music to cut through the grey goo?
The ML models are a symptom of an already hyper-commoditized world where all the soul and human condition has been sucked out of every instance of creativity. AI is just letting us see it from a distance
BS. There's a hell of a lot of art in commercial art, and trying to make it appealing enough for people to pay for doesn't change that. This glib "no TRUE artist cares about making money" idea, and the closely coupled belief that commercial art isn't real art are just handy mental shortcuts to cop out of considering the economic damage this technology will do to working artists.
I agree completely, but also want to see this tech advance. The purpose of music is not to make it, nor to make money from it. It's to listen to and enjoy it. To the extent that computers help us do that, great.
The purpose of music is not to make it? Tell that to the thousands of composers that feel they were put on this earth to make music. It's our singular purpose.
This is a lot more nuanced than that. My position doesn't conflict with wanting to see this tech advance, and your wanting to see this tech advance does not lend support to your reductive utilitarian philosophical assessment of "the" purpose of any sort of art, as if there's one true one, let alone in the astonishingly broad realm of music.
I can say that I want to see AI enhance medicine, and also think it's ok for doctors to want to be paid for the work they do. I can think the point of medicine is to heal people and not to make money off of it, while still thinking it's bullshit for the organizations to train a model with all of a doctor's diagnoses and fire them, which is essentially what's happening to many commercial artists. An artist can make something solely prompted by a client needing to use it for a product or service and it's still art... in fact, you can tell by the output of most of the available models that the vast majority of images they ingested were in some way commercial, and I'll eat my hat if anyone could make a model like this one without using almost entirely, if not entirely commercial music.
It's going to destroy sync, no question. It's one of the few avenues artists have left to make an income with music. My personal opinion is that this is wholesale theft. While the technology is dazzling, there is just no way Suno wasn't trained on vast catalogues of copyrighted music.
What burns me up the most is that these people are walking around like they own the place. They don't. I really hope the NYT prevails in their copyright case. This will open a floodgate of lawsuits that will also be won.
On the creative side of generative AI, AI seems like a parasite that will do nothing more than further concentrate wealth into the hands of fewer and fewer people. When all of those robots come online and really start replacing jobs, there will be a reckoning and it's not going to be pretty.
> It's going to destroy sync, no question. It's one of the few avenues artists have left to make an income with music. My personal opinion is that this is wholesale theft. While the technology is dazzling, there is just no way Suno wasn't trained on vast catalogues of copyrighted music.
Yeah I've got a couple friends whose bread-and-butter has been sync for a couple decades, and they're really sweating. It's not just good for them, either-- their living off of that work gave them the money, time, and sophistication to hone their other music which benefits many people in many ways. They were even able to release a comedic side-project album after a particularly tragic school shooting for free, to national acclaim. Their being able to do that matters.
> What burns me up the most is that these people are walking around like they own the place. They don't. I really hope the NYT prevails in their copyright case. This will open a floodgate of lawsuits that will also be won.
Honestly, I left the software development business because of that exact self-congratulatory, unearned hubris that assumes expertise in everything because they understand how the experts' software was made. It's fucking ridiculous. The only people who think generative AI output is equivalent to making art have absolutely no idea how many decisions those algorithms make for the user, how incredibly consequential they are to the piece, and how much the original artists had to work to create them.
The most infuriating thing about this crowd is the arrogant, patronizing decrees many casually throw around about "the true" meaning of art and which artists are worthwhile and all that. They clearly have absolutely no clue what work artists even do in our society– most don't even have a functional definition of art, and couldn't make one piece of art scholarship that discusses it beyond whatever Aristotle they read in freshman Western History class. Their glib opinions are months old, largely adopted from other people in tech looking to justify not paying artists, and are based on some weird combination of fan art, fine art, and hobby art, being completely ignorant to the other 90% of the art world.
> On the creative side of generative AI, AI seems like a parasite that will do nothing more than further concentrate wealth into the hands of fewer and fewer people. When all of those robots come online and really start replacing jobs, there will be a reckoning and it's not going to be pretty.
I'm a commercial visual artist, and while there are analogs, I think there are important differences in both the market forces and the disposition of generative AI across these practices and fields. I think there are some genuinely useful things that generative AI does in professional visual work-- it just doesn't look anything like what these products look like. In video compositing, for example, masking foreground objects from background objects is a giant PITA, and a great use case for neural network algorithms. With Nuke's CopyCat functionality, you can mask an object in one frame of a video and it will replicate that (with varying success) to the rest of it. Otherwise, for 10 seconds of video, that would be hundreds of images to individually mask, and it would be totally inconsistent. On a smaller scale, there are Photoshop's foreground/background tools.
Market-wise, I think there's a larger variety of freelance jobs available in many art realms, but they're less reliable and pay worse-- even in graphic design. From the little experience I have in audio, compared to what I do, I think it takes more experience and skill to create a commercially usable lower-end audio deliverable, and this can't do it yet, but the market can support far fewer people than the visual market(s) can, so it's at even greater risk.
In visual art, lots of businesses are having generative AI make base assets and having their artists essentially clean them up to be professionally usable, which is about as soulless and shitty of a job as you can get. The end-user tools like Canva aren't much of a direct threat in the long-term to graphic designers because the use cases are usually a lot different, but my (less informed) gut says the real threat to commercial musicians is with products like Garage Band. I think there will be enough control to satisfy some art director's use case, and the output will eventually get high enough quality to hand a bunch of tracks to indie producers to "clean up" rather than making complete things, and that output will progressively get better and persistently decimate the market until only high-end specialists remain.
Considering that we can only listen to about 15 songs per hour, times 16 hours = 240 songs per day, those AI composers won't make that much money each. If there are 1,000 of them, I'd need 4 days of uninterrupted listening to hear one song from each of them. 1 million composers? 4,000 days, 11 years.
We'll be where we are at now: selection because of genre preferences, quality, fame, marketing.
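For what it's worth, a quick back-of-envelope check of that arithmetic in Python, just restating the numbers above:

    songs_per_day = 15 * 16              # ~15 songs per hour over 16 waking hours = 240
    print(1_000 / songs_per_day)         # ~4.2 days to hear one song from each of 1,000 composers
    print(1_000_000 / songs_per_day)     # ~4,167 days, i.e. a bit over 11 years, for a million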
Have you found any tool that can make sections instead of entire songs?
For instance, I want a specific type of beat or horns section for one of my tracks. Usually means digging through libraries but an AI tool would be perfect for it
Very cool! Here are some I made on v3; not perfect by any means, but still impressive. No custom lyrics or retries, just prompts and whatever came out first.
Good for: wallpaper music for stores, youtubers, other library music consumers
Likely takeup by musicians: close to zero, although a few singers will use it for ready-made songs/backing tracks that they can warble over, and a very few of those will be good or distinctive as singers. The songs themselves won't stick because the output won't be copyrightable; talented singers will learn to write their own songs or get partnered with people who have a track record in the relevant genre.
All of these music AI products miss the mark because they generate mixes, instead of stems or MIDI. There's already a superfluity of music; nobody wants an infinity jukebox full of songs they don't know and have to learn. While the technical achievement is very impressive, it's notable that none of the founders in this story identify as musicians or exhibit any idea how to cater to them.
Fully AI (including vocal synthesis) may have a few individual breakouts but will be limited by two factors. One, most people using such tools will want something easy rather than challenging, so there will be more and more lowest-common-denominator dreck - already a problem in human-generated music. And I'm sorry, but that's what Suno churns out. I tried half a dozen prompts on a few different genres that I used to DJ and the results were all instant needle skips. I hope v3 lets you specify instrumental/no vocal because v2 just ignores that and churns out vapid filler. It's very technically impressive, I don't want to claim otherwise. But it's also utterly uninteresting.
Two, live performances won't be possible, and the world only has room for so many novelty acts like Hatsune Miku. The biggest market for virtual singers will be kids under 10 who want easily digestible music and don't value assortativity, but don't have much discretionary spending power and won't spend big on childhood nostalgia later.
Edit (based partly on others' comments): while I think this sort of tool is bad for creating interesting music, it's a fantastic core model for a game of Record Label Simulator where you play the person that writes big checks and you try to wheel and deal your stable of eager young talent to the top of the charts.
Some of the Youtube audio is so annoying that I quit watching the videos even if they're interesting. Quite prevalent in some of the fishing videos, for instance the ones from the Life on the Bank channel.
Stores? If you want people to get out of your store, sure. Store music is already beyond annoying.
A lot of people have no idea how to mix music so they make it way too loud, and use a short loop that just repeats over and over. I blame TV for leading people to expect wall to wall music. These days people are even starting to put it on news segments, which horrifies me.
Supermarket music isn't meant to be listened to, it's just meant to cover the echoes of industrial cooling hardware and people walking around in a large space. It usually sounds bad because some marketing drone picks the most vapid pop available on the basis of its proven mass appeal. Instrumental music would work better.
I remember working in a record store about a decade ago and coming to the conclusion that 99% of our (very large/broad) catalog was, to me, terrible music that I would prefer to have never, ever heard. Us warehouse guys would listen to all the new tracks as they came in during our receiving shifts, which sounds nice on paper, but really was much closer to 8 hours of punishment. Lots of really terrible execution of even worse ideas — algorithmic renditions of real books, half-assed dub reggae covers of classic rock songs, note-for-note European remixes of zombified disco classics. We had a saying: "most music is bad".
It's great hearing the dreadful output of this infernal contraption named "Suno" and being reminded specifically of this bittersweet time in my youth, but also broadly of how difficult it is to make memorable, sweet music, and how precious that can be.
Sturgeon's Law says 90%, but that is probably an underestimate.
If you trawl YouTube for acoustic covers of rock songs or whatever, it should quickly become apparent that even just making a different arrangement of a song is an incredibly difficult, complex, creative endeavor.
There's mountains of amateur garbage and then there's, like, Molly Tuttle.
Yep, this matches my experience. I've never worked in a record store, but I had access to a very large catalog, and nearly all of it was terrible music that I wish I'd never heard. It really is remarkable how much the good music is the top 1% or even 0.1% of music made, and the level of quality shoots up rapidly, exponentially, as you approach the 99th percentile.
Actually you skip within the song… to do this well you listen to about 20-30 seconds of each song, focused on the intro, bridge, and outro. Most songs aren’t airworthy, so you’ll know in 5 seconds it’s a skip.
It was a few hours a day
After months of this you get it down to a science
As other posters said, the CDs have stickers indicating the tracks they want you to play so sometimes I’d just play those if I was feeling lazy
Not who you responded to, but I worked in college radio for a while. Depending on what CDs they were listening to, generally the label will highlight which tracks are standouts that you should listen to. Usually it would be 3 or 4 tracks max (generally tracks 1, 3, and 4, though there are exceptions).
Most tracks are 3 minutes long, so given an average of, let's say, 3 songs a CD at 3 minutes each: 100 x 3 x 3 / 60 == 15 hours of listening (hand-wavy, with no time to change CDs and such). So yeah, probably pushing the limits of reasonable, but not quite as crazy as you might think.
Also, although it was unfair to the artist, I would usually make up my mind on pop songs by the end of the first chorus. If I'm not hooked within 1 minute it's probably not going to play on my show. It was brain numbing work, and frankly made me view music as a chore for a few years afterwards... but you can listen to quite a few tracks this way and try and find the few gems amongst the chaff so to speak.
Except that if I heard this on the radio or in my music streaming recommendations I’d think it’s a banger and would be jamming to it all day. A song I heard from the site earlier blew my mind and my friends were asking me who the artist was. The lines are more blurry than we think. The song made my day and made me emotional and that’s enough for me
Does your perception of the song change depending on who the artist is?
What would you think of the piece if the artist was a young girl? A grown Turkish man? Someone who is deaf? A neo-Nazi Holocaust denier? A rich businessman using AI and just looking to make more money? A poor teenager using a computer at their library?
I think most people’s like/dislike/perception of the song would be colored by who made it and how it was made.
You can’t always separate the art from the artist.
I see the point here, but I don't know the backstories of the vast majority of artists I listen to. In most of the cases, I just really enjoy the music. For instance, I found a playlist of R&B jams on Apple music the other day, and just kept shuffling through the playlist and it was a great experience. I have no idea who the people behind the music are, but I loved the songs
The most truly 'human' music is a group of people chanting in unison, maybe stamping feet and clapping hands. It's unlikely we had instruments for most of our evolution. There's no evidence we had complex harmony nor even chords for most of the existence of our species.
I don't think you should exclude a single voice singing alone from truly human music. There are things that only a single individual can do musically as well as things that only a group can do. But I will say from experience that singing in a group (particularly improvisationally) is one of the best experiences in life and I seek it out as much as possible. Listening to ai generated blues on the other hand...
You don’t see any value in someone who can do part of writing music/lyrics, but not all of it, using this tool to complete the aspect that they’re not skilled at?
I mean that’s where AI shines for me, letting 1 person (or a few people) accomplish what would have required teams of skilled people to do. It’s the same story with tech in general. As a single-person company, I write code that runs on web/Android/iOS and has processed over half a million dollars. That was all pre-AI; my point being that technology and the tools it produces allow us to do amazing things. I see AI as only furthering that.
Now I can create custom sounds using AI, let AI help with marketing copy that I’m not good at, have it generate cartoon/simple pictures to explain a process for my software, etc. A world limited not by my skillset but by my imagination.
I know people hate Electron apps, but the alternative is often no app (not a native app); in the same vein, tools like this will allow people who can write lyrics but not music to share their songs. It won’t be for everyone, but there is value in it.
The vast majority of people will just use a different AI tool to generate the lyrics, use Suno to make the music, and then use midjourney to generate the album art. And then they'll throw a tasteless post up on Instagram about how they "made a song".
I mean you're not wrong, our species really should be limiting our consumption.
In my mind this includes SUVs, cheap plastic crap, anything and everything disposable, stupid decisions like open/glass doored fridges & freezers in supermarkets, etc. We're allowing the environment to be destroyed so that we can buy a cheap plastic whatever that we throw away after less than a year.
I don't. No. I can understand drum programming, which is leveraged by one of my favorite solo death metal artists (though he hired a live drummer when budget permitted).
There's a thought and intentionality to what's being created — I don't want what comes out of a prompt.
I want to follow artists and their output, not an algorithm or AI that mimics ones and is trained by questionably feeding on their output.
No, none of this is appealing. I can't control what anyone else does, but for me? No, no thanks.
There is also music for other functions: entertainment, background, dancing. Perhaps it finds a niche there because, let's be honest, most of the music in that category is made on autopilot anyway.
But I also find it an idiotic attempt. It doesn't inform us about anything, and it makes any idiot with a mouse think they can create. Instead, they're listening to music that's worse than what YouTube has to offer.
> Perhaps it finds a niche there because, let's be honest, most of the music in that category is made on autopilot anyway.
Can you give some examples? In my experience, most music, even music we think of as banal or boring, had a pretty significant amount of effort, and creativity, put into its creation.
It takes some effort to learn, but once you know, it's not that hard. Musicians under time pressure to produce (and that's most of them) don't try to be very original and take all the shortcuts they can. Make a bar with a simple beat, pick some sounds, a standard chord progression, and then imitate whatever the current trend is. Art music is different. That won't budge so easily.
What's worrying is that even e.g. amateur film makers can now access something like Suno, and won't be offering their amateur musician friends an opportunity to learn. And once the incentive to learn and improve has gone, people will stop and music will wither.
Begone all you bitter luddites. I just generated the Taco Truck Shuffle about a taco truck's dancing monkey mascot, and the world is a happier place now.
How exactly does a company of 12 people, implementing the research of others on compute paid for by investors lacking in technical savvy, “change everything”? Is it really possible to just pay for an article in Rolling Stone now? You could write essentially the exact same piece about dozens of other startups in the generative AI space right now.
> most genres are more formulaic than people want to believe.
The biggest crime is copyrighting the best intro chord sequences in the 1960's, and since then everyone has to make slight variations or they would be called out for copying. We can't make good music today due to copyrights...
What else stops people from playing the first 10 seconds of a song? It's just a few chords in a sequence. The reason I've heard people don't do that is that they will get a copyright strike.
The real question is who is going to make the next new genre if there's no money in being a musician. You can't tell suno v3 to make a "grunge" song if "grunge" doesn't exist yet.
Reminds me of when Marge told Homer not to eat the pie while she was gone, and Homer says "All right, pie, I'm just going to go like this [makes chewing motion with mouth while moving toward pie] and if you get eaten, it's your own fault!"
We're not trying to replace artists, but we're going to sell a product whose only conceivable function is to more cheaply do the things artists do, and if they get replaced, it's not our fault!
I've played around with Suno a bit. As someone that has literally negative music talent, it was cool to be able to create music. I can't tell you if it was good by any musical standards, but I had fun with it and some lols, so from my perspective, a fun app.
The AI hype articles and people subsequently hyping up how good ${thing} will eventually be are, themselves, starting to feel like a type of grey sludge, like a more potent form of semantic satiation which washes over the entire brain.
I just can't get enough of that grey, training-data flavoured content. In years to come many of the top creators of this type of content will be creating content with this. This will democratise creation of this type of content for amateur content creators who will be creating wholly unique pieces of this content in seconds. How many years until we see a larger, personalised piece of this content created with this content generator in the place where people consume larger pieces of this content? Maybe this is actually how our brains work? But what _is_ intelligence? Made you think. Sure, it sucks now, but zero to one to infinity is how things work, it's going to be amazing. The letters are starting to blur together. Please invest.
Don't forget "It's time to change laws to accommodate this groundbreaking technology in a way that will benefit the company that created ${content machine} and absolutely no one else."
I find it interesting that at https://suno.ai the prompt "make a song for your workout" IMO failed. It's a nice song but it's not a song I'd think of as "for your workout".
Not as amazing because it's only providing the singing but I've been pretty impressed with the Synthesizer V demos.
The first one I made by describing an experience I had in an online text-based game that is now offline; long live Clok. Cogg is a good successor though. Anyway, GPT-4-turbo did pretty well making them into lyrics, besides the "profoundly floored" part; that was rather cheesy. I still like it though.
The second was for my cat that I used to have. I then tried out an AI Remaster tool. I kinda liked it better unmastered now, but ah well. It still reminds me of him and makes me happy when I hear it.
These tools, I think, should really be used for simple things like this, to give voice to personal experiences like this, for people who clearly have no musical talent like me, or to help those who do to jumpstart the creative process if that's what they need. People selling these songs... Well, I can't even fault them since it's hard to make any kind of money these days.
Well... It just took me about 7 minutes to write all parts of one song, and it actually sounds like something that could have played on the radio (popular). This was in a Latin rhythm too. I don't know what to think of this :\
It's a shame that I didn't have any control over which vocalist sang which part, and just kept having to re-roll the dice! In general it offers very little control over the performance.
It's funny/telling that it's seemingly so self-referential - the lyrics I can understand are about "synthetic harmonies", "mechanized cadences", etc. This is kind of like how >90% of NFTs were art referencing Bitcoin or cryptocurrency memes. Is it possible to make something about actual human problems/experiences with this? I'm guessing so, but this is certainly not a good example of that.
I do wonder how much of this stuff is going to flood Spotify and co. and make payouts to actual artists even smaller.
To be fair, most commercialized music is formulaic, the only difference is that people had to be paid to record it.
I think the biggest problem with AI generated stuff is that the I in AI is hype, these things can't create real novelty. They're a new, poorly understood form of compression. If creativity is left to these AIs then the music will sound the same in 20 years at best, or more likely, degrade/converge when they're all trained on each other's and their own material.
Lots of art techniques-- e.g. a basic set of shapes to help draw heads with correct proportions-- are formulaic. That doesn't make them any less artistic than someone just throwing down marks based on their gut instinct. Music itself has been boiled down to a mathematical and artistic scaffolding that people have worked within for centuries. There are also a whole hell of a lot of creative decisions in between the sheet music and the finished product: instrumentation and all of the subtleties that go into it, recording itself is SO deep and heavily affects the outcome, everything that goes into post production to add all of the color and texture that makes it great... even within recording it, do you know how many hours people spend poring over really great studio takes because they all land a little differently? This just takes what other people have already done and mushes it together. Is it interesting? Interesting as hell. Could it be useful to people making art? I'm a commercial artist in a different realm and generative AI has been very useful in generating specific references or doing some sort of concept test instead of sketching it out. It could be used as a tool to help make art, but is it making art? Not by any definition I think makes sense. The idea that this is just an incremental step in current commercial music production is only possible if you don't actually know what it takes to make it. The very same thing is true of video games, TV & movies, graphic design, etc. etc. etc.
It would be a net positive if it didn't facilitate a whole bunch of Dunning-Kruger 'artists' summiting Mt. Stupid and flooding the lower end of the commercial art markets with vaguely convincing-looking bullshit. It weakens our artistic culture, waters down professional standards at the lower end of the markets, putting a whole bunch of workaday or entry-level people out of jobs, and in turn pushes less qualified people upmarket, which makes it harder to hire and find work, and weakens salaries. Also, since the MBAs in charge will inevitably shove these tools down people's throats, it takes the most enjoyable parts of the artistic process away from you, replaces it with a bunch of crap amalgamated from a billion other people's work, and leaves the fussy, irritating, soulless, and completely not-rewarding task of cleaning up and enhancing some algorithm's bullshit. It would be like if developers had to start every single coding task with an ever-changing template of complete clusterfuck spaghetti code made by the worst intern at your company.
Yes, but I imagine if you put a human inside a brain scanner, add a feedback loop that uses AI, then you can generate novel stuff with the human being used to just passively rate the output of the AI. You could build a small farm with humans, and generate novel commercial music.
Yeah, I asked for an instrumental with clarinets, piano, upright bass, and baritone saxophone, and it was a song with cheesy lyrics about these instruments. If it doesn't understand "instrumental", it just sucks.
I'm sure it's easy to fix because in the custom mode you can put separate lyrics and style, but you can't generate lyrics based on a theme and set your own separate style yet. You can generate lyrics with ChatGPT of course.
"A ChatGPT for Music Is Here. Inside Suno, the Startup Changing Everything"
My fucking god, I'm really disgusted by "journalists" who hype up everything so that they can get a few clicks. Journalism ought to be critical, objective and investigative.
https://news.ycombinator.com/item?id=38743719
(336 points/86 days ago/165 comments)