It takes a significant amount of time (a few hours) on a single consumer GPU, even a 4090 or 5090 in a personal machine. I think most people use online services like RunPod or Vast.ai to rent high-powered H100s and similar GPUs for a few dollars per hour, run the fine-tuning/training there, and use their local GPUs just for inference on the fine-tuned models produced on those cloud-rented instances.
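To make the second half of that workflow concrete, here is a minimal sketch of the local-inference step. It assumes Hugging Face transformers (plus accelerate) is installed, and the checkpoint name is hypothetical, standing in for whatever weights you pulled down from the rented instance:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical checkpoint: the weights produced on the rented cloud GPU,
    # downloaded (or pushed to a model hub) after fine-tuning finished.
    ckpt = "my-org/my-finetuned-model"

    tok = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForCausalLM.from_pretrained(ckpt, device_map="auto")  # places weights on the local GPU

    inputs = tok("Hello, world", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=50)
    print(tok.decode(out[0], skip_special_tokens=True))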
It used to be that way! Interestingly, I find that people in large orgs and general enthusiasts don't mind waiting; memory usage and quality are more important factors!
I think the author's main point is that its product revenue is flat (iPhone) or dropping (all other products), in comparison to (I guess) other tech companies that are still growing.
For anyone curious, this video is about the Cloudsurfing glitch in Final Fantasy VII, which allows players to walk around unaffected by the game's geometry. It goes into a technical deep dive into the 3D geometry that causes this.
> It costs $29 per thousand to run an ad in my videos, and I get $10 per thousand. Where does the other $19 go? To YouTube, of course. That’s a 2:1 split in favor of the platform. Lord, give me strength.
I thought the split was something like 55-45 in the creator's favor. This sounds more like 66-33 in YouTube's favor; is this typical for other creators/influencers as well?
It is 55-45 (with 55% going to the creator). I suspect the author read some data incorrectly (the YouTube analytics page can be a bit confusing).
The RPM she achieved is excellent (the average in many niches is $1-$2); she could have been incredibly successful if she had trimmed down her production costs.
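For what it's worth, a quick back-of-the-envelope check of the quoted figures (a sketch; it assumes both numbers describe the same thousand ad impressions, which the post doesn't actually guarantee):

    # Sanity check on the quoted figures.
    advertiser_cost = 29.0   # $ the advertiser pays per 1,000 impressions
    creator_revenue = 10.0   # $ the creator receives per 1,000 impressions

    print(f"{creator_revenue / advertiser_cost:.1%}")  # ~34.5%, roughly a 1/3 share
    print(f"{0.55 * advertiser_cost:.2f}")             # 15.95, what a 55% share would pay

So either the two figures come from different line items in the analytics, or her effective split really was closer to 1:2 than 55-45.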
She could never have been incredibly successful just by cutting costs. She posted a video a week that cost $3,500 to make and brought in $1,000.
If production had cost her nothing, she would still only have made $1k a week, for something that took up half of her time.
There are very successful people on YT making high-budget video and very successful people making shoestring video. She didn't really have the following to do either. What success she did have on YouTube was driven by her following elsewhere, and she only kept losing money for so long because she was motivated by non-YouTube goals.
Curious: wouldn't doing something like this qualify as illegal?
> Google has gone through great lengths to obfuscate its involvement, funding, and control, most notably by recruiting a handful of European cloud providers, to serve as the public face of the new organization. When the group launches, Google, we understand, will likely present itself as a backseat member rather than its leader. It remains to be seen what Google offered smaller companies to join, either in terms of cash or discounts.
> Google offered CISPE’s members a combination of cash and credits amounting to an eye-popping $500 million to reject the settlement and continue pursuing litigation. Wisely, they declined.
> ... putting forward paid commentators to discredit us.
> Google pivoted to stand up its own astroturf lobbying organization. It hired a lobbying and communications agency in Europe to create and operate the organization. And it recruited several small European cloud providers to join. One of the companies approached, who ultimately declined, told us that the organization will be directed and largely funded by Google for the purpose of attacking Microsoft’s cloud computing business ... [the document] omits any mention of Google’s involvement and the actual purpose of the organization.
The difference is that Meta and the other FAANG companies make hundreds of billions of dollars in annual revenue and can hire top talent to make their AI run well on whatever GPU they choose for their data centers.
Consumers, open-source projects, and smaller companies unfortunately can't afford this, so they would be fully dependent on AMD and other vendors to close this implementation gap. Ironically, that means smaller companies may prefer Nvidia just so they don't have to worry about odd GPU driver issues in their workloads.
But Meta is the main company behind PyTorch development. If they make it work and upstream it, the improvements will cascade to all PyTorch users.
We don't have to imagine very hard; it's slowly happening. PyTorch on ROCm is getting better and better!
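A small illustration of why upstreaming matters (a sketch; it assumes a ROCm build of PyTorch and a supported AMD GPU): the ROCm build reuses the familiar CUDA-facing API, so existing code runs unchanged.

    import torch

    # On a ROCm build, torch.version.hip is a version string (it is None on
    # CUDA builds), yet the device string is still "cuda", so CUDA-targeted
    # code runs as-is on a supported AMD GPU.
    print(torch.version.hip)
    print(torch.cuda.is_available())

    x = torch.randn(1024, 1024, device="cuda")  # lands on the AMD GPU under ROCm
    print((x @ x).sum())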
Then they will still have to fix the split between data-center and consumer GPUs, for sure. From what I understand, this is on the roadmap, with both GPU lines converging on the UDNA architecture.
I'm guessing most recent dissertations have been digitized, but that has probably been the norm only for the last 10-15 years. Most universities have likely never given thought to digitizing anything from before then, due to the extra costs involved in scanning those physical copies. I am curious how much such an effort would cost, though.
I should have qualified this with "the engineering departments at UC Berkeley". Everything we put out (papers, technical reports, open source software) was on the Internet. Formats varied; LaTeX and PostScript were commonly used, with PDF coming a bit later.