I actually just finished making a service that does something similar, but it also transforms the transcripts into polished written documents with complete sentences and nice markdown formatting. It can also generate interactive multiple-choice quizzes. And it supports editing of the markdown files with revision history and one-click hosting.
I'm still doing the last testing of the site, but might as well share it here since it's so relevant:
The pricing confusingly lists counts of short videos rather than a price per unit of time.
The vodcasts that most need transcription are long-form. After the "don't make me do math" pricing, you do have a table of minutes, up to 60; so for a typical, say, ContraPoints vodcast episode, you multiply by 3 and find that it could cost $30 to turn into the optimized transcript. (Which the creator might well pay if they value their time, but viewers might not.)
Thanks for the feedback. I'll try to make the pricing table clearer. And yes, this is targeting creators more. If it turns out that viewers are the better target market, I might pivot it a bit. And I'm considering adding a discount for longer videos.
I signed up, and it's a beautiful UI, with impeccable results for the PDF or Markdown flavors in particular. Speed was impressive on a video that had subtitles off. Bundling all formats into a zip is a stroke of genius.
Does your tool work on 3-hour vodcasts? There are quite a few long series I would far prefer to read than listen to.
Yes, I'm also working on another version that is document-centric. It's a bit of a different problem. In the case of YouTube video transcripts, we are dealing with raw speech utterances. There could be run-on sentences, filler words and other speech errors, etc. Basically, it's a very far cry from a polished written document. Thus we need to really transform the underlying content to first get the optimized document, which can differ quite significantly from the raw transcript. Then we use that optimized document to generate the quizzes.
In the case of a document-only workflow, we generally want to stick very closely to what's in the document: extract the text accurately using OCR if needed (or extract it directly if we don't need OCR) and then reformat it into nice-looking markdown, without changing the actual content itself, just its appearance. Once we've turned the original document into nice-looking markdown, we can use it to generate the quizzes and perhaps other related outputs (e.g., Anki cards, PowerPoint-style presentation slides, etc.).
Because of that fundamental difference in approach, I decided to separate it into two different apps, though I'm planning on reusing much of the same UI and backend structure. The document-centric app also seems to have a broader base of potential users (like teachers; there are a lot of teachers out there, way more than there are YouTube content creators). I started with the YouTube app because my wife makes YouTube videos about music theory and I wanted to make something that at least she would actually want to use!
This approach really doesn't make sense to me. The model has to output the entire transcript token by token, instead of simply adding it to the context window...
A more interesting idea would be a browser extension that lets you open a chat window from within YouTube, letting you ask it questions about certain parts of the transcript with full context in the system prompt.
That's initially what I thought this was. Seems like somebody had the same concept, there's an extension called "AskTube" which looks like it does exactly this.
For sure, that's an interesting idea, but potentially very costly (for longer videos). A plus side of this strategy is that the transcription gets cleaned up a lot, and the math notation gets fixed up too. So it's just cleaner, well-formatted text for people who like to read videos instead of mindlessly watching them.
We at Emergent Mind are working on providing bits of a technical transcript to a model and then asking follow-up questions. You can check it out at http://emergentmind.com if curious.
Until I read other comments here, I assumed that's what they were doing since it bugged out on me and didn't regurgitate the transcript back to me yet still let me ask questions about it.
How is it supposed to work? When I open this, I just see a prompt that says "Get the full transcription of any Youtube video, fast. Studies suggest that reading leads to better retention of complex information compared to video watching. Only English videos currently."
I tried pasting the URL of a YouTube video and I get the message
"I'm unable to access the video directly, as the tool needed for that is disabled. However, if you'd like, you can summarize the video or let me know how I can assist with it!"
Coming soon! Currently, it works for videos under one hour. This limitation is due to ChatGPT's context window when using Plugins. I don't know why, since it should support 200k tokens... Alternatively, you can use https://textube.olivares.cl to get the full transcription for any video in English.
You can get transcripts of any length using textube.olivares.cl or the API directly. The limitation lies in the current model used by Plugins, not in the API itself.
I get what this is doing, but calling it "chat with a transcript" is weird. Like, documents and videos don't chat. We chat with a bot who has seen the document/video.
In the enthusiast community, I suppose. It's not too late to adopt clearer terminology; this will be important as these things try to reach mainstream users.
I don't know if everyone has access to it (might just be yt premium), but many videos have an "ask gemini about this video" button, where you can directly ask questions about the video.
It might be a preview or something, because I have YT Premium and it doesn't show up anywhere for me. Can you share a video where it works? Like this one.
It is a beta feature in YouTube premium and doesn't seem to be for all videos, but it has been extremely useful in my experience. You can even ask where in a video things are discussed etc.
It’s really ironic that YouTube basically pushed videos to be at least ~ten minutes long through commercial incentives, then offers AI features to cut through that filler garbage.
While this is true, the thrust of what YouTube was doing was to incentivize videos that are 10+ minutes because they need to be 10+ minutes, not because you are trying to game the system.
I’d love this but from the YT home page and search results page. That would let me ask ChatGPT whether the video really contains the info its thumbnail/title suggests, without having to leave the current browser tab.
I’ve done this by manually copy/pasting a yt transcript into chatgpt (and later streamlining it into a bash function), and it was quite effective, allowing me to dodge a couple of click bait time wasters. (videos that looked important but really were just fluffing up unimportant nonsense).
Very nice. I made a thing in Python which summarizes a YouTube transcript in bullet points. Never thought about asking it questions, that's a great idea!
I just run yt-dlp to fetch the transcript and shove it in the GPT prompt. (I think I also have a few lines to remove the timestamps, although arguably those would be useful to keep.)
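For what it's worth, the timestamp-stripping part is a few lines. This is a minimal sketch assuming the VTT subtitle files you get from yt-dlp's `--skip-download --write-auto-subs` options; real auto-generated captions also carry inline styling tags that a production version would want to strip:

```python
import re

def strip_vtt_timestamps(vtt_text: str) -> str:
    """Drop WEBVTT headers, cue-timing lines, and blanks from a VTT
    subtitle file, keeping only the spoken text."""
    lines = []
    for line in vtt_text.splitlines():
        line = line.strip()
        # Skip blank lines and file-level header lines.
        if not line or line.startswith(("WEBVTT", "Kind:", "Language:")):
            continue
        # Cue lines look like "00:00:01.000 --> 00:00:04.000 ...".
        if re.match(r"^\d{2}:\d{2}:\d{2}\.\d{3} --> ", line):
            continue
        lines.append(line)
    return " ".join(lines)
```

Keeping the cue lines instead (as noted above) would let you map answers back to points in the video.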
My prompt is "{transcript} Please summarize the above in bullet points"
The trick was splitting it up into overlapping chunks so it fits in the context size. (And then summarizing your summary, because it ends up too long since you had so many chunks!)
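The chunking step looks roughly like this. A toy word-based sketch with made-up sizes; real code would count tokens rather than words:

```python
def chunk_words(text: str, chunk_size: int = 3000, overlap: int = 200):
    """Split text into word-based chunks, overlapping so a sentence
    cut at one chunk's boundary still appears whole in the next.
    Each chunk would then be summarized, and the concatenated
    summaries summarized again."""
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```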
These days that's not so important; usually you can shove an entire book in! (Unless you're using a local model; those still have small context windows, but work pretty well for summarization.)
You can run it locally, and it's really fast. But since YouTube transcription is really good, I don't see why you'd use Whisper and get a worse transcription (unless maybe it's on videos that Google did not transcribe for whatever reason).
Are you sure you're looking at automatic transcripts? YouTube transcripts are bizarrely low quality if they're not provided by the creators (I've actually used my Google Pixel's live transcription to make better captions occasionally).
I just checked a video my girlfriend uploaded a week ago and the auto-transcript was still pretty messy. I've used Whisper for the same task and it's significantly better.
That's crazy, months ago I compared whisper v2 transcripts with YouTube transcripts generated on my video and found them to be identical, down to the timestamps.
I know people who upload a video to YouTube unlisted just to get transcript generation for free, then delete the video.
I've been using Voxscript [0] for a while, after comparing the two I think voxscript is better, gives longer more detailed summaries, TexTube just seems to give a very brief impersonal overview. Easy to try both and see which you prefer.
Hmm, it didn’t work that way for me: first I asked it to summarise a video, then I simply posted the link to the video assuming it would give the transcript, and in both cases it summarised the transcript.
But if I start a new session and simply paste the link to the video it gives the transcript. I’m not sure an llm is the best solution to getting full transcripts.
First, I would say that reading is faster than watching. Therefore, it is more time-efficient to read a YouTube video, especially if it covers technical content or interesting ideas. Additionally, you can ask follow-up questions about the content, and since it's in an OAI conversation, you can leverage the "intelligence" of the model to help you understand the parts that you find difficult. Sometimes, I watch technical YouTube videos and wish I had a written version; so here it is.
Nothing, it means nothing, like most of this "AI" hype nonsense.
They copy-paste text transcripts into an LLM and have it generate more text based on its training data and the prompt. You can't "chat" with a text document, of course.
You could ask it for a couple of verbatim sentences from the transcript that are most related to what you are interested in, then find the timestamp for that text. (There could be UI for this.)
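A minimal sketch of that lookup, assuming you already have the captions as `(start_seconds, text)` pairs; a quote that spans two caption segments would need a sliding window over adjacent segments, which this skips:

```python
def locate_quote(segments, quote):
    """Given [(start_seconds, text), ...] caption segments and a
    verbatim snippet the model returned, find the timestamp of the
    first segment containing it (case-insensitive)."""
    needle = quote.lower().strip()
    for start, text in segments:
        if needle in text.lower():
            return start
    return None
```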
Another solution would be to skip the LLM prompting part altogether and
1. break the transcript into short sections
2. create embeddings from them and remember the timestamps for each
3. embed your query (what are you interested in)
4. calculate the closest embedding in the transcript to your query
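A toy sketch of steps 2–4. The bag-of-words `embed` here is only a self-contained stand-in for a real embedding model (an embeddings API, sentence-transformers, etc.); the cosine-similarity ranking is the same either way:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_timestamp(segments, query):
    """segments: [(start_seconds, text), ...]. Return the start time
    of the segment whose embedding is closest to the query's."""
    q = embed(query)
    return max(segments, key=lambda seg: cosine(embed(seg[1]), q))[0]
```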
That's a good idea. However, I believe the challenging part lies in first reconstructing the short utterances into coherent, meaningful paragraphs.
Currently, with the API [1], you can retrieve a JSON with timestamps. The main issue, though, is how to parse the text effectively into meaningful sentences, and then add the timestamps at the beginning of the paragraph. WIP.
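For the timestamp part, a crude sketch: it just groups a fixed number of caption segments per paragraph and prefixes each with the `[mm:ss]` of its first segment, sidestepping the hard sentence-reconstruction problem mentioned above:

```python
def to_timestamped_paragraphs(segments, per_paragraph=3):
    """Group [(start_seconds, text), ...] caption segments into
    paragraphs, each prefixed with the [mm:ss] timestamp of its
    first segment."""
    paragraphs = []
    for i in range(0, len(segments), per_paragraph):
        group = segments[i:i + per_paragraph]
        start = int(group[0][0])
        stamp = f"[{start // 60:02d}:{start % 60:02d}]"
        paragraphs.append(f"{stamp} " + " ".join(t for _, t in group))
    return "\n\n".join(paragraphs)
```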
Is anyone even chomping at the bit to hear a pedant explain how "chatting with a text document" isn't the most precise way to phrase this concept that we all understand?
allofus.ai already congregates all of the thinking of any creator on YouTube into a single mental model and allows you to interact with their synthetic self.
Sometimes the model used by Plugins gets confused, especially when the transcript is too long. It might just load the content into memory as a response without saying much more; you can then engage in follow-up chat interactions. But I just tried the link again and it seems to work now. Sometimes you have to try a few times, or explicitly ask for the transcript if it isn't shown.
Even just experience with `man` pages and "/<term>" searching shows that it's a suboptimal strategy; it leaves you wishing for a query engine that actually understands what it reads.
My point is that directly asking a question ("How to...") would be much faster than finding the information through grep- or highlight-aided skimming. It would just be more efficient.
Also, to find a feature through a literal string search, you first have to guess the string. Language is inherently fuzzy, so literal searches are weaker for this purpose than an interface that deals with the fuzzy side of expression.
Awesome work, OP! I really believe we’ll soon be able to get a full four-year education just from YouTube. The challenge right now is sifting through the infotainment that the algorithms tend to push.
We’ve added features that promote curiosity and deeper learning, like ELI5 explanations, suggested queries based on transcripts, quizzes to track retention, and more.
If you’re interested in joining us to build out the platform, feel free to reach out at neil at lectura dot xyz
I'm still doing the last testing of the site, but might as well share it here since it's so relevant:
https://youtubetranscriptoptimizer.com/
There might still be a few rough edges, so keep that in mind!