
I suspect one can go a lot further by adopting some tweaks from the GPT-2 speedrun effort [0]: at minimum Muon, better init, and carefully tuned learning rates.

[0]: https://github.com/KellerJordan/modded-nanogpt
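
For concreteness, the core of Muon is plain SGD-momentum followed by an approximate orthogonalization of each 2D weight update via a Newton-Schulz iteration. A minimal PyTorch sketch, with the quintic coefficients from the modded-nanogpt repo (treat it as illustrative, not a drop-in optimizer):

    import torch

    def newtonschulz5(G, steps=5, eps=1e-7):
        # Approximately orthogonalize G with a quintic Newton-Schulz
        # iteration. Coefficients are the tuned values from modded-nanogpt.
        a, b, c = (3.4445, -4.7750, 2.0315)
        X = G.bfloat16()
        X = X / (X.norm() + eps)  # scale so the spectral norm is <= 1
        if G.size(0) > G.size(1):
            X = X.T
        for _ in range(steps):
            A = X @ X.T
            B = b * A + c * A @ A
            X = a * X + B @ X
        if G.size(0) > G.size(1):
            X = X.T
        return X

    def muon_step(param, grad, buf, lr=0.02, momentum=0.95):
        # One simplified Muon update for a 2D weight matrix:
        # momentum accumulation, then an orthogonalized step.
        buf.mul_(momentum).add_(grad)
        param.data.add_(newtonschulz5(buf), alpha=-lr)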


I gave it a shot with periplus.app :). Not perfect by any means, but it's a different UX than chat so you might find it interesting.


Looks like a great start. I played around with it a bit yesterday and today; I've basically been doing the same with my own CLI, but the UI you came up with helps a great deal with navigation and resuming learning :)

One issue I found is the typical "LLM accuracy" problem, with seemingly no recourse. I tried generating courses for topics I already know well, just to review how accurate it is. While it gets most of the details correct for popular subjects (ex: "Electronic Music Fundamentals"), less popular subjects (ex: "Scene Transitions with Octatrack") are riddled with errors (both in the "docs" and the quizzes/exercises), and I cannot find a way of correcting/adjusting/reporting them.


Yeah, it's still hard to deal with LLM knowledge gaps (fwiw, Study mode would also be prone to this). I do try to catch the super obvious stuff and put up a disclaimer, but it's far from perfect.

I had some prototypes grounding the generations in web search, but the APIs are still super expensive on that front + the models tend to overindex on the results.


This looks super cool—I've imagined something similar, especially the skill tree/knowledge map UI. Looking forward to trying it out.

Have you considered using the LLM to give tests/quizzes (perhaps just conversationally) in order to measure progress and uncover weak spots?


There are both in-document quizzes and larger exams (at a course level).

I've also been playing around with adapting content based on their results (e.g. proactively nudging complexity up/down) but haven't gotten it to a good place yet.
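
The rough shape of the nudge is something like this toy heuristic (purely illustrative; the real logic, thresholds, and names are all still in flux):

    def nudge_complexity(level, recent_scores, low=0.5, high=0.85):
        # Toy heuristic: move the target complexity (1-5) up or down
        # based on recent quiz accuracy. Thresholds are placeholders.
        if not recent_scores:
            return level
        accuracy = sum(recent_scores) / len(recent_scores)
        if accuracy > high:
            return min(level + 1, 5)
        if accuracy < low:
            return max(level - 1, 1)
        return level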


Nice, I've been playing with it a bit and it seems really well done and polished so far. I'm curious how long you spent building it?

Only feedback I have so far is that it would be nice to control the playback speed of the 'read aloud' mode. I'd like it to be a little bit faster.


Glad you like it!!

I've been working on it on-and-off for about a year now. Roughly 2-3 months if I had worked on it full-time, I'm guessing.

re: playback speed -> noted, will add some controls tomorrow


Just added a proper playback control component on desktop: it allows changing the rate and rewinding, and it persists across pages :)!


Awesome! Will try it soon.

What's your GTM plan? You built an amazing app—I hope you are focusing as much on marketing as adding features! I think a lot of people will like this if you get it in front of them.


I honestly have no idea how to market this. Most of my users came from a single Discord server, but the feedback is overall pretty positive.

If you have any tips I'd be super grateful (gave you a follow on X).


Here are some ideas, in no particular order:

- Find relevant subreddits on Reddit and post about the tool in an authentic way (not promotional, but a "here's what I built" style).

- Do a Show HN. Look at other people's successful ones to get an idea of the tone/style that works.

- ProductHunt (read this for suggestions on how to do it well: https://www.lennysnewsletter.com/p/how-to-successfully-launc...)

- Content marketing: do SEO keyword research and create pages/posts that target promising keywords. This product in particular probably has a lot of potential with this strategy, since you can use the tool itself to create interesting content which also shows off the features.

- YouTube: similar to above, but find relevant videos from other creators that are doing well and try to create your own videos in a similar style, looping in the product where it makes sense. Like with Reddit and HN, it's better to be authentic and not promotional.

- Media: reach out to popular blogs and publications that cover learning tools.

If you have any cash available to invest (even a small amount), you can also try:

- Reach out to influencers on YouTube, X, TikTok, etc. either directly or via promotion marketplaces. You'll likely have to pay, but it can be high ROI since you already have a paid plan.

- Similarly, you can try ads on Reddit, AdWords, Facebook, X, etc. and see if any of them offer immediate positive ROI.

Depending on your goals, you could also consider applying to YC. It will give you a significant marketing boost if you get in, but requires thinking in terms of how to build a big business, which isn't for everyone. The product might be good enough to give you a shot, if you're interested in that route, but it would also probably help your chances to try some organic marketing strategies first to prove that you can do it. Also helps if you have a cofounder.

--

On a different note, the new audio playback controls are great! Would be nice to also have them on mobile :) My preferred way to consume these courses would be on mobile, like an audio book or podcast.


Thanks a lot! I did do some of these things (namely Reddit) and that worked well; it's just that the number of places that allow posting is limited, and I don't want to get too spammy. Will continue there.

The main conceptual issue I've been having with other marketing (e.g. influencers) is that there isn't a well-defined audience to market this to. Edtech usually targets students/schools, and Periplus doesn't fit there too well. Need to find what works, I guess.

I'll spend more time on it from now on :). Thanks again.

re: mobile playback controls -> on it


Hey, I've been running into some bugs with audio playback. Where should I report these?


Hey, sorry for seeing this late!

You can send me a message on Discord (dcbcdefb) or email (support (at) periplus dot app).

Also available on Twitter!


Honestly, I thought they would take this a bit further; there is only so much you can do with a prompt and chat. It seems fine for surface-level, bite-sized learning, but I can't see it working that well for covering whole topics end to end.

The main issue is that chats are just bad UX for long-form learning. You can't go back to a chat easily, or extend it in arbitrary directions, or easily integrate images, flashcards, etc.

I worked on this exact issue for Periplus and instead landed on something akin to a generative personal learning Wikipedia: structure through courses, exploration through links, embedded quizzes, etc. Chat is on the side for interactions that do benefit from it.

Link: periplus.app


Still working on https://periplus.app!

It's an environment for open-ended learning with LLMs. Something like a personalized, generative Wikipedia. Has generated courses, documents, exams, flashcards, maps and more!

Each document links to more documents, which are all stored in a graph you grow over time (very Obsidian-esque).
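
If you're curious about the shape of it, the graph is conceptually just documents pointing at other documents. A toy sketch (not the actual schema; names are made up):

    from dataclasses import dataclass, field

    @dataclass
    class Doc:
        title: str
        body: str
        links: list[str] = field(default_factory=list)  # titles of linked docs

    class LearningGraph:
        # Grows as you follow links: every generated doc becomes a node.
        def __init__(self):
            self.docs: dict[str, Doc] = {}

        def add(self, doc):
            self.docs[doc.title] = doc

        def neighbors(self, title):
            # Docs one hop away that have already been generated.
            return [self.docs[t] for t in self.docs[title].links if t in self.docs]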


No, you don't need to try to keep up with new tools. I would recommend you try the models though, even for a short time every few months. Send them questions or things you're working on, and see how they do. Provide sufficient context.

It's a good approximation to say that all tools are thin wrappers on top of the models, and having a good grasp of what the models can/can't do right now gets you 80% of the way there.


Before LLMs (not a hard-set order): IDE/interface -> Stack Overflow -> Docs -> Library code -> GitHub

LLMs now slot in first or second, typically completely eliminating SO. Others still provide value.


Woah so I bet you are an LLM power user


One underdiscussed advantage is that an LLM makes knowledge language-agnostic.

While this is less obvious to people who primarily consume en.wiki (as most things are well covered in English), for many other languages even well-understood concepts often have poor pages. But even the English wiki has large gaps that are covered in other-language wikis (people and places, mostly).

LLMs get you the union of all of this, in turn viewable through arbitrary language "lenses".


Google will also have good results to report for this year's IMO; OpenAI just beat them to the announcement.


I think Google did some official collaboration with the IMO and will announce later. Or at least that's what I read into the IMO official saying "AI companies should wait 1 week before announcing so that we can celebrate the human winners" and "to my knowledge oai was not officially collaborating with IMO" ...


The conclusion is that research takes time to productize, and this is cutting-edge research. OAI employees stated that there isn't anything math-specific (think AlphaGeometry) about this model. It's a general system.


Honestly might be more indicative of how far behind vision is than anything.

Despite the fact that CV was the first real deep learning breakthrough, VLMs have been really disappointing. I'm guessing that's in part because basic interleaved web text+image next-token prediction is a weak signal for developing good image reasoning.


Is anyone trying to solve OCR? I often think of that Anna's Archive blog post about how we basically just have to keep shadow libraries alive long enough until the conversion from PDF to plaintext is solved.

https://annas-archive.org/blog/critical-window.html

I hope one of these days one of these incredibly rich LLM companies accidentally solves this or something; it would be infinitely more beneficial to mankind than the awful LLM products they are trying to make.


You may want to have a look at Mistral OCR: https://mistral.ai/news/mistral-ocr
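
Basic usage with their Python client looks roughly like this (from memory of their docs, so double-check the current API):

    import os
    from mistralai import Mistral  # pip install mistralai

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    # OCR a hosted PDF; each page comes back as markdown.
    resp = client.ocr.process(
        model="mistral-ocr-latest",
        document={"type": "document_url",
                  "document_url": "https://example.com/some-scan.pdf"},
    )
    print("\n\n".join(page.markdown for page in resp.pages))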

