Hacker News

Fine-tuning AI to be limited to a topic will be better by, like, next week.



I think it's actually going to be non-trivial to make these tools useful, smart, and unexploitable all at once. Hell, I remember being at school, where we had internet access through a very strict content moderation system. What do you think happened? We spent our entire time trying (and succeeding) to circumvent that system. We've already seen with Bing that they've pulled back very strongly on it to mitigate downsides; I'm not sure it's a totally solvable problem. Having said that, sure, as I said, I do think it has a place, I'm just not convinced it's a panacea.


Also add that these tools have to obey the political stance of what the education system should be teaching, based on the geography of the student. So the AI needs to be intelligent enough to do the "war of northern aggression" shit in some places, the "if you have sex before marriage you will die" shit in other places, the progressive mathematics or whatever it's called in other other places...

I'm imagining the infinite amount of weird model tuning that'd be necessary to ensure only some students learn about white flight depending on what state, county, or school district they're in.


> I think it's actually going to be non-trivial to make these tools useful, smart, and unexploitable all at once

I mean, you could say that about human teachers too. I don't think they need to be perfect to be useful. In the end, as Peter Thiel opined in his book, the future is humans and robots working together on solving bigger and bigger problems. The teacher's role will remain: working with AIs that tutor kids, managing that relationship and the relationship with the parents, and being the adult role model that kids will always need.


*humans and robots working together to find better and more fine-tuned ways to exploit labor


I'm fine with that. That's the new job, AI doctors. Prompt Counterengineering. AI suppressants.


I'm pretty sure sooner or later we'll collaboratively upload our prompts to PromptHub so that our PromptOps team can achieve AICI/AICD.

Because it's definitely not easier with 2 lines of code. Not at all.


Easy solution: just give up the moderation effort. Empower kids to do whatever the hell they want with the AI, as long as they manage to get their homework done.


I really don't understand the problem. If a kid spends the whole class trying to trick the AI into turning into Skynet, then tell them off.



