> See it as something that has the potential to disrupt your life in ways that you could or could not see.
You see, this is one of the big problems. Most of the disruptive tech that has come to dominate our lives over the last decade or two has been for the worse. Yes, it brings advantages, but it brings larger disadvantages.
AI threatens the same sort of deal, but with even higher stakes.
However, the author is right. It's here and has to be dealt with somehow. I still haven't figured out how I'll personally deal with it[1]. I'm taking it one step at a time, but at the moment, the only defense appears to be disengagement.
[1] I'm talking about LLMs and generative AI here. I work on industrial DL systems and so am not a knee-jerk opponent to "AI" in general.
> AI could hurt us, but for the most part, it’s supposed to be a good thing: nature of its impact will depend on large-scale Data Citizen collaboration
I don't know what that means or looks like, and the author provided zero evidence or examples backing up their claim.
I'd be happy to clarify. I should probably reference some of the statements made by Eric Schmidt, to the effect that it's difficult to predict the impact of AI. But if there's another point you were referring to, feel free to let me know so we can discuss.