"we want to be as open as possible but we fear there are dangers so we'll just close it at some point but make it as benefitting to everybody as possibe"
seem to really be both what they agreed on and are currently doing, especially with GPTs and such.
Of course open source would have been better, but even if you disagree with their assessment of the danger, you can respect that they do try to make it as cheap and useful as possible.
There's always the question of who controls a technology when its creators themselves claim it can be dangerous, but what institution can claim to be completely democratic and immune to abuse by bad actors? What model would have been better?
I also don't get the profit angle... they need cash to train models, so it's normal to monetize.
They are not (even remotely) making it "as cheap / useful as possible". The cheapest possible is free: they could let everyone download the models, but they don't. The most useful would be open source, so people could build on it, collaborate, and innovate. They're not doing any of that, just making money.
"we want to be as open as possible but we fear there are dangers so we'll just close it at some point but make it as benefitting to everybody as possibe"
seem to really be both what they agreed on and are currently doing, especially with GPTs and such.
Of course open source would have been better but even if you disagree on the assessment of the danger, you can respect the fact they still indeed try to make it as cheap / useful as possible.
There's always a question that who is in control of a tech when themselves claim it can be dangerous, but what institution can pretend to be completely democratic / not prone to abuse by bad actors ? What model could have been better ?
I also don't get the profit angle... they need cash to train stuff so it's normal to monetize.