Michael Beijer wrote:
I wonder what that's all about?
Just before the ouster of OpenAI chief Sam Altman, researchers at the company warned the board in a letter about a sweeping new discovery the company had made: a technological breakthrough with the potential to "harm humanity" and make millions of jobs obsolete.
With OpenAI's new technical breakthrough, the language model might be able to go a step further. Experts envision an AI system that not only answers questions but can also pursue goals: a system that, given an objective, independently tries all kinds of steps until it achieves it. That poses all sorts of risks, such as systems being used to circumvent digital security or to harm individuals.
According to experts, it is therefore understandable that the OpenAI board hit the brakes. "If you give a language model the task of improving itself, for example, it becomes very risky," they say. "The model then starts tinkering with itself. It may direct its own development or perhaps copy itself. Before you know it, you're no longer in control."
[Edited at 2023-11-24 09:01 GMT]