ChatGPT creator seeks solution to prevent an artificial intelligence apocalypse

OpenAI, the creator of ChatGPT, has proposed regulations that would limit the capabilities of artificial intelligence.

A blog post about the regulation of super-intelligent programs, co-authored by OpenAI founder Sam Altman, appeared on the company’s blog. The authors describe superintelligence as a far more powerful technology than anything humanity has created before.

Well-known figures in the technology community, such as Elon Musk and Apple co-founder Steve Wozniak, have warned the developers of artificial intelligence that progress is moving too fast and without safeguards. Interestingly, the head of OpenAI agrees.

Altman recently testified before the U.S. Congress, where he advocated for more regulation of the industry.

AI comparable to nuclear power

Sam Altman, along with his co-authors, compares the development of artificial intelligence to that of nuclear energy and synthetic biology. Preventive measures cannot be “reactive”; they will have to be put in place now, in advance.

The main goal is to prevent a super-intelligent artificial intelligence from ever being able to threaten humanity. This does not mean, however, that OpenAI will stop its work or that development will cease.

Indeed, Altman admits in the post that it would be counterintuitive, risky, and difficult to halt work on artificial intelligence. And given that open-source projects are gaining popularity and becoming more advanced, stopping their creation seems downright impossible.

OpenAI wants development to be controllable

OpenAI wants measures put in place in case superintelligence starts to become a problem for all of humanity. One example given is the IAEA, the International Atomic Energy Agency, an intergovernmental body that promotes the peaceful use of nuclear energy.

Another solution, Altman suggested, is simply to limit how much artificial intelligence could be developed in one year.

“We could collectively agree… that the rate of growth in AI capability at the frontier is limited to a certain rate per year. And of course, individual companies should be held to an extremely high standard of acting responsibly.”

Artificial intelligence is booming right now, with Google and many other companies beginning to integrate it into everything from operating systems to search engines.