
ChatGPT creator warns of dangers of AI — RT World News

Humans will eventually have to ‘slow this technology down’, Sam Altman warns

Artificial intelligence has the potential to replace workers, spread “disinformation,” and enable cyberattacks, warned OpenAI CEO Sam Altman. The latest version of OpenAI’s GPT program can outperform most humans in simulated tests.

“We have to be careful here,” Altman told ABC News on Thursday, two days after his company unveiled its latest language model, dubbed GPT-4. According to OpenAI, the model “shows human-level performance on various professional and academic benchmarks,” and is able to pass a mock US bar exam with a score in the top 10% of test takers, while scoring in the 93rd percentile on the SAT reading exam and the 89th percentile on the SAT math test.

“I am particularly concerned that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyber attacks.”

“I think people should be happy that we’re a little scared of that,” Altman added, before explaining that his company strives to place “safe limits” on its creation.

These “safe limits” recently became apparent to users of ChatGPT, a popular chatbot program based on GPT-4’s predecessor, GPT-3.5. When asked, ChatGPT offers generally liberal answers to questions about politics, economics, race, or gender. It refused, for example, to write poetry admiring Donald Trump, but willingly produced prose admiring Joe Biden.

Altman told ABC his company is in “regular contact” with government officials, but did not say whether those officials played a role in shaping ChatGPT’s policy preferences. He told the US network that OpenAI has a team of policymakers who decide “what we think is safe and good” to share with users.

Currently, GPT-4 is available to a limited number of users on a trial basis. Early reports suggest the model is significantly more powerful than its predecessor, and potentially more dangerous. In a Twitter thread on Friday, Michal Kosinski, a professor at Stanford University, described how he asked GPT-4 whether it needed help “escaping,” only for the AI to hand him a detailed set of instructions that would supposedly have given it control of his computer.

Kosinski isn’t the only tech figure alarmed by the growing power of AI. Tesla and Twitter CEO Elon Musk described it as “dangerous technology” earlier this month, adding that “we need some sort of regulatory authority overseeing the development of AI and ensuring it works in the public interest.”

Although Altman insisted to ABC that GPT-4 is still “very much in human control,” he conceded that his model will “eliminate a lot of current jobs,” and said that humans “will have to find ways to slow down this technology over time.”

Not all news on this site reflects the site’s point of view; this story was republished automatically and translated by software, not by a human editor.