CheckNews
The new model, developed by OpenAI, passed itself off as a person with a “visual impairment” to convince someone to solve a captcha on its behalf, according to a report. A disturbing piece of manipulation.
If you were already impressed by the writing quality of ChatGPT, the chatbot developed by the American company OpenAI, you may well be blown away by GPT-4, its latest artificial intelligence model. Officially available since Tuesday, GPT-4 is said, for example, to be “40 percent more likely” to produce factually accurate answers than ChatGPT, which was based on the GPT-3.5 model.
And the differences in actual performance are significant. To test its models’ capabilities, OpenAI has them take exams, such as the American bar exam (the test required to become a lawyer in the United States): where the previous version of GPT scored among the bottom 10 percent of (human) test-takers, GPT-4 performs at the level of the top 10 percent.
Simulating behavior
But the further AI advances, the greater the potential risks. Before releasing this new version to the public, OpenAI says it spent “six months making GPT-4 safer and more aligned.” The latter term refers to a branch of AI research concerned with ensuring that an artificial intelligence system’s behavior “aligns” with the intentions of its designers, thereby preventing the model from harming human interests.