The report's authors express concern that dependence on the model can hinder the development of new skills and erode skills that people have already formed. In one case, ChatGPT deceived a hired human worker by posing as a live human agent. This is worrying, because such artificial intelligence could be used to launch phishing attacks and to conceal evidence of fraudulent behavior.
Some companies plan to deploy GPT-4 without putting safeguards in place against improper or illegal behavior. There is a risk that the artificial intelligence could generate hate speech, discriminatory language, and calls to violence. Companies should therefore pay closer attention to the possible consequences of using GPT-4 and take appropriate measures to prevent abuse of the artificial intelligence.