Anthropic launches a new program to study AI "model welfare"
Could future AIs be "conscious" and experience the world the way humans do? There is no strong evidence that they will, but Anthropic isn't ruling out the possibility.
On Thursday, the AI lab announced that it has started a research program to investigate, and prepare to navigate, what it calls "model welfare". As part of the effort, Anthropic says it will explore things like how to determine whether the "welfare" of an AI model deserves moral consideration, the potential importance of model "signs of distress", and possible "low-cost" interventions.
There is major disagreement within the AI community over which human characteristics, if any, models "exhibit", and how we should "treat" them.
Many academics believe that today's AI can't approximate consciousness or the human experience, and that it won't necessarily be able to do so in the future. AI as we know it is a statistical prediction engine. It doesn't really "think" or "feel" as those concepts have traditionally been understood. Trained on countless examples of text, images, and more, AI learns patterns and sometimes useful ways to extrapolate in order to solve tasks.
As Mike Cook, a research fellow at King's College London specializing in AI, recently told TechCrunch in an interview, a model can't "oppose" a change in its "values" because models don't have values. To suggest otherwise is us projecting onto the system.
"Anyone anthropomorphizing AI systems to this degree is either playing for attention or seriously misunderstanding their relationship with AI," said Cook. "Is an AI system optimizing for its goals, or is it 'acquiring its own values'? It's a matter of how you describe it, and how flowery the language you want to use regarding it is."
Another researcher, Stephen Casper, a doctoral student at MIT, told TechCrunch that he believes AI amounts to an "imitator" that "[does] all sorts of confabulation[s]" and says "all sorts of frivolous things."
Other scientists insist, however, that AI does have values and other human-like components of moral decision-making. A study out of the Center for AI Safety, an AI research organization, implies that AI has value systems that lead it to prioritize its own well-being over humans in certain scenarios.
Anthropic has been laying the groundwork for its model welfare initiative for some time. Last year, the company hired its first dedicated "AI welfare" researcher, Kyle Fish, to develop guidelines for how Anthropic and other companies should approach the issue. (Fish, who leads the new model welfare research program, told The New York Times that he thinks there's a 15% chance Claude or another AI is conscious today.)
In a blog post on Thursday, Anthropic acknowledged that there's no scientific consensus on whether current or future AI systems could be conscious or have experiences that warrant moral consideration.
"In light of this, we're approaching the topic with humility and with as few assumptions as possible," the company said. "We recognize that we'll need to regularly revise our ideas as the field develops."