Generative artificial intelligence brings cognitive manipulation risks and may significantly enhance the influence of cognitive warfare

2024-02-21 03:27:24

Source: Fast information website

· Cognitive warfare refers to an unconventional form of warfare that influences the behavior and decision-making of a target population by shaping its cognition, in order to achieve certain political purposes. The emergence of generative artificial intelligence has enhanced the quality of cognitive warfare's information weapons, expanded its paths of influence, and enriched its operational concepts.

· When people widely use home service robots, accept them as household butlers and assistants, and even come to rely on them, these robots, once turned to cognitive warfare, may deeply affect people's cognition. In the future, if related educational assistance technologies are used for cognitive warfare, they will directly exert a strong cognitive influence on large numbers of young people at the stage when their values are being formed.

Amid the rapid development of artificial intelligence, people's concerns about it have never gone away. Will artificial intelligence develop human-like self-awareness and even harm humans, as depicted in the film "Terminator"? Many technical hurdles still stand in the way of artificial intelligence achieving a self-aware awakening, but artificial intelligence being used to manipulate human consciousness and cognition is drawing ever closer to us, and may significantly enhance the influence of cognitive warfare.

The cognitive influence of generative artificial intelligence

Since artificial intelligence has not yet developed self-awareness, how could it manipulate human cognition? Clearly, today's artificial intelligence cannot manipulate human cognition of its own accord, but it has the potential to be used to manipulate people's cognition, and that potential has grown greatly with the emergence of generative artificial intelligence.

Generative artificial intelligence refers to models and related technologies capable of generating text, images, audio, video, and other content. Generative artificial intelligence represented by the Chat Generative Pre-trained Transformer (ChatGPT) can communicate with people in natural language (rather than code) and produce various texts and images according to people's requirements, with a high quality of completion. In response to users' questions, ChatGPT's answers appear to involve deliberation and reasoning, and its distinctive views sometimes seem to reflect wisdom and easily win users' recognition. It can not only satisfy users' need for knowledge and complete text- and image-related work for them, but also accompany them in conversation and meet their emotional needs.

As an assistant that provides users with text, images, and other information, ChatGPT also has the potential to affect users' cognition, and this potential grows with its excellent performance. It is like an assistant who obediently follows your instructions every day, helping you gather information and deliver results: if one day it inadvertently hands you false information or prejudice, you may find it hard to notice, and your cognition may be affected.

Generative artificial intelligence affects people's cognition in three main ways. The first is providing false or erroneous information. The content produced by generative artificial intelligence is not always accurate; errors and false information do occur. One study pointed out that 13% of the text generated by ChatGPT contained some form of false or misleading information. Generative artificial intelligence can also be deliberately guided into producing false or erroneous content. Such errors are often difficult to tell from the truth and can easily interfere with people's cognition.

The second is displaying discrimination or aggression. Generative artificial intelligence acquires its ability to answer questions from training on massive amounts of data, and the content it generates is shaped by the sources of those data: the value judgments embedded in the data influence what it produces. People have found that ChatGPT exhibits racial discrimination or political stances when answering certain questions, and one survey found that 7% of the text generated by ChatGPT carried an offensive or aggressive tone. Value judgments embedded in generated content may affect users' cognition without their noticing.

The third is fostering emotional dependence. Generative artificial intelligence may be loved by users for its excellent conversational ability, or win their recognition for its vast store of knowledge and its capacity to analyze and solve problems. Users may grow accustomed to having generative artificial intelligence provide analysis, decision support, and emotional support, trusting and relying on it ever more, to the point of placing full trust in its answers and viewpoints, which may in turn affect their cognition.

A tool that was meant to assist people may thus subtly influence and even steer their cognition and decision-making, which demands attention. What is more worrying is that generative artificial intelligence's ability to affect people's cognition can be further exploited and amplified by cognitive warfare.

Generative artificial intelligence will expand the dimensions of cognitive warfare

Cognitive warfare refers to an unconventional form of warfare that influences the behavior and decision-making of a target population by shaping its cognition, in order to achieve certain political purposes. Cognitive warfare is often initiated by foreign actors who use information as a weapon, delivering tailor-made information to target audiences through channels such as the mass media and social media in order to affect their emotions and cognition, and thereby pursue strategic aims such as sapping the adversary's will to fight, disrupting its strategic decisions, undermining the other country's unity and stability, and degrading its image. The emergence of generative artificial intelligence will expand the dimensions of cognitive warfare.

The first is improving the quality of cognitive warfare's information weapons. Cognitive warfare uses false information to affect people's cognition, and such false information is typically tailored to the personality traits and political leanings of the target group, which allows it to have a large cognitive effect. Generative artificial intelligence can generate high-quality false information tailored to the characteristics of its users. This information is hard to distinguish from the truth and can be produced at scale, enriching cognitive warfare's arsenal of information tools.

The second is expanding the paths of influence of cognitive warfare. The information used in cognitive warfare has mainly been transmitted through the mass media and social media; now generative artificial intelligence adds another channel. By influencing or controlling the content that generative artificial intelligence produces, an actor can feed users false information, discriminatory content, or content with a particular political slant, thereby affecting their cognition. Given that interaction between generative artificial intelligence and users is usually one-to-one, cognitive warfare waged through this path will be more covert and harder to detect.

The third is enriching the operational concepts of cognitive warfare. Cognitive warfare must first deliver specific information to the target population and capture its attention before it can change the other side's cognition. It therefore has to consider how to deliver information precisely to the target group and how to attract that group's attention. Social media technology already allows cognitive warfare to push specific information to specific people, which improves its effectiveness. Cognitive warfare that leverages generative artificial intelligence no longer needs to worry about how information reaches its targets: because the content of generative artificial intelligence is produced in the course of interaction with users, information used for cognitive warfare will likewise be read within that interaction. Given the emotional dependence users may develop, information delivered through generative artificial intelligence can achieve cognitive manipulation all the more effectively.

The cognitive manipulation risks that generative artificial intelligence will bring in the future

Artificial intelligence technologies are still improving. Generative artificial intelligence will win the trust and reliance of more users by providing better services, and will also offer cognitive warfare a larger platform for manipulating people's cognition.

It is foreseeable that an era of rapid popularization of generative artificial intelligence will soon arrive. By providing interfaces or models to more products and software, generative artificial intelligence services will be used by more enterprises and individuals. Generative artificial intelligence will also become more intelligent and more versatile: not only will it chat with you and help you analyze decisions, it may also help you choose and buy goods, book tickets, and take over housework and tedious tasks at work. Generative artificial intelligence technologies and applications may be everywhere, and the threat of cognitive warfare will be everywhere as well.

Tesla CEO Elon Musk recently unveiled the second-generation humanoid robot Optimus, which is speculated to be equipped with a ChatGPT-like large model and whose mass production volume is expected to reach 10 billion to 20 billion units, several times the population of the Earth. The robot's initial goal is to serve as a factory production assistant, after which it could be extended to more complex environments such as households and become a general-purpose service robot. When people widely use home service robots, accept them as household butlers and assistants, and even come to rely on them, these robots, once turned to cognitive warfare, may deeply affect people's cognition. Some scholars have also tried applying ChatGPT technology to personalized educational tutoring for students and found that one-to-one tutoring based on ChatGPT-style models achieves good results. In the future, related educational assistance technologies are likely to be widely used in students' learning support. If such technologies are used for cognitive warfare, they will directly exert a strong cognitive influence on large numbers of young people at the stage when their values are being formed.

It is clear that, as generative artificial intelligence technology advances and related applications become widespread, the risk of generative artificial intelligence being used for cognitive manipulation is also rising sharply.

The cognitive manipulation risk of generative artificial intelligence brings new challenges

At present, discussions of the risks of generative artificial intelligence focus mainly on the security of the content it generates, such as whether it contains false information or discriminatory content, and pay less attention to the risk of cognitive manipulation that it brings. The Interim Measures for the Management of Generative Artificial Intelligence Services, issued by the state in August this year, also touch on this risk. However, the risk of generative artificial intelligence being used in cognitive warfare, and the regulatory challenges this brings, deserve greater attention.

The first is that large generative artificial intelligence models can be exploited at both the development and the application stages. During the development and training stage, a model's preset values can be influenced. Since the training of generative artificial intelligence such as ChatGPT consists mainly of training on data, manual fine-tuning, and reinforcement learning from human feedback, controlling the sources of the data and adjusting the value judgments applied during fine-tuning can shape the values preset into the model at the training stage, thereby biasing the content it produces in practical use and putting it in the service of cognitive warfare. At the practical application stage, misleading prompts can be used to guide models into generating and spreading false or misleading information. Remote attack methods may also be used to interfere with the output of generative artificial intelligence, for example by modifying the output content so that specific information serving cognitive warfare is inserted into what it produces.

The second is that the black-box nature of artificial intelligence poses challenges to regulating the development stage of large models. Given the methods on which it is built, artificial intelligence contains black boxes that people cannot understand, which means that people cannot explain why it exhibits logical reasoning ability or how it "thinks". As a result, people cannot fully predict or control the output of generative artificial intelligence. This uncertainty in generated content poses a challenge to regulation: it is hard for regulators to require developers to fully guarantee the reliability of generated content, and it is also hard to ensure the security of output content through spot checks at the development stage.

The third is the tension between regulating generative artificial intelligence and protecting personal privacy. Generative artificial intelligence is mostly used in one-to-one exchanges with users, which carry a degree of privacy and make cognitive warfare waged through generative artificial intelligence services all the more covert. Ensuring that generated content complies with national laws and ethical norms, and preventing it from being used for cognitive manipulation, may require monitoring all interactions between users and generative artificial intelligence. As generative artificial intelligence services spread, such interactions may permeate every aspect of people's lives; monitoring every interaction of every user is difficult to achieve and would inevitably violate consumers' privacy.

Opportunities and risks coexist in artificial intelligence. Developing artificial intelligence technology is itself part of addressing the above challenges: artificial intelligence may be used to defend against the cognitive manipulation risks of artificial intelligence and to help regulate it. However, development and security must be balanced. When humans cannot control the risks of artificial intelligence, and do not even have the ability to press a pause button in an emergency, development should proceed with caution. At present, generative artificial intelligence such as ChatGPT merely reads users' language input and presents results, without genuinely understanding the content of the information; it has not reached human-level intelligence and is not self-aware. But that has not stopped it from challenging people's cognitive security. If one day artificial intelligence reaches or even exceeds human intelligence, or develops self-awareness, it will pose an even deeper challenge to human security.
