The Negative Effects of Artificial Intelligence: The Harms of AI



It's no secret that technology evolves rapidly these days, with new multipurpose tools emerging every day to accomplish countless tasks for human beings.



One such tool is a rapidly advancing technology that, until a few decades ago, was a distant dream confined to science fiction, but that today has become an increasingly unsettling reality: artificial intelligence.


As John McCarthy, the father of artificial intelligence, put it in a 2004 paper, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."


Today's artificial intelligence allows us to process large amounts of data quickly. These computations help us understand changes in market trends, public opinion, and the weather. The use of artificial intelligence in robotics has already led to major advances in medicine, telecommunications, and home automation. These AI applications are already changing our daily lives, our homes, and our interactions with other people. Existing AI is often described as narrow or weak because only a fraction of its potential has been exploited.


This technology is still under development, and while completing it successfully would be one of the greatest achievements in human history, the result won't necessarily be in our favor. While AI can be used to end wars or eradicate disease, it can also create automated killing machines, increase unemployment, or facilitate terrorist attacks.


This article highlights the biggest risks and downsides surrounding artificial intelligence, many of which people fear are already becoming reality. These negative impacts include unemployment, bias, terrorism, privacy risks, and threats to free expression, each discussed in detail below.


1. Artificial Intelligence and Unemployment:

According to an OECD report, the advent of AI could affect 14% of jobs globally, despite the currently slow pace of AI adoption.


Some occupations are more exposed to AI than others, the report notes. It created categories to account for this effect, ranging from "least exposed" to "most exposed". Jobs that fall into the most exposed category will not necessarily be replaced by AI, but they will be the most affected.


The report also highlights a number of occupations that fall into the high-exposure category, mostly jobs requiring highly skilled people in technical positions, such as clinical laboratory technicians, optometrists, and chemical engineers.


However, bringing AI into these jobs has benefits as well: workers in highly exposed occupations are already seeing significant changes in the way they work, changes that have eased manual tasks and been accompanied by higher wages and more training. It has also been found that while AI is far from replacing the workforce at the moment, it does increase worker productivity. The research presented by the OECD therefore remains divided on the impact of AI on employment and wages.


To sum up, the entire world remains uncertain about the potential impact of more advanced artificial intelligence on the supply of and demand for jobs. For some workers this may mean higher productivity and higher wages; for others, whose work can be automated with little need for human intervention, the risk of losing their jobs is real.


2. Artificial Intelligence and Bias:

In general, bias is discriminatory behavior or prejudice found in humans, and since AI is created by humans, it is not free of bias. While programming, a developer may build an algorithm around personal biases, intentionally or not; either way, some degree of that bias shows up in the algorithm's behavior, making it unfair.


Bias can enter AI systems through their algorithms in a number of ways. AI systems learn from training data and make decisions based on that learning, and this data may encode biased human judgments or reflect historical and social inequalities. Bias can also be introduced directly by the people who program the systems, based on their own prejudices.
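To make this concrete, here is a minimal, purely illustrative Python sketch: a toy classifier is trained on synthetic "historical hiring" data in which one group was held to a higher bar, and the model faithfully reproduces that inequality. The scenario, feature names, and numbers are all invented for illustration.

```python
# Minimal sketch: biased training data produces a biased model.
# Synthetic "historical hiring" data; everything here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(0, 1, n)          # identically distributed in both groups
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B

# Historical labels: equally skilled candidates from group B were
# held to a higher bar, so the labels themselves encode the bias.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train on the biased history, including the group attribute.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model learns and reproduces the historical inequality:
# predictions differ by group even though skill does not.
for g in (0, 1):
    preds = model.predict(X)[group == g]
    print(f"group {g}: predicted hire rate = {preds.mean():.2f}")
```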


Likewise, a lack of representation can lead to bias: AI will not reflect the experiences of people who were absent from its data and its development, and it will therefore favor certain demographic groups over others, that is, it will not be inclusive.


None of this rules out the possibility of developing less biased AI systems. It is indeed possible to build a system that makes neutral decisions based on data, but it will only be as neutral as the quality of the data it receives, so artificial intelligence should not be expected to become completely neutral any time soon.


Various measures can be implemented to reduce bias in AI systems, such as:


Choosing representative, complete datasets.


Building diverse, more inclusive development teams, resulting in more inclusive algorithms with less exclusion bias.


Raising awareness of artificial intelligence, how it works, and how it can unintentionally follow biased patterns; once such a pattern is noticed, it can be corrected.


Requiring organizations that create algorithms to be more transparent about the methods they use to collect and encode data, so that the source of any bias can be identified.


In addition to the above, another practice that can be employed is a test known as a "blind taste test". This means programmers deliberately withhold information they already know, such as sensitive attributes, allowing the AI to make its own decisions without that signal. Beyond that, there are many other technical tools aimed at reducing bias in learning models and algorithms.
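As a rough illustration of the "blind taste test" idea, the sketch below continues the hypothetical hiring example from earlier: the protected attribute is withheld from the model, and a simple demographic-parity check then compares outcomes across groups. This is a sketch under invented assumptions, not a complete fairness toolkit; real systems also have to deal with proxy features and biased labels.

```python
# Minimal sketch of a "blind taste test": train without the protected
# attribute, then audit predictions per group. Data is synthetic, as above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0  # biased history

# "Blind" model: the group column is deliberately withheld.
blind = LogisticRegression().fit(skill.reshape(-1, 1), hired)
pred = blind.predict(skill.reshape(-1, 1))

# Demographic-parity audit: selection rates per group should now be
# close, since the only remaining feature is distributed identically.
rates = [pred[group == g].mean() for g in (0, 1)]
print(f"hire rates by group: {rates[0]:.2f} vs {rates[1]:.2f}")

# Caveat: blinding removes the direct use of the attribute, but any
# feature correlated with it (a proxy) can reintroduce the bias.
```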


3. Artificial Intelligence and Terrorism:

For terrorism in particular, the internet and social networks have become indispensable, whether for recruiting followers, buying weapons, spreading messages of hate, distributing training material, and more. Technology is also used as a weapon to identify and attack specific target groups.


Terrorist groups resort to artificial intelligence because it increases their reach and speed. This enables them to expand their activities, and it has proved useful for recruitment purposes as well.


The United Nations Office of Counter-Terrorism has released a report on the use of artificial intelligence by terrorist groups. Terrorists have long opted to embrace new technologies early, especially while those technologies are still only lightly regulated or controlled, the report noted. So while new technological tools are developed to help us evolve and sustain our existence as humans, terrorists continue to seek ways to manipulate them and spread terror. They have repeatedly discovered how new tools can be turned into weapons, and artificial intelligence is no exception.


The list of potentially harmful uses of AI is long and extensive, including:


Enhancing network capabilities for denial-of-service attacks.


Exploiting artificial intelligence to develop malware.


Integrating artificial intelligence into ransomware.


Making passwords easier to guess.


Cracking and bypassing CAPTCHA software.


Using self-driving vehicles in terrorist attacks.


Using drones equipped with facial recognition.


Developing gene-targeted biological weapons.


On the other hand, the possibility of using artificial intelligence responsibly to combat terrorism and extremism is not excluded. AI is particularly helpful in counter-terrorism operations because of its ability to predict terrorist activity: the same techniques these terrorist groups use to carry out their attacks can also be used to track them, preventing their pre-planned actions to a certain extent.


Therefore, it can be said that artificial intelligence has great significance for terrorism today and in the future, whether positive or negative. Everything will depend on its development and implementation, on who has access to these systems, and on how effectively and efficiently governments regulate their use.

4. Artificial Intelligence and Privacy:

As the range of devices and applications involving AI increases, so does the number of problems and risks it can cause. One of the biggest concerns with artificial intelligence is privacy. People can no longer rid themselves of their dependence on artificial intelligence; all over the world, the spread of technology has exposed us to AI.


Artificial intelligence has added a "Big Brother" form of surveillance, constantly tracking us and keeping track of the data we consume. For example, when we search for shoes on an e-commerce site and then visit a social media app, we immediately see an ad for those shoes on that platform. AI therefore also plays a role in shaping our decisions. It is penetrating our personal spaces as well: smart home speakers such as Alexa and Google Home operate on voice commands and learn what a person does each day, and phones use iris recognition and other biometric data. AI thus has the potential to access all of our personal details.
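As a rough sketch of the retargeting mechanism just described, the hypothetical Python snippet below shows how a shared tracking identifier lets browsing activity on one site drive the ads shown on another. The identifiers and functions are invented for illustration and do not correspond to any real ad platform's API.

```python
# Hypothetical sketch of cross-site ad retargeting: a tracker records
# browsing events under a shared user ID, and a second platform looks
# the profile up when choosing an ad. All names and data are invented.
from collections import defaultdict

browsing_log = defaultdict(list)  # user_id -> list of (site, product tag)

def track(user_id: str, site: str, tag: str) -> None:
    """Called by a tracking pixel embedded in the e-commerce site."""
    browsing_log[user_id].append((site, tag))

def pick_ad(user_id: str) -> str:
    """Called by the social platform when rendering the user's feed."""
    tags = [tag for _, tag in browsing_log[user_id]]
    return f"ad:{tags[-1]}" if tags else "ad:generic"

track("user-42", "shoe-shop.example", "running-shoes")
print(pick_ad("user-42"))  # -> ad:running-shoes, shown on a different site
```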


Our knowledge of the full range of capabilities and risks involved in AI demands the highest standards of regulation and scrutiny, always prioritizing applications that protect human rights, including the right to privacy.


In the context of hacking, artificial intelligence is also a double-edged sword. It can help solve cybersecurity problems, improve antivirus tools, facilitate attack detection, and automate network and system analysis and scanning, but it can also be a very useful tool for hackers.


How?

Artificial intelligence helps hackers get smarter about their crimes. For example, AI is now used to hide malicious code, meaning the malware does not attack immediately after the application is downloaded, but only after a certain time has passed or after the application has been used by a certain number of people. Until then, the malware remains hidden, protected by artificial intelligence.


Not only does AI allow malware to hide and go undetected; it can also be used to create malware that mimics software from trusted sources and has the ability to replicate itself. It can likewise be used to create fake identities, such as Instagram bots generated by artificial intelligence.


Finally, artificial intelligence will be another factor in the technology race between hackers and the programmers of cybersecurity systems, elevating that race to a higher level than it is today and giving both sides the possibility of developing ever more powerful new tools.


5. Artificial Intelligence and Freedom of Opinion and Expression:

Today, most of the information we get comes online or through social media. These platforms can allow moderators or interested parties to control the content we consume, and as a result we are sometimes unable to make informed decisions because of the biased narratives the platforms provide, which undermines free speech. While these issues may seem unrelated at first glance, a closer look reveals a close relationship between AI and freedom of expression.


Today, technological tools shape the way people interact, access information, and exercise freedom of expression. Through search engines and social networks, AI influences the way people carry out all of these activities.


Likewise, when it comes to the information on which people's opinions are based, artificial intelligence and algorithms heavily influence the news supply, to some extent shaping the opinions and decisions of entire communities according to the wishes of their programmers.


This is why there is an inherent risk that a small number of companies will use AI systems to manipulate the information disseminated via news sites, email, social networks, and so on.


This is troubling, especially when we are talking about AI systems with little transparency, which can selectively exclude or amplify important or sensitive information and so cloud a community's decision-making process. And the community being manipulated can be as large as a city, an entire country, or a continent.
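To illustrate the mechanism behind this concern, here is a minimal, hypothetical sketch of a feed that ranks items purely by predicted engagement; the headlines and scores are invented. When engagement is the only ranking signal, polarizing items rise to the top regardless of their accuracy.

```python
# Minimal sketch: a news feed ranked purely by predicted engagement.
# Items and scores are hypothetical; real rankers are far more complex,
# but the incentive structure is the point being illustrated.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    predicted_engagement: float  # clicks/shares the model expects
    polarizing: bool

feed = [
    Item("City council approves annual budget", 0.02, False),
    Item("Outrage: THEY are ruining everything!", 0.31, True),
    Item("Local library extends opening hours", 0.01, False),
    Item("You won't believe what they're hiding", 0.27, True),
]

# Rank by engagement alone: nothing rewards accuracy or balance.
ranked = sorted(feed, key=lambda i: i.predicted_engagement, reverse=True)

for item in ranked:
    tag = "POLARIZING" if item.polarizing else "neutral"
    print(f"{item.predicted_engagement:.2f} [{tag}] {item.headline}")
```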


Case Study: Social Media Platforms and Artificial Intelligence

As this article highlights, whether AI's effects are positive depends on the intentions and methods of its developers and users, whether they are terrorist organizations or human rights NGOs.


The same applies to the way social media networks deal with the hate or misinformation that artificial intelligence systems can spread. A recurring subject of such complaints is Facebook, run by the company now rebranded as Meta, whose spokespeople have repeatedly defended its algorithms, saying they help address the issues cited.


In contrast, the foundation The London Story published findings on how Facebook's business model contributed to the COVID-19 "infodemic" in the Netherlands, just one example among the many criticisms that have been published.


According to the study, Facebook contributes to these problems at the very least by failing to intervene effectively against misinformation and hateful content. In the wake of the COVID-19 pandemic, Facebook introduced a policy to reduce misinformation; however, it failed to enforce that policy, allowing misinformation to spread widely on the platform. This has affected not only people's lives: election campaigns in the Netherlands are also reported to have been severely affected, with fake news and disinformation among the factors that helped far-right politicians win votes.


The report also highlights how misinformation about COVID-19 fueled distrust of the Dutch government's COVID-19 measures, leading to anti-vaccine and anti-mask campaigns and to COVID-19 itself being dismissed as a "fraud".


Based on the results of the study, the problem does not appear to be a failure or weakness of the artificial intelligence with which Meta recognizes these messages, and which the company expressly promises will remove and reduce them. Rather, it is in the company's interest to let these messages circulate: in addition to manipulating public opinion, they generate substantial revenue, as such content does for most social media companies these days.


That, at least, is the intent one can infer from its algorithm, which amplifies polarizing narratives and does not filter out misinformation. In Facebook's case, then, the AI "failed" simply because the company was not truly committed to fighting hate and misinformation. Such "failures" are very common in the moderation of content on social networks, especially when that content is in a language other than English.


In Conclusion

We have yet to understand the exact risks associated with artificial intelligence and how it will affect our lives, and more work needs to be done on its regulation and use by governments, private organizations, and individuals. So far, public opinion and the regulatory response have remained relatively weak.


While innovation is a good thing, it turns out that it can sometimes be detrimental as well. Artificial intelligence does not exist by itself: someone has to program it, train it, and use it, and we can shape it and control it to some degree. It can be a positive force in the face of economic crises, climate change, and pandemics; on the other hand, these programs can be very dangerous if they are not designed, trained, or used properly.


Artificial intelligence has many advantages and disadvantages, which at times makes it a controversial issue requiring thorough analysis from both utilitarian and ethical perspectives.


For now, it seems clear that the benefits of using artificial intelligence outweigh the risks. It is equally clear, however, that engineers and designers, and society at large, must take the aforementioned shortcomings into account in order to compensate for them with the depth and rigor the subject matter requires. AI is already here, and little by little it is permeating more and more areas of our daily lives.


Examining the ethical constraints on the design of intelligent systems is an important step, but it is not sufficient. If intelligent systems are indeed to be welcomed into the "community of human agents", we should also think about how our social relationships with them are to be ethically regulated.


Making ethical choices is not easy. Technology recreates, in systems composed of thousands or even millions of intelligent programs operating on a global scale, one of the classic dilemmas of coexistence in human society.


To reduce the ethical risks of AI, we need to take a more active role in its development. These challenges can be answered through a humanism of technology, one that places humans at the center of the endeavor. As contradictory as it may sound, if we want to place humans at the center of our concerns about AI, we cannot avoid designing a robotics policy that guides our development in a way that is at once competitive, rigorous, and inclusive. Such a policy would shorten not only the distance between man and machine, but also the distance between man and man.
