10 Threats of Artificial Intelligence that We Need to Be Aware of in 2021 | by Ashok Sharma | Jan, 2021
There is no denying that AI is the future. It will unlock doors to new vistas and make it possible to do things that once seemed impossible.
We will talk to our cars, we will argue with our virtual assistants, and even break new records in sports with the help of AI coaches.
The truth is we all are enchanted by AI, the amazing things it can do, and the real-world problems AI can solve.
But there is also another dark side to AI which we cannot ignore. Dangers of AI are no longer movie fiction. They are real and right in front of us. If we do not become aware now, it will be too late.
In this blog, I will walk you through 10 such critical AI threats that we need to become aware of this year and in the future. Let’s begin:
The idea behind AI was to create systems free from the hate, racism, and bias that grip us humans.
But what if AI systems start promoting racism, hate, and bias themselves?
Something like this happened back in 2016, when a Microsoft AI chatbot called Tay went full Nazi on Twitter, tweeting Nazi sentiments and racial epithets. The situation escalated so quickly that Microsoft had to take the bot offline within 16 hours.
It turned out that the AI was trying to mimic the behavior of other human users who were deliberately provoking it.
It is not the only case of AI amplifying bias. There are many examples of AI systems that treat ethnic minorities worse than the white population.
This raises a serious concern: these poor human values can be passed on to AI systems as well, and that will only make things worse.
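The Tay episode can be illustrated with a toy sketch (hypothetical code, not Tay's actual design): a "bot" that learns nothing but word frequencies from user messages will faithfully parrot back whatever its users feed it, biases included.

```python
from collections import Counter

class EchoBot:
    """Toy 'learning' chatbot: it tracks which words it has seen most
    often and parrots them back. It has no values of its own; its
    output is purely a reflection of its training data."""

    def __init__(self):
        self.word_counts = Counter()

    def learn(self, message: str) -> None:
        # "Training" is just counting the words users send it.
        self.word_counts.update(message.lower().split())

    def reply(self, n: int = 3) -> str:
        # Reply using the n words it has absorbed most often.
        return " ".join(word for word, _ in self.word_counts.most_common(n))

bot = EchoBot()
for msg in ["AI is helpful", "AI is hateful", "AI is hateful", "humans are hateful"]:
    bot.learn(msg)

print(bot.reply())  # the bot's "opinions" simply mirror its input
```

Once provocative messages dominate the input, they dominate the output; the bot never decided to be hateful, it just reflected its users.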
A few days back, I was reading a post on LinkedIn about how AI is going to replace many jobs in the future.
"Some 400–800 million jobs will be displaced by AI by the year 2030, and 375 million people will have to look for other career options," the post claimed.
Hence, AI automating jobs is a serious concern that can lead to rising unemployment, a demoralized youth, and, at worst, violence.
AI is even making the job hunt itself harder. Many applications are rejected by applicant tracking systems before a human ever sees them.
Recruiters call it filtering for ideal candidates, but at times even the most promising candidates fail to make the shortlist. Both job seekers and recruiters suffer as a result.
The Facebook–Cambridge Analytica scandal surrounding the 2016 US election shook us to the core. For the first time, we understood that our private details are not private anymore.
We generate about 2.5 million terabytes (2.5 exabytes) of data every day. Ever wondered how much damage an AI system with access to that much data could do?
Companies can influence our behavior and make us do what they want. They can make us buy their products, vote for people who align with their purpose, and support decisions that are in their favor.
This way, our decisions will no longer be our own. We will end up as puppets in the hands of organizations that are willing to stoop to any level for profit.
One of the biggest achievements of AI is that it can create content on its own. There are apps on the market that generate faces, compose text, write tweets, and clone voices, which can help a lot in advertising.
However, people with malicious intent are using the same software to spread fake news, rumors, and blackmail others.
So it is not very difficult for someone to take your photo, create a fake video, and blackmail you with the threat of releasing it to your friends. There is even a term for this: face-swap video blackmail.
Even worse, they may actually release the video. Then what? Your hard-earned reputation is gone in minutes, and the worst part is that hardly anyone will bother to check whether the video is real.
Celebrities and politicians are already falling prey to this threat, and soon it will haunt people like you and me.
Even the most blissful things turn into a curse when they fall into wrong hands, and no analogy can describe this better than AI.
Artificial Intelligence (AI) not only took cybersecurity to the next level but also opened doors to new threats.
We are no longer facing only old-fashioned, human-driven attacks with commodity malware. Cyber-terrorists have evolved: they are now leveraging AI to attack and shut down systems.
Thanks to AI, cracking secured systems, breaking into encrypted chats, and hacking highly confidential websites is becoming far easier for cybercriminals.
Can you even imagine how much damage it can do?
Facebook, Microsoft, Google, Apple, Alibaba, Tencent, Baidu, and Amazon: these eight tech giants are among the most powerful companies in the world, with the reach and financial capacity to take AI to a whole new level.
Such a powerful technology could thus end up in the hands of a few players who can use it however they want, to serve their own interests. This puts us at risk of data monopoly and concentrated control.
Apart from making our life easy breezy by simplifying complex tasks, AI is also making us lose touch with things that made us human.
We no longer write with our hands, read books, go on little walks, or spend time with nature. Instead of talking to family members, we spend hours looking at our phones and smiling like idiots.
It is like we are losing real-life skills and becoming increasingly dependent on technology, and this is not a good thing. It will deprive us of everything that makes us human and turn us into slaves of the very technology that was supposed to make our lives better.
“Mark my words — AI is far more dangerous than nukes,” Elon Musk said at the South by Southwest (SXSW) conference in 2018, warning about the impact of autonomous weapons controlled by AI.
Along with 115 other experts, Musk pointed out the potential threats of autonomous weapons and the level of damage they can do.
What he said makes complete sense. Technology has evolved and the worst part is that it is easily accessible. You can buy a high-quality drone with a camera that you control from your phone, install facial-recognition software on the drone, and use it to track and hunt the specific person you hold a grudge against.
Would we want that? To make it so easy to take someone’s life, all while sitting on a couch controlling a drone from miles away?
Even worse, what if the AI takes it upon itself to make life-and-death decisions? The result could be a massacre, something we would never want.
Here is something I find myself wondering the most:
What if one day we build a system that is far more intelligent than us humans and it decides to take over the world?
I am not the only one raising this concern, but most of the time it is dismissed as fear fed by the movies.
But it is time we started taking this threat seriously. AI is getting more powerful than ever. We are developing systems that defeat world chess champions, building robots that behave like humans, and creating virtual assistants that do a better job than a personal secretary.
How long do you think it will take for someone to create a system that is more intelligent than humans? When that happens, the consequences could be drastic.
The last thing we would want is a Terminator-like scenario in which machines have taken over the world and killer robots are roaming freely down the street.
“Who is responsible for the accidents caused by self-driving cars?” Experts often find themselves wondering.
It is one of the main reasons self-driving cars faced so much backlash from legal authorities when companies decided to bring them to market. Regulators could not decide who should be held accountable when accidents happen: the car owner or the company that designed the self-driving system.
It is not just self-driving cars. Authorities have similar concerns about scenarios in which an AI goes rogue after learning on its own and drawing its own conclusions.
Should they blame the company or the AI itself? There is still no reliable answer.
- We should feed the right information to AI systems, because they behave according to the information we feed them.
- Enterprises should automate wisely. Automating routine, mundane tasks makes sense, but some areas will still need human involvement.
- Consider using VPN software. It helps prevent misuse of your data by keeping your details confidential on the web.
- Employee training is a must. Employees should know how to use AI and what to keep in mind to prevent any wrongdoing.
- Recruiters can use applicant tracking systems but should avoid relying on them entirely, so they do not lose deserving candidates just because an AI did not deem them fit.
- People should keep enhancing their skills; this will keep them in demand. Remember: while AI can replace labor, it cannot replace skills.
- There should be strict laws regarding what companies are doing with the data we are sharing online.
- Social media channels should use AI to keep a check on accounts that spread hate, racism, and bias. Instagram already shadowbans such accounts and hashtags; in fact, this is one of the main reasons hashtags sometimes stop working on social media accounts.
- We need to be mindful of how much information we share online so that it cannot be misused.
- We need to keep a strict check on AI systems that can create content. There should be clear guidelines preventing the use of someone’s private information without their consent, and those who break them should be punished.
- Cybersecurity has to evolve. Only AI can overcome the threat of AI attacks. Businesses have to start thinking about taking advanced security measures like cybersecurity mesh.
- Authorities must draw clear guidelines on how companies can leverage AI. Some governments have already acted, banning apps that used AI for spying and other harmful purposes.
- We should use AI where it improves our lives, but not become overly dependent on it. We should not lose touch with the skills that make us human, so that we can cope when technology is not around. We need to find the balance.
- Companies and scientists developing AI must not lose their conscience, as it will play a critical role in deciding how an AI system behaves.
- Companies should take responsibility and ensure the use of AI happens for the good of society.
I would like to quote a few words from a video I watched recently on YouTube:
“AI machines do not have a conscience. The way they act is just the reflection of human consciousness.”
Hence, it is essential that we humans do not lose our conscience when developing these AI systems. Instead of passing down our racism, greed, bias, and bigotry, we should pass on the right information.
After all, we want AI to become our great partner, not the most evil version of ourselves. That is the only thing that will keep us on the right path and ensure AI does not become a threat to our existence.
And the best time to act is now, because tomorrow will be too late. What are your thoughts?