Recently, Brett Gallant, Adaptive’s curious leader, used a program called ChatGPT to see if A.I. could create an original bedtime story that he could read to his kids. After logging into the platform, he simply entered… Bedtime story: The Mouse That Saved the Day.
Within seconds the software created a clever tale about a mouse who saved himself and all of his friends from traps that hunters planted in their favorite berry patch.
It was entertaining, and Brett’s children loved the story that “Daddy wrote,” but A.I. plays a role in all of our lives in much bigger ways. Most of A.I.’s contributions to society are positive, but like anything with great power, if it falls into the wrong hands, A.I. can cause irreparable damage.
Before talking about the dark side of A.I., let’s talk about the top benefits of A.I.
7 Benefits of AI That Will Help Humanity
In excerpts from an article by Interesting Engineering, they wrote, “Fears of the potentially darker side of AI are one thing, but they are balanced by some surprising potential benefits that AI and ML could be poised to deliver. Here are some of the most notable examples.
AI has and will continue to enhance automation
Today, AI-augmented robots can easily perform a variety of automated tasks, both inside and outside of the factory, without the need for constant human intervention. AI is poised to be a transformative technology for some applications and tasks across a wide suite of industries.
AI can help eliminate the necessity for humans to perform tedious tasks
Repetitive, tedious tasks in any job are the bane of many human workers around the world. Some are so boring that mistakes are commonplace, as human attention is difficult to sustain during repetitive tasks.
Machines excel in taking care of standardized processing work like data entry, etc., freeing human operatives to concentrate on the more creative and interpersonal aspects of their jobs or lives.
Improving weather forecasting is another way AI can benefit humans
In the past few years, we have seen artificial intelligence and its associated technologies applied to weather and climate forecasting. Called “Climate Informatics”, this field has already proved to be a very fruitful one, enabling greater collaboration between data scientists and climate scientists and bridging the gaps in our understanding.
Next-generation disaster response
AI has demonstrated its utility in building smart disaster responses and providing real-time data on disasters and weather events. This helps save valuable time, enabling disaster response in a more targeted and efficient manner. Once sophisticated enough, it could theoretically offer warnings with enough time to safely evacuate any people in the danger zone. It is also expected that deep learning will soon be integrated with disaster simulations to come up with useful response strategies.
AI could free humans from putting their lives on the line
In the future, militaries and machine intelligence are expected to work in tandem to conduct wars. This would likely mean AI and robots taking on the more dangerous roles in combat instead of putting human beings in the literal firing line.
AI could also help save human lives in other areas like rescue situations. We may see AI-powered firefighters or first responders to help locate and save lives during environmental or industrial catastrophes.
AI is on-call all the time
AI never sleeps. Rather than being an ominous statement, this is actually potentially very beneficial to us all. It will reduce errors, maintain critical services, and enable businesses and other organizations to provide services their users rely on (like helplines, etc.). For educational and research institutions, this could lead to some major breakthroughs in future discoveries that could have wide-ranging benefits for us all.
AI could create new jobs!
The application of AI in businesses will also force the job market to evolve which, with the right preparation, could be a very good thing. From various maintenance and support roles to entirely new careers not yet dreamed of, the widespread adoption of AI could mean a brighter future for all of us.
Fears around AI have surfaced for most new forms of technology. Sometimes, the fears are well-founded, and sometimes not, but either way, the genie of new technology cannot be put back in the bottle. All we can do is learn how to use it wisely and to our advantage. AI could be the best thing since sliced bread… if adopted properly.”
But, what if A.I. is not “adopted properly”?
Brett recently recorded a video about hackers using A.I. to design advanced phishing attempts. The software researches people and businesses so thoroughly that the emails eventually sent to “targets” seem legitimate.
Not only that, the software is designed to sound like a native English speaker. So, gone are the days when you could spot a phishing attempt by its broken English or vague message. The software sifts through everything from social media sites to public records.
By the time a phishing attempt reaches you, A.I. will have designed messages that are so personal, you would never suspect it was written by a stranger. For example, they might mention your favorite restaurant, your boss’s wife’s name, or even ask about your pet’s recent trip to the vet.
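AI-written phishing is hard to spot by its wording alone, but mechanical checks still work. As a rough illustration (the domain names and helper function below are hypothetical, not part of any real mail filter), a filter can verify that a sender’s address actually belongs to a trusted domain, no matter how convincing the message body reads:

```python
import re

def looks_spoofed(from_header: str, trusted_domains: set) -> bool:
    # Pull the address out of a "Display Name <user@domain>" header,
    # then check whether its domain is on the trusted list.
    match = re.search(r'<([^>]+)>', from_header)
    address = match.group(1) if match else from_header
    domain = address.rsplit('@', 1)[-1].lower()
    return domain not in trusted_domains

trusted = {"adaptiveoffice.ca"}

# A legitimate sender passes; a look-alike domain is flagged.
print(looks_spoofed("Brett <brett@adaptiveoffice.ca>", trusted))   # False
print(looks_spoofed("Brett <brett@adaptive0ffice.co>", trusted))   # True
```

Simple checks like this work precisely because they ignore the (now flawless) prose and focus on facts the attacker cannot easily fake.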
Now, let’s talk about the dark side of A.I…
In an article by CNBC, they wrote, “Artificial intelligence is playing a bigger role in cybersecurity, but the bad guys may benefit the most. Organizations can leverage the latest AI-based tools to detect threats better and protect their systems and data resources. But cybercriminals can also use the technology to launch more sophisticated attacks.
The rise in cyberattacks is helping to fuel growth in the market for AI-based security products. A July 2022 report by Acumen Research and Consulting says the global market was $14.9 billion in 2021 and is estimated to reach $133.8 billion by 2030.
An increasing number of attacks such as distributed denial-of-service (DDoS) and data breaches, many of them extremely costly for the impacted organizations, are generating a need for more sophisticated solutions.
Another driver of the market growth was the Covid-19 pandemic and the shift to remote work, according to the report. This forced many companies to put an increased focus on cybersecurity and the use of tools powered by AI to effectively find and stop attacks.
Looking ahead, trends such as the growing adoption of the Internet of Things (IoT) and the rising number of connected devices are expected to fuel market growth, the Acumen report says. The growing use of cloud-based security services could also provide opportunities for new uses of AI for cybersecurity.
Adding to cyber threats
On the other hand, bad actors can also take advantage of AI in several ways. “For instance, AI can be used to identify patterns in computer systems that reveal weaknesses in software or security programs, thus allowing hackers to exploit those newly discovered weaknesses,” said Brian Finch, co-leader of the cybersecurity, data protection & privacy practice at law firm Pillsbury Law.
When combined with stolen personal information or collected open-source data such as social media posts, cybercriminals can use AI to create large numbers of phishing emails to spread malware or collect valuable information.
“Security experts have noted that AI-generated phishing emails actually have higher rates of being opened — tricking possible victims to click on them and thus generate attacks — than manually crafted phishing emails,” Finch said. “AI can also be used to design malware that is constantly changing, to avoid detection by automated defensive tools.”
Constantly changing malware signatures can help attackers evade static defenses such as firewalls and perimeter detection systems. Similarly, AI-powered malware can sit inside a system, collecting data and observing user behavior, up until it’s ready to launch another phase of an attack or send out information it has collected with a relatively low risk of detection.
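To see why constantly changing malware defeats static, signature-based defenses, consider a toy sketch: flipping a single byte in a payload produces a completely different hash, so a blocklist built from known samples no longer matches. (The byte strings below are illustrative stand-ins, not real malware.)

```python
import hashlib

# Two payloads that differ by a single byte: a "known" sample and a
# trivially mutated variant of it.
payload_v1 = b"\x90\x90\x90MALICIOUS_ROUTINE\x90"
payload_v2 = b"\x90\x90\x91MALICIOUS_ROUTINE\x90"  # one byte flipped

sig_v1 = hashlib.sha256(payload_v1).hexdigest()
sig_v2 = hashlib.sha256(payload_v2).hexdigest()

# A static blocklist built from known samples only contains sig_v1...
known_signatures = {sig_v1}

# ...so the mutated variant slips straight past the hash check.
print(sig_v2 in known_signatures)  # False: the signature no longer matches
```

This is why defenders have shifted toward behavioral detection, which watches what code does rather than what it looks like.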
This is partly why companies are moving towards a “zero trust” model, where defenses are set up to constantly challenge and inspect network traffic and applications in order to verify that they are not harmful.”
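The zero-trust idea above can be sketched in a few lines: instead of trusting traffic because it is already “inside” the network, every individual request must present proof that is verified on arrival. This is a minimal illustration using an HMAC-signed message; the secret, message, and function names are hypothetical.

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # hypothetical per-service key

def sign(message: bytes) -> str:
    # Compute an HMAC-SHA256 tag for a request body.
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def handle_request(message: bytes, signature: str) -> str:
    # Zero trust: no request is trusted because of where it came from;
    # every single call must prove itself with a valid signature.
    if not hmac.compare_digest(sign(message), signature):
        return "rejected"
    return "accepted"

good = handle_request(b"read /reports", sign(b"read /reports"))
bad = handle_request(b"read /reports", "forged-signature")
print(good, bad)  # accepted rejected
```

Real zero-trust deployments layer identity, device posture, and continuous inspection on top of this, but the principle is the same: verify every request, every time.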
How Criminals Use Artificial Intelligence To Fuel Cyber Attacks
In an article by Forbes, they wrote, “Poisoning an AI system can be alarmingly easy. Attackers can manipulate the data sets used to train AI, making subtle changes to parameters or crafting scenarios that are carefully designed to avoid raising suspicion while gradually steering AI in the desired direction. Where attackers lack access to datasets, they may employ evasion, tampering with inputs to force mistakes. By modifying input data so that proper identification becomes difficult, attackers can manipulate AI systems into misclassification.
Checking the accuracy of data and inputs may prove impossible, but every effort should be made to harvest data from reputable sources. Try to bake in the identification of anomalies, provide adversarial examples to empower AI to recognize malicious inputs, and isolate AI systems with safeguard mechanisms that make them easy to shut down if things start to go wrong.
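One simple form of the anomaly identification suggested above is a statistical outlier check on incoming training data: values far from the rest of the sample are quarantined for human review before they can poison the model. A minimal sketch, with made-up feature values:

```python
import statistics

# Hypothetical feature values harvested for training; an attacker has
# slipped in one poisoned point to steer the model.
samples = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.03, 0.97, 1.04, 9.7]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)

# Flag anything more than two standard deviations from the mean for
# human review before it ever reaches the training set.
flagged = [x for x in samples if abs(x - mean) > 2 * stdev]
clean = [x for x in samples if abs(x - mean) <= 2 * stdev]

print(flagged)  # [9.7]
```

A real pipeline would use more robust statistics (outliers inflate the mean and standard deviation themselves), but even this crude screen raises the cost of a poisoning attempt.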
A tougher issue to tackle is inference, whereby attackers try to reverse engineer AI systems so they can work out what data was used to train them. This may give them access to sensitive data, paving the way for poisoning or enabling them to replicate an AI system for themselves.
How could AI be weaponized?
Cybercriminals can also employ AI to assist with the scale and effectiveness of their social engineering attacks. AI can learn to spot patterns in behavior, understanding how to convince people that a video, phone call or email is legitimate and then persuading them to compromise networks and hand over sensitive data. All the social techniques cybercriminals currently employ could be improved immeasurably with the help of AI.
There’s also scope to use AI to identify fresh vulnerabilities in networks, devices and applications as they emerge. When AI can rapidly identify opportunities for human hackers, the job of keeping information secure becomes much tougher. Real-time monitoring of all access and activity on networks coupled with swift patching is vital to combat these threats. The best policy in these cases may be to fight fire with fire.”
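Part of the “swift patching” described above is simply knowing what is installed and whether any known-vulnerable release is in the inventory. A toy sketch, with hypothetical package names and version numbers:

```python
# Hypothetical software inventory: package name -> installed version.
installed = {"openssl": "1.1.1k", "nginx": "1.18.0", "bash": "5.1"}

# Internally maintained list of (package, version) pairs with known flaws.
known_vulnerable = {("openssl", "1.1.1k"), ("log4j", "2.14.1")}

# Cross-reference the inventory against the vulnerability list.
needs_patch = [pkg for pkg, ver in installed.items()
               if (pkg, ver) in known_vulnerable]

print(needs_patch)  # ['openssl']
```

Attackers automate exactly this kind of cross-referencing at scale; defenders who run it first, and patch, close the window.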
Scary stuff, but that information is pretty run-of-the-mill. AI can be used in MUCH more harmful ways…
Attacking Artificial Intelligence
In excerpts from an exhaustive article by the Belfer Center (radically condensed here), they wrote, “Unlike traditional cyberattacks that are caused by “bugs” or human mistakes in code, AI attacks are enabled by inherent limitations in the underlying AI algorithms that currently cannot be fixed. Further, AI attacks fundamentally expand the set of entities that can be used to execute cyberattacks.
For the first time, physical objects can now be used for cyberattacks. Data can also be weaponized in new ways using these attacks, requiring changes in the way data is collected, stored, and used.
The terrorist of the 21st century will not necessarily need bombs, uranium, or biological weapons. He will need only electrical tape and a good pair of walking shoes. Placing a few small pieces of tape inconspicuously on a stop sign at an intersection, he can magically transform the stop sign into a green light in the eyes of a self-driving car.
Done at one sleepy intersection, this would cause an accident. Done at the largest intersections in leading metropolitan areas, it would bring the transportation system to its knees. It’s hard to argue with that type of return on a $1.50 investment in tape.
Regardless of their use, AI attacks are different from the cybersecurity problems that have dominated recent headlines. These attacks are not bugs in code that can be fixed—they are inherent in the heart of the AI algorithms.
As a whole, these attacks have the characteristics of a severe cyber threat: they are versatile in form, widely applicable to many domains, and hard to detect. They can take the form of a smudge or squiggle on a physical target, or be hidden within the DNA of an AI system.
They can target assets and systems in the real world, such as making stop signs invisible to driverless cars, and in the cyber world, such as hiding child pornography from detectives seeking to stop its spread.
Perhaps most concerning is that AI attacks can be pernicious and difficult to detect. Attacks can be completely invisible to the human eye. Conversely, they can be grand and hidden in plain sight, made to look like they fit in perfectly with their surroundings.
Compliance programs can encourage stakeholders to adopt a set of best practices in securing their systems and making them more robust against AI attacks. These best practices manage the entire lifecycle of AI systems in the face of AI attacks.
In the planning stage, they will force stakeholders to consider attack risks. In the implementation stage, they will encourage the adoption of IT reforms that will make attacks more difficult to execute. In the mitigation stage for addressing attacks that will inevitably occur, they will require the deployment of previously created attack response plans.
For hundreds of years, humans have been wary of inscribing human knowledge in technical creations. With machine learning and artificial intelligence, we take a step closer to this fear.
It is the fear of the unknown of a creation. And artificial intelligence today presents seismic unknowns that we would be wise to ponder. Artificial intelligence, like Frankenstein’s monster, may appear human, but is decidedly not.
Despite the popular warnings of sentient robots and superhuman artificial intelligence that grow more difficult to avoid with each passing day, artificial intelligence as it is today possesses no knowledge, no thought, and no intelligence. In the future, technical advancements may one day help us to better understand how machines can learn, and even learn how to embed these important qualities in technology. But today is not that day.
The current set of state-of-the-art artificial intelligence algorithms are, at their essence, pattern matchers. They are intrinsically vulnerable to manipulation and poisoning at every stage of their use: from how they learn, what they learn from, and how they operate.”
Needless to say, A.I. in the hands of malicious hackers is a dangerous force to be reckoned with. Cybersecurity for SMBs is no longer an option… if they want to remain in business.
At Adaptive Office Solutions, cybersecurity is our specialty. We keep cybercrimes at bay by using analysis, forensics, and reverse engineering to prevent malware attempts and patch vulnerability issues. By making an investment in multilayered cybersecurity, you can leverage our expertise to boost your defenses, mitigate risks, and protect your data with next-gen IT security solutions.
Every single device that connects to the internet poses a cyber security threat, including that innocent-looking smartwatch you’re wearing. Adaptive’s wide range of experience and certifications fills the gaps in your business’s IT infrastructure and dramatically increases the effectiveness of your cybersecurity posture.
Using our proactive cybersecurity management, cutting-edge network security tools, and comprehensive business IT solutions you can lower your costs through systems that are running at their prime, creating greater efficiency and preventing data loss and costly downtime. With Adaptive Office Solutions by your side, we’ll help you navigate the complexities of cybersecurity so you can achieve business success without worrying about online threats.
To schedule a Cyber Security Risk Review, call the Adaptive Office Solutions’ hotline at 506-624-9480 or email us at helpdesk@adaptiveoffice.ca