Hackers could one day use novel visual techniques to manipulate what AI sees – RisingAttacK impacts ‘most widely used AI computer vision systems’


- RisingAttacK quietly alters key features, tricking AI without changing the image’s appearance
- Vision systems in self-driving cars could be blinded by nearly invisible image modifications
- The attack fools top AI models used in cars, cameras, and healthcare diagnostics
Artificial intelligence is becoming more integrated into technologies that rely on visual recognition, from autonomous vehicles to medical imaging – but this increased utility also raises potential security risks, experts have warned.
A new method called RisingAttacK could threaten the reliability of these systems by silently manipulating what an AI model sees.
This could cause the model to miss or misidentify objects, even when the images appear unchanged to human observers.
Targeted deception through minimal image alteration
Developed by researchers at North Carolina State University, RisingAttacK is a form of adversarial attack that subtly alters visual input to deceive AI models.
The technique does not require large or obvious image changes; instead, it targets specific features within an image that are essential for recognition.
“This requires some computational power, but allows us to make very small, targeted changes to the key features that make the attack successful,” said Tianfu Wu, associate professor of electrical and computer engineering and co-corresponding author of the study.
Because the engineered changes are so small, the manipulated images look entirely normal to the naked eye.
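The article does not reproduce RisingAttacK's actual optimisation, but the attack belongs to the same broad family as gradient-based adversarial perturbations such as PGD (Madry et al.). The sketch below is illustrative only: the function name, step sizes, and L-infinity budget are assumptions, not the researchers' method.

```python
# Minimal sketch of a gradient-based adversarial perturbation (PGD-style).
# NOT the paper's RisingAttacK algorithm -- an illustrative stand-in showing
# how tiny, bounded pixel changes can suppress a class the model would
# otherwise recognise.
import torch
import torch.nn.functional as F
import torchvision.models as models

def suppress_class(model, image, true_class, eps=2/255, alpha=0.5/255, steps=40):
    """Search, within an imperceptibly small L-inf ball around `image`,
    for a perturbation that stops `model` from predicting `true_class`."""
    x_adv = image.clone().detach()
    label = torch.tensor([true_class])
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss for the true class, then project back into the
        # eps-ball so the change stays invisible to a human viewer.
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = image + torch.clamp(x_adv - image, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)  # keep pixels in valid range
    return x_adv

# Usage sketch: `image` is a (1, 3, 224, 224) tensor in [0, 1]; a real run
# would also apply the model's normalisation transform.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
```

With a budget of 2/255 per pixel, the perturbed image is visually indistinguishable from the original, which is what makes attacks of this kind so hard to spot.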
“The end result is that two images may look identical to human eyes, and we might clearly see a car in both images,” Wu explained.
“But due to RisingAttacK, the AI would see a car in the first image but would not see a car in the second image.”
This can compromise the safety of critical systems like those found in self-driving cars, which rely on vision models to detect traffic signs, pedestrians, and other vehicles.
If AI is manipulated into not seeing a stop sign or another car, the consequences could be severe.
The team tested the method against four widely used vision architectures: ResNet-50, DenseNet-121, ViT-B, and DeiT-B. All four were successfully manipulated.
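For readers who want a feel for what "successfully manipulated" means in practice, a simple harness like the one below could check whether a perturbed image flips the top-1 prediction on each of those architectures. The timm registry names and the comparison loop are my assumptions; the paper's actual evaluation protocol may differ.

```python
# Hedged harness sketch: compare top-1 predictions on a clean image and its
# perturbed counterpart across the four architectures named above.
import timm
import torch

ARCHS = ["resnet50", "densenet121",
         "vit_base_patch16_224", "deit_base_patch16_224"]

@torch.no_grad()
def top1_flips(clean, perturbed):
    """Report the top-1 class before/after perturbation for each model."""
    for name in ARCHS:
        model = timm.create_model(name, pretrained=True).eval()
        before = model(clean).argmax(dim=1).item()
        after = model(perturbed).argmax(dim=1).item()
        print(f"{name}: {before} -> {after} (flipped: {before != after})")
```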
“We can influence the AI’s ability to see any of the top 20 or 30 targets it was trained to identify,” Wu said, citing common examples like cars, bicycles, pedestrians, and stop signs.
While the current focus is on computer vision, the researchers are already looking at broader implications.
“We are now in the process of determining how effective the technique is at attacking other AI systems, such as large language models,” Wu noted.
The long-term aim, he added, is not simply to expose vulnerabilities but to guide the development of more secure systems.
“Moving forward, the goal is to develop techniques that can successfully defend against such attacks.”
As attackers continue to discover new methods to interfere with AI behavior, the need for stronger digital safeguards becomes more urgent.
Via Techxplore