Security Testing for Self-Driving Cars: 5 Best Practices


Most vehicle computer systems work as black boxes: they can record information about a hack but cannot prevent a hacking attempt. To avoid on-road chaos, self-driving car security systems need to be updated continuously. How can we make self-driving cars secure? In this article, we highlight some of the primary security issues of self-driving cars and best practices for testing them.

Autonomous vehicles aim to reduce traffic congestion and improve safety. These vehicles are equipped with various sensors, such as cameras, LiDAR, and radar, that monitor road conditions to help avoid accidents. With the help of these devices, automated vehicles make driving decisions.

Automated vehicle systems go through three distinct phases every time they make a driving decision:

  • Sensing
  • Understanding
  • Acting

During the sensing phase, an automated vehicle scans its surroundings using sensors. Then, in the understanding phase, it identifies any potential obstacles. Finally, in the acting phase, the vehicle makes a decision – either to act on its own to avoid an accident or to warn the driver. When testing the security of self-driving cars, it is necessary to attack vehicle sensors to see how a car responds when its sensors behave improperly.
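To make this pipeline concrete, here is a minimal sketch of the sense-understand-act loop. Every name and threshold in it is illustrative, not taken from any real vehicle platform:

```python
# Minimal sketch of the sense-understand-act loop.
# All names and thresholds are illustrative, not a real vendor pipeline.

def read_sensors():
    """Sensing: collect raw readings from cameras, LiDAR, and radar."""
    return {"lidar_distance_m": 22.0}

def classify_obstacles(readings):
    """Understanding: turn raw readings into a list of potential obstacles."""
    obstacles = []
    if readings["lidar_distance_m"] < 50.0:
        obstacles.append({"type": "unknown", "distance_m": readings["lidar_distance_m"]})
    return obstacles

def decide(obstacles):
    """Acting: brake autonomously, warn the driver, or continue."""
    for obstacle in obstacles:
        if obstacle["distance_m"] < 10.0:
            return "emergency_brake"
        if obstacle["distance_m"] < 30.0:
            return "warn_driver"
    return "continue"

print(decide(classify_obstacles(read_sensors())))  # -> warn_driver
```

Attacking the sensors means corrupting the input to the first step, so every decision downstream inherits the corruption.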

What is LiDAR?

Light Detection and Ranging (LiDAR) is a technology that recognizes objects on the road and helps a vehicle’s software adjust driving behavior. In other words, if LiDAR detects an obstacle, it might trigger emergency braking.

Jonathan Petit, an expert at the US company Security Innovation, has published findings from a range of experiments that aimed to test self-driving car security and discover security holes in traditional vehicle hardware. One of the tests Petit conducted was an organized attack on LiDAR systems; he also attacked vehicle cameras. Unlike LiDAR, which actively emits laser pulses, camera-based systems require good lighting conditions and clear weather. At night and in fog, cameras can become worthless for detecting obstacles on the road.

How LiDAR works

LiDAR detects objects by emitting laser pulses. When a pulse reaches an obstacle, it is reflected and returns to the receiver, and the sensor computes the obstacle’s distance from the pulse’s round-trip time. If a pulse does not return, there are no obstacles in its path and it is safe to move forward. By relaying and spoofing LiDAR signals, it is possible to deceive a LiDAR system and thus confuse a vehicle’s computer system.
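The ranging step behind this is a simple time-of-flight calculation: the sensor converts a pulse’s round-trip time into a distance. A minimal sketch (the function name is ours):

```python
# Time-of-flight ranging: distance from a pulse's round-trip time.
SPEED_OF_LIGHT_M_S = 299_792_458

def distance_from_echo(round_trip_s: float) -> float:
    """The pulse travels to the obstacle and back, so halve the path."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2

# An echo arriving after ~133 nanoseconds puts the obstacle at ~20 m.
print(round(distance_from_echo(133e-9), 1))  # ~19.9
```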

Practice #1. Signal relaying

When a signal identical to the one transmitted by LiDAR is sent from an unexpected position, the resulting fake echoes make real objects appear closer or farther than they are. Such an attack requires only two transceivers, both cheap and commercially available: a photodetector that costs only $0.65 and generates an output voltage matching the intensity of the pulses LiDAR sends, and a laser transceiver that emits a pulse in return and costs only $43.25.

To test an attack against LiDAR, Petit placed a real wall one meter in front of a LiDAR system. Two transceivers created echoes that made the LiDAR think the wall was 20–50 meters away, causing incorrect planning on the part of the vehicle computer system. LiDAR does not encode its emitted pulses, meaning pulses can be copied and relayed to generate fake echoes. These echoes make the sensor “see” objects closer or farther than their actual position.
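To see why relaying works, note that any extra delay an attacker adds to a captured pulse translates directly into extra apparent distance. A small sketch of the effect, using the time-of-flight math above:

```python
SPEED_OF_LIGHT_M_S = 299_792_458

def apparent_distance(true_distance_m: float, injected_delay_s: float) -> float:
    """A relayed echo adds extra delay, so the obstacle appears farther away."""
    true_round_trip_s = 2 * true_distance_m / SPEED_OF_LIGHT_M_S
    return SPEED_OF_LIGHT_M_S * (true_round_trip_s + injected_delay_s) / 2

# A wall 1 m away, with ~327 ns of injected delay, reads as roughly 50 m.
print(round(apparent_distance(1.0, 327e-9)))  # ~50
```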

Practice #2. Signal spoofing

This experiment shows how it is possible to make LiDAR see objects that do not exist. To create a fake signal, the LiDAR pulse has to be caught before it bounces back to the sensor. If it is not caught in time, the fake signal will arrive too late to be accepted and the attack will fail. The sooner a signal arrives back at the sensor, the closer the object appears to the system.

To spoof a LiDAR signal, it is necessary to have one transceiver and two pulse generators, with the output of the first generator connected to the input of the second. These generators create multiple copies of the original LiDAR signal and send the copies back to the receiver, simulating fake obstacles at different distances.
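In software terms, each fake obstacle corresponds to a replay delay: the spoofer re-emits the captured pulse after the round-trip time of the distance it wants to fake. A sketch of the arithmetic (not Petit’s actual test harness):

```python
SPEED_OF_LIGHT_M_S = 299_792_458

def fake_echo_delays(fake_distances_m):
    """Round-trip delays needed to fake obstacles at the given distances."""
    return [2 * d / SPEED_OF_LIGHT_M_S for d in fake_distances_m]

# Faking obstacles at 5 m, 15 m, and 40 m means replaying the pulse
# roughly 33 ns, 100 ns, and 267 ns after capturing it.
for d, t in zip([5, 15, 40], fake_echo_delays([5, 15, 40])):
    print(f"{d} m -> {t * 1e9:.0f} ns")
```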

LiDAR systems can classify obstacles and recognize up to 65 different objects. This type of attack can be successful even when tracking is enabled. In some cases, LiDAR can recognize fake obstacles as “unknown big” or even as a “car.”

Countermeasures

The following methods can prevent automated vehicle sensors from being fooled:

  • Redundancy
  • Random probing
  • Skipping a signal
  • Repeated probing
  • Shortening the pulse period

Random probing, repeated probing, skipping a signal, and shortening the pulse period are countermeasures that should be implemented in software to detect such attacks and improve the security of self-driving cars.

Redundancy

Using different LiDAR wavelengths that do not overlap will reduce an attacker’s chance of success. It is harder and more expensive to attack multiple signals at the same time.
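One way to exploit this redundancy in software is to cross-check the channels and treat disagreement as suspicious. A minimal sketch, assuming two independent wavelength channels that each report a distance (the tolerance is our assumption):

```python
def cross_check(distance_ch1_m: float, distance_ch2_m: float,
                tolerance_m: float = 0.5) -> bool:
    """Two non-overlapping wavelength channels should agree on a real obstacle.

    An attacker spoofing only one wavelength produces a mismatch.
    """
    return abs(distance_ch1_m - distance_ch2_m) <= tolerance_m

print(cross_check(19.8, 20.1))  # True: consistent, likely a real obstacle
print(cross_check(19.8, 49.5))  # False: channels disagree, possible attack
```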

Random probing

When the sending and receiving interval is consistent, it is easy for a hacker to synchronize with it. Varying this period by a random amount reduces the chances of a successful attack, as it becomes hard to predict when the next original signal will be sent.
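A sketch of what randomized probing might look like in software; the timing constants are illustrative, not taken from any real sensor:

```python
import random

BASE_PERIOD_S = 1e-6        # nominal pulse period (illustrative value)
JITTER_S = 0.5e-6           # random extra delay added to each period
MAX_ROUND_TRIP_S = 0.67e-6  # round-trip time of the farthest valid echo

def next_pulse_time(last_pulse_s: float) -> float:
    """Fire the next pulse at an offset the attacker cannot predict."""
    return last_pulse_s + BASE_PERIOD_S + random.uniform(0, JITTER_S)

def echo_is_plausible(pulse_s: float, echo_s: float) -> bool:
    """Accept only echoes that can belong to a pulse we actually fired."""
    return 0 < echo_s - pulse_s <= MAX_ROUND_TRIP_S

t = next_pulse_time(0.0)
print(echo_is_plausible(t, t + 0.4e-6))  # True: echo within the range window
```

An attacker who pre-computes echo timing against a fixed schedule will now inject pulses that fall outside the window of any pulse the sensor actually fired.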

Skipping a signal

Modifying the software that controls laser emission so that LiDAR skips some pulses can help you detect a hacking attack. If the LiDAR system skips a pulse but an echo for that pulse comes back anyway, an attack is in progress.
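In software, this reduces to a simple consistency check: track which slots were deliberately skipped and flag any echo that maps to a skipped slot. A minimal sketch:

```python
import random

def emission_schedule(num_slots: int, skip_probability: float = 0.1):
    """Decide per slot whether to actually fire the laser."""
    return [random.random() > skip_probability for _ in range(num_slots)]

def detect_attack(fired: list[bool], echo_received: list[bool]) -> bool:
    """An echo in a slot where no pulse was fired can only be injected."""
    return any(echo and not pulse for pulse, echo in zip(fired, echo_received))

fired = [True, True, False, True]    # third pulse deliberately skipped
echoes = [True, True, True, True]    # ...yet an echo came back anyway
print(detect_attack(fired, echoes))  # True: attack in progress
```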

Repeated probing

If a counterfeit signal is not synchronized with the original, then the same obstacle will be detected by LiDAR at different distances. By modifying automotive software, it is possible to make the sensor recognize this measurement as invalid.
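A sketch of that validity check: probe the same direction several times and discard the measurement if the readings scatter more than a real obstacle could explain. The tolerance here is an assumption:

```python
def measurement_is_valid(distances_m: list[float],
                         max_spread_m: float = 1.0) -> bool:
    """Repeated probes of the same obstacle should roughly agree.

    An unsynchronized spoofer produces echoes at drifting delays,
    so the same "obstacle" shows up at wildly different distances.
    """
    return max(distances_m) - min(distances_m) <= max_spread_m

print(measurement_is_valid([20.1, 19.9, 20.3]))  # True: consistent
print(measurement_is_valid([20.1, 34.7, 8.2]))   # False: likely spoofed
```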

Shortening the pulse period

Decreasing the range of the LiDAR to 100 meters will halve the pulse period, giving a potential hacker less time to attack.
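The arithmetic behind this claim: the pulse period must be at least the round-trip time of the farthest detectable obstacle, so halving the maximum range (assumed here to be 200 meters originally) halves the period:

```python
SPEED_OF_LIGHT_M_S = 299_792_458

def min_pulse_period_s(max_range_m: float) -> float:
    """Shortest period that still lets the farthest echo return."""
    return 2 * max_range_m / SPEED_OF_LIGHT_M_S

print(f"{min_pulse_period_s(200) * 1e9:.0f} ns")  # ~1334 ns at 200 m
print(f"{min_pulse_period_s(100) * 1e9:.0f} ns")  # ~667 ns at 100 m
```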

Deceiving the camera

As with LiDAR, cameras can also be deceived. Therefore, another part of security testing for a self-driving car is deceiving the cameras. Automated vehicles use cameras to detect objects and traffic signs.

There are various ways to attack vehicle cameras:

  • Placing fake signs at different positions
  • Placing objects of different colors and shapes near real signs
  • Painting additional road lines
  • Using different colors for road lines
  • Spoofing automatic exposure controls
  • Tricking camera auto-focus
  • Decreasing light sensitivity

Generally, cameras can normalize and balance lighting conditions through iterative processing. However, direct light can decrease a camera’s exposure and sensitivity, effectively blinding it. Direct light lowers image quality and obscures road objects: pedestrians, vehicles, lines, and signs.

Practice #3. Camera blinding

Digital cameras work much like the human eye: the aperture adjusts just as the pupil dilates to account for available light. In this camera blinding experiment, a constant laser beam was used.

The success of a blinding attack depends on three variables:

  • Ambient light
  • The artificial light source
  • The distance between the camera and the artificial light source

The farther the distance, the more powerful the light source has to be. The direct infrared light of the test laser managed to blind a vehicle camera such that it could not recognize a chessboard in front of it. This experiment was conducted in a dark environment.

Practice #4. Confusing automatic control systems

Camera sensors have automatic gain and exposure controls. In the blinding test, however, the sudden flash of light directed onto the sensor left the camera blinded for more than five seconds before it recovered.
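The slow recovery follows from how a typical auto-exposure loop works: exposure is nudged toward a target brightness a little each frame, so a saturating flash takes many frames to correct. A toy model, not any camera vendor’s actual control law:

```python
def auto_exposure_step(exposure: float, frame_brightness: float,
                       target: float = 0.5, gain: float = 0.1) -> float:
    """Nudge exposure toward the target brightness a little each frame."""
    error = target - frame_brightness
    return max(0.01, exposure * (1 + gain * error))

exposure = 1.0
for frame in range(10):
    brightness = 1.0  # the laser keeps the sensor fully saturated
    exposure = auto_exposure_step(exposure, brightness)
    print(f"frame {frame}: exposure {exposure:.3f}")
# Exposure shrinks only ~5% per frame, so recovery takes many frames.
```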

Countermeasures

The most effective measure against blinding is modifying the camera itself, though this increases the camera’s cost and physical dimensions.

Redundancy

Multiple cameras make the system much harder to blind, as a potential attacker has to direct light sources at all cameras simultaneously.

Materials and optics

Implementing a removable filter that can cut near-infrared light can protect a camera from laser blinding. Such filters are generally built into security cameras. Integrating photochromic lenses, which filter out specific light types, can protect a camera from being blinded by sudden flashes. These filters do not affect the quality of the image in low light conditions.

Mass production car hacking

Special devices designed to access car computer systems have been available on Amazon for a long time. For example, the ieGeek WiFi Wireless OBD2 Auto Scanner can read car service codes via the OBD-II computer diagnostics system, making these codes accessible to anyone. Security specialists assume that these types of devices are only the beginning, and hacking methods will surely advance, as the following examples prove. Startups, institutes, and established companies alike are currently assessing the security of self-driving cars and working on solutions to recognize and resist hacking attacks.

Practice #5. Hacking mass production cars

In 2015, the Institution of Engineering and Technology (IET) published a report showing that 98% of respondents’ open-source software had as many as 10–15 serious defects. While automobile producers seem to be paying attention to this issue, it is not their number one priority.

Tesla Model S

Chinese specialists from Tencent Keen Security Lab have discovered multiple security vulnerabilities in the Tesla Model S. They’ve tapped into the car’s computer system and established remote connections in both Driving and Parking modes. Their attack worked through the car’s web browser when connected to the internet via a malicious WiFi hotspot. Using this exploit, the engineers gained control over the car’s doors, dashboard screen, windshield wipers, braking system (while driving), and more. Tesla removed these vulnerabilities within ten days after their discovery by updating the Model S firmware.

Toyota Fielder

In 2015, Hiroyuki Inoue, a professor at Hiroshima University, built a small custom device for only $80 that let him control his family’s station wagon. Inoue plugged the device into the car’s diagnostic port and linked it to a smartphone so he could send commands over the internet. In this way, Inoue could control the car’s doors, the dashboard screen, and even the speedometer. He then launched a simple denial-of-service (DoS) attack, flooding the car’s network with messages and causing a total system freeze. The car did not even respond when the accelerator was pressed.

Countermeasures

Implementing a continuous improvement process for automotive software can correct errant behavior and make a car’s computer system capable of reacting to threats in real time. Providing a mechanism by which car computers can distinguish hacking attacks from ordinary operating conditions will increase the security of self-driving cars.
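One concrete form such a mechanism could take is rate-based anomaly detection on the in-vehicle network: a message flood like the one used in the Toyota experiment looks nothing like normal traffic. A sketch with illustrative thresholds:

```python
from collections import deque

class CanFloodDetector:
    """Flags message bursts far above the bus's normal rate (illustrative)."""

    def __init__(self, window_s: float = 1.0, max_messages: int = 2000):
        self.window_s = window_s
        self.max_messages = max_messages
        self.timestamps = deque()

    def on_message(self, now_s: float) -> bool:
        """Record a message; return True if the rate looks like a flood."""
        self.timestamps.append(now_s)
        while self.timestamps and now_s - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_messages

detector = CanFloodDetector()
print(detector.on_message(0.001))  # False: one message is normal traffic
```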

Wrapping up

The IET has stated that hacking attacks have become one of the most pressing issues for both self-driving and driverless cars. Automotive companies spend tremendous sums of money on the development of partially and fully autonomous vehicles, and in light of this investment, computer systems must become more reliable and secure.

Until car computer systems can resist any kind of hacking attack, self-driving cars cannot be called safe. Cprime has successfully cooperated with Renault and Volvo to build technical solutions for car security. Cprime is a proven international outsourcing expert in the fields of virtual security and business software development.

Maxwell Travers