Tech & Sourcing @ Morgan Lewis

The importance of cybersecurity in the autonomous vehicle setting is well known, but nuance and complexity will be on our LiDAR (a sensor that uses pulsed laser light to measure distances) where the rubber meets the road.

The Challenging, Shifting Landscape

Cybersecurity is one of the key issues of the digital age, typically discussed in the context of the security and privacy of confidential or personal data. It is particularly challenging, and particularly important, for technologies such as self-driving cars, where the real world and the digital, connected world meet and where cyber breaches could endanger life and property.

Autonomous vehicles are still in their infancy, and significant uncertainty surrounds this rapidly evolving ecosystem. Standards and regulations are still in a state of flux, and the “rules of the game” are still unclear: how, and for how long, will human drivers/operators continue to be involved (along with their proclivity for risky, unpredictable, and gullible behavior)? At this relatively immature stage, market participants are developing their own divergent solutions that will eventually need to integrate seamlessly, increasing the potential for cyber vulnerabilities. However, the opportunity (for both innovators and society at large) is clear, as smart, interconnected vehicles and systems promise remarkable improvements in efficiency and safety. The race is on.

It would be difficult to overstate the physical and economic risk associated with potential failures of autonomous technology. This risk, and every failure of the technology, brings with it widespread attention and close public and legislative scrutiny (even if human drivers are responsible for many more collisions). Early missteps could lead to costly overreactions in the form of lawsuits, regulatory burdens, and psychological barriers, leaving the development path muddied and incomplete.

Further, a more subtle complication is the industry’s heavy reliance on artificial intelligence. As recently noted, engineers need to “ensure that the algorithms on the back end are secure.” The deep learning that enables, and controls, the magic of these new technologies is susceptible to attack. For example, spoofed road markings (whether introduced physically or virtually) that appear to alter lanes could send self-driving cars in a perilous direction. If vehicles are trained to follow the leader, such attacks on perception and segmentation could cause an alarming pileup.
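
To make the mechanism concrete, the toy sketch below is a purely illustrative Python example (the model, weights, and “lane score” are hypothetical assumptions, not any manufacturer’s actual perception system) of how a targeted nudge to what a simple model “sees” can flip its driving decision:

    # Minimal, hypothetical sketch (not a production perception stack): a toy
    # linear "lane classifier" scores camera-like features, and a targeted
    # nudge to each feature pushes the score across the decision boundary.

    weights = [0.9, -0.4, 0.7, 0.2]      # toy model parameters (illustrative only)
    clean_input = [0.5, 0.6, 0.4, 0.8]   # features "seen" on an unaltered road

    def lane_score(features):
        """Positive score -> stay in lane; negative score -> shift lane."""
        return sum(w * x for w, x in zip(weights, features))

    # The attacker nudges each feature in the direction that lowers the score,
    # the same intuition behind gradient-based adversarial examples.
    epsilon = 0.3
    perturbed = [x - epsilon * (1 if w > 0 else -1) for w, x in zip(weights, clean_input)]

    print(lane_score(clean_input))  # about  0.65 -> stays in lane
    print(lane_score(perturbed))    # about -0.01 -> decision flips

Real perception stacks are vastly more sophisticated, but the underlying vulnerability is the same: small, deliberate changes to inputs can produce outsized changes in behavior.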

Small security flaws and even minor hacks could cause disproportionate harm to life and property. This fundamental problem, in which very high risks arise from low-probability events or from seemingly less important layers of software and technology, is at the heart of the autonomous vehicle challenge.

The Measured, Evolving Course

Under the circumstances described above, traditional cybersecurity practices are only the beginning. Standard security measures (e.g., encryption), industry standards, diligence, and agreements are helpful, but they may be inadequate on their own.

Cybersecurity considerations should not be an afterthought; security measures should be fundamentally integrated into the designs of the relevant systems and algorithms that are otherwise largely focused on efficiency, accuracy, and performance under nonmalevolent circumstances. Cybersecurity risks and protections should be understood and considered at the highest levels, not just by the coders focusing on a specific program.

Though algorithms can be tricked, they can also be trained to detect anomalies. Anomaly detection is already employed in many commercial applications, such as fraud detection. Note, though, that any such heuristic is inherently imperfect and is merely one tool for limiting or mitigating attacks. Another potentially valuable technology is blockchain, which could be used to verify data integrity.
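
As a purely illustrative sketch (the signal, window size, and threshold below are hypothetical assumptions rather than any industry standard), a basic anomaly detector might flag sensor readings that depart sharply from a rolling baseline:

    # Minimal, hypothetical sketch of streaming anomaly detection: flag any
    # reading that falls far outside the mean of the last few readings.
    from statistics import mean, pstdev

    def find_anomalies(readings, window=5, threshold=3.0):
        """Return the indexes of readings far outside the rolling baseline."""
        flagged = []
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mu, sigma = mean(baseline), pstdev(baseline)
            if sigma and abs(readings[i] - mu) > threshold * sigma:
                flagged.append(i)
        return flagged

    # A steady distance-style signal with one spoofed spike at index 6.
    signal = [1.02, 1.01, 1.03, 1.00, 1.02, 1.01, 4.75, 1.02, 1.01]
    print(find_anomalies(signal))  # -> [6]

Screens like this are easy to deploy, but, as noted above, they are imperfect: a patient attacker can craft inputs that stay beneath the detection threshold, which is why anomaly detection should be only one layer of a broader strategy.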

In addition to making cybersecurity a core component of the design process, other considerations include the following:

  • Vertically integrating to foster comprehensive design and to minimize third-party dependence. One important risk associated with smart devices is the need for continuous maintenance and updates. Vertical integration allows for more control over critical systems (including any exposed components).
  • Bolstering diligence and contractual requirements to vet and incentivize partners. This may include (a) obtaining rights to information and cooperation related to incidents or investigations, (b) imposing enhanced security standards that address any special concerns and corresponding measures (in addition to typical requirements), and (c) requiring that all security-related obligations (including relevant maintenance and updates) flow down through the integration chain.
  • Using private networks and otherwise minimizing external exposures. Note that a proprietary system can enhance the effectiveness of anomaly detection.
  • Vigilantly updating standards. This is particularly true during the early, dynamic stages of the industry.
  • Imposing expansive, tailored reporting requirements. To the extent practical, developers will need to understand how third-party algorithms/systems could affect their own algorithms/systems. These enhanced reporting requirements should be coordinated with the enhanced security standards (as the process for establishing each will inform the other).

Over time, though, the appropriate balance will move toward more open, connected systems that can unlock the full potential of these technologies because (a) standards will emerge, (b) comfort levels will rise in line with increased predictability and control, and (c) perhaps most persuasively, competition won’t stop and wait.

Industry participants should anticipate and design for shifting lanes before the entire landscape shifts beneath their wheels.