On December 20, 2016, a team from the Executive Office of the President released a report titled Artificial Intelligence, Automation, and the Economy, which assesses the effects that evolving artificial intelligence (AI) and automation will have on the economy and offers policy recommendations. The team included staff from the Council of Economic Advisers, the Domestic Policy Council, the National Economic Council, the Office of Management and Budget, and the Office of Science and Technology Policy. Notably, the report examines not only the impact on jobs but also the opportunities AI creates for stronger cyber defense and for systems that detect fraudulent transactions and messages.
Specifically, as machines continue to reach and exceed human performance on a growing number of tasks, they have the potential to disrupt the labor force; the report examines strategies to increase the benefits and mitigate the costs to the economy.
The report identified three strategies for policy responses:
- Invest in and develop AI for its many benefits.
- Educate and train Americans for jobs of the future.
- Aid workers in the transition and empower workers to ensure broadly shared growth.
The first policy response calls for developing AI-driven cyber defense and fraud detection. Today, designing and operating secure systems requires a great deal of time and effort from experts. If this work could be automated, in part or in whole, stronger and more agile security could be made available across a broader range of systems and applications at far lower cost. AI, especially machine learning systems, could rapidly detect and respond to ever-evolving, increasingly complex cyber threats and support human decision-making.
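The report does not prescribe a specific technique, but one common building block of machine-learning-based threat detection is statistical anomaly scoring: flagging activity that deviates sharply from a learned baseline. The sketch below is purely illustrative (the traffic figures and the three-standard-deviation threshold are assumptions, not from the report):

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observed):
    """Score each observed value by how many standard deviations
    it sits above the baseline mean (a z-score)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x - mu) / sigma for x in observed]

# Hypothetical requests-per-minute counts seen during normal operation...
baseline = [98, 102, 100, 97, 103, 101, 99, 100]
# ...and during a window that includes a burst of suspicious traffic.
observed = [101, 99, 240, 100]

# Flag any observation more than three standard deviations above baseline.
flagged = [x for x, s in zip(observed, anomaly_scores(observed=observed,
                                                      baseline=baseline)) if s > 3]
```

A production system would learn far richer baselines over many signals, but the core idea of scoring deviation from normal behavior is the same.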
Eventually, AI could generate dynamic threat models from data sources that would be difficult for humans to analyze because of their volume, frequency of change, and incompleteness. According to the report, such data includes “the topology and state of network nodes, links, equipment, architecture, protocols, and networks.” AI could then efficiently perform predictive analytics to anticipate cyber attacks before they occur.
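To make the idea of a threat model over network state concrete, here is a minimal sketch that ranks hosts by a risk score computed from a few features. The feature names, weights, and host data are all hypothetical; a real system would learn weights from historical incident data rather than hard-coding them:

```python
# Assumed feature weights for a toy threat model (illustrative only).
WEIGHTS = {"open_ports": 0.2, "days_unpatched": 0.05, "anomalous_connections": 1.5}

def risk_score(node_features):
    """Weighted sum of a node's risk features; higher means riskier."""
    return sum(WEIGHTS[k] * node_features.get(k, 0) for k in WEIGHTS)

# Hypothetical snapshot of two hosts on a network.
nodes = {
    "web-01": {"open_ports": 3, "days_unpatched": 2,  "anomalous_connections": 0},
    "db-02":  {"open_ports": 1, "days_unpatched": 40, "anomalous_connections": 4},
}

# Rank hosts so defenders can prioritize the most at-risk systems.
ranked = sorted(nodes, key=lambda name: risk_score(nodes[name]), reverse=True)
```

Even this toy version shows how machine-readable network state can be turned into a prioritized view that would be tedious for humans to maintain by hand at scale.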
DARPA’s Cyber Grand Challenge is a prime example of how this approach could be implemented to great effect. This competition was created to accelerate AI and automation development to “detect, evaluate, and patch software vulnerabilities before adversaries have a chance to exploit them.” The final event took place on August 4, 2016, and all code produced was released as open source to promote follow-on and parallel research.
In addition, AI can be an important aid to detect fraudulent transactions and messages. It is commonly used to detect fraudulent financial transactions and unauthorized log-in attempts. AI filters email messages to flag spam, attempted cyber attacks, and other unwanted messages. Some common examples include an email junk folder or an online account that locks users out after multiple log-in attempts.
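The lockout example mentioned above can be sketched in a few lines. The three-attempt threshold and function names here are assumptions chosen for illustration, not a description of any particular vendor's system:

```python
from collections import defaultdict

MAX_ATTEMPTS = 3  # assumed lockout threshold for this sketch

failed = defaultdict(int)  # consecutive failed attempts per user
locked = set()             # users currently locked out

def record_login(user, success):
    """Track failed attempts; lock the account after MAX_ATTEMPTS in a row."""
    if user in locked:
        return "locked"
    if success:
        failed[user] = 0
        return "ok"
    failed[user] += 1
    if failed[user] >= MAX_ATTEMPTS:
        locked.add(user)
        return "locked"
    return "failed"

# Three consecutive failures lock the hypothetical account "alice".
for outcome in (False, False, False):
    status = record_login("alice", outcome)
```

Machine-learning systems extend this simple counting logic with richer signals, such as device, location, and timing, to catch unauthorized access that never trips a fixed threshold.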
For years, search engines have used advanced algorithms to find relevant features of documents and detect and demote potentially unwanted or dangerous content to maintain the quality of search results.
As companies update these protective features, cyber attackers constantly develop new methods to evade them. AI could be a powerful tool that enables companies to fight back quickly, methodically, and consistently.