On October 11, the Bank of England (BoE), the Prudential Regulation Authority (PRA), and the UK Financial Conduct Authority (FCA) (together, the Supervisory Authorities) published a discussion paper (DP5/22) on the safe and responsible adoption of artificial intelligence (AI) in financial services (Discussion Paper). The Discussion Paper forms part of the Supervisory Authorities’ AI-related program of work, including the AI Public-Private Forum, and is being considered in light of the UK government’s efforts toward regulating AI.
TECHNOLOGY, OUTSOURCING, AND COMMERCIAL TRANSACTIONS
NEWS FOR LAWYERS AND SOURCING PROFESSIONALS
Despite general awareness of phishing (we have written about phishing in a prior post), it remains one of the most common vectors for cyberattacks. It should be no surprise that cybercriminals are constantly devising more elaborate and sophisticated ways to gain access to sensitive systems and data. A recent CIO.com article lists three measures designed to deter phishing and related attacks, which we have summarized below.
The White House Office of Science and Technology Policy recently published The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (the Blueprint), a set of five principles intended to guide designers, developers, and deployers of AI in the design, use, and deployment of automated systems, with the goal of protecting the public’s rights.
On July 18, 2022, the UK government published high-level proposals for its approach to regulating uses of artificial intelligence (AI), as part of its National AI Strategy and, more broadly, its UK Digital Strategy. The government is seeking public views on the approach, which is contained in a policy paper; a more detailed White Paper will be published in late 2022.
In June 2022, the UK government published its cross-government UK Digital Strategy for creating a world-leading environment in which to grow digital businesses. The Digital Strategy brings together various initiatives on digitalization and data-driven technologies, including the National AI Strategy. The government states that it is actively seeking to grow expertise in deep technologies of the future, such as artificial intelligence, next generation semiconductors, digital twins, autonomous systems, and quantum computing.
In October 2021, it was announced that Facebook would formally change its name to Meta as part of an ambitious new initiative called the “metaverse”—a convergence of physical, augmented, and virtual reality in a shared online space. Shortly after this announcement, we wrote a blog post, A Brief Overview of the Metaverse and the Legal Challenges It Will Present. Since then, metaverse trends have experienced phenomenal growth, with the emergence of new immersive virtual reality and collaborative spaces for human interactions, transactions, and data exchanges on decentralized networks.
The Stanford Institute for Human-Centered Artificial Intelligence recently published its AI Index Report 2022. In a world of near-constant advancement and innovation in technology, it is no surprise the report found that more global artificial intelligence (AI) legislation was proposed in 2021 than ever before.
The Bank of England (Bank) and the UK Financial Conduct Authority (FCA) published their final report of discussions from the UK Artificial Intelligence Public-Private Forum on February 17. Over quarterly meetings and several workshops conducted since October 2020, the Bank and the FCA jointly facilitated dialogue between the public sector, the private sector, and academia in order to deepen their collective understanding of artificial intelligence (AI) and explore how to support the safe adoption of AI. This initiative was incorporated into the UK National AI Strategy.
There is broad awareness of cyberattacks in the form of phishing, which typically use email messages to lure victims into divulging sensitive information or opening a link that allows malware to infiltrate their device. Companies have learned to combat phishing by training employees to recognize such scam attempts and report them, protecting their organizations. “Vishing” is another tactic used by scammers that, while less familiar, is no less invasive and dangerous.
Companies are transforming legacy systems, implementing automation and artificial intelligence tools, embedding digital capabilities into their products, shifting to cloud solutions, and leveraging technology to better connect with their customers, personnel, and third parties, all at an unprecedented pace. The pressure on businesses to get to market faster, reach a broader audience, and provide real-time interaction has in turn put pressure on legal and sourcing documents to keep up. The complexity and sheer volume of projects (and contracts) can be daunting — especially for companies that have not yet elevated the importance of the technology law function within their organizations.