Using AI to Improve Safety: Managing the Legal Risks Alongside the Benefits
February 05, 2026

Artificial intelligence (AI) is becoming a powerful tool in workplace safety programs—but its use also raises complex legal and governance questions. This Insight examines how employers can integrate AI into safety decision-making while preserving human judgment, meeting regulatory obligations, and managing enforcement and litigation risk.
Both general industry and construction companies are increasingly adopting AI to enhance safety performance. From analyzing near-miss reports and job hazards to generating site-specific safety briefings and forecasting elevated risk conditions, AI tools promise to help companies identify hazards earlier and allocate resources more effectively.
At the same time, AI introduces new and often underappreciated legal risks. Because safety sits at the intersection of regulatory compliance, tort liability, workforce management, and technology governance, the use of AI in safety can complicate traditional risk profiles if a company fails to manage its use carefully.
The question is no longer whether AI can support safety efforts, but rather how to deploy it responsibly and defensibly.
AI IS NOT A SAFETY MANAGER
Clarifying Roles and Responsibilities in Policies and Procedures
As set forth in the federal Occupational Safety and Health Act of 1970 (OSH Act), a foundational legal principle is that employers bear responsibility to protect employees from recognized hazards. That duty cannot be delegated to technology vendors or displaced by automated tools. AI systems increasingly used in safety programs—such as platforms that flag high-risk activities, suggest mitigation measures, or auto-generate safety documentation—are best understood as decision-support tools, not decision-makers.
In practice, risk emerges when organizations treat AI outputs as directives rather than data to analyze. For example, some vision-based safety systems generate hundreds of alerts for personal protective equipment (PPE) non-compliance or proximity to hazards during a single shift. If supervisors rely solely on the system’s alerts without exercising judgment about the severity, context, or feasibility of corrective action, critical risks may be obscured by volume. Conversely, if a system fails to flag a hazard—such as a non-routine task or an unusual site condition (e.g., use of a crane at a warehouse)—an employer cannot credibly argue that the absence of an alert absolved it of responsibility.
Similar issues arise with predictive tools that rank tasks or job sites by risk. An AI model may downgrade a task because it has historically been completed without incident, even though changed conditions, workforce experience, or weather introduce new hazards. Treating that risk score as determinative—rather than as one data point—can lead to missed interventions and, in litigation, difficult questions about why human judgment did not override an algorithmic output.
From a legal and risk-management perspective, companies deploying AI in safety programs should be able to demonstrate that
- human supervisors and safety professionals retain ultimate authority to stop work, modify procedures, or escalate concerns, regardless of what an AI system indicates;
- AI outputs are reviewed, validated, and contextualized, rather than automatically accepted or ignored; and
- safety decisions are documented as human decisions, informed by data and AI but not controlled by them.
Clear governance around AI’s role is essential not only for day-to-day operations but also for how safety decisions will be scrutinized by regulators, plaintiffs’ counsel, and insurers after an incident. In enforcement and litigation contexts, the question will not be whether AI was used, but how it was used—and whether human responsibility remained where the law places it.
Organizations should operationalize AI governance by translating legal obligations into clear, defensible AI safety policies. This includes defining the scope and limits of AI tools, documenting that AI outputs are advisory rather than determinative, and establishing escalation, override, and documentation protocols that preserve human authority. AI governance should also be aligned with existing safety management systems, incident response procedures, and recordkeeping practices, ensuring consistency across policies and practices. When thoughtfully crafted, these policies guide internal decision-making and create a contemporaneous record demonstrating to regulators, courts, and insurers that the company exercised informed, human judgment—supported by technology, not replaced by it.
KEY LEGAL RISKS IN AI-ENABLED SAFETY PROGRAMS
Duty of Care and Foreseeability Risk
As a general matter, companies owe a duty of care to workers, subcontractors, site visitors, and, in some circumstances, the public. This duty typically requires companies to exercise reasonable care under the circumstances to prevent foreseeable harm. Failure to identify or address foreseeable risks can expose a company to liability, including tort claims.
As noted above, AI is increasingly capable of identifying, predicting, and reporting hazards that might otherwise go unnoticed, enabling immediate interventions that prevent accidents before they occur. For example, many vendors market tools that can detect unsafe conditions on construction sites and provide real-time alerts to safety personnel. But without the proper precautions, the detection of hazards by AI safety systems may also increase a company’s liability after an injury occurs.
While AI-enabled safety tools offer unprecedented visibility into workplace hazards, more data does not automatically translate into safer outcomes. In practice, organizations can face a phenomenon often described as “death by data”—where AI systems identify so many hazards, near misses, and risk signals that safety professionals are unable to meaningfully prioritize or respond.
The legal exposure associated with identifying hazards but failing to act extends beyond regulatory enforcement and can materially increase civil liability. For example, if a predictive safety tool repeatedly identifies a specific work zone as high-risk for falls—based on historical incident data, near-miss reporting, or vision-based observations—and the company does not implement additional engineering, administrative, or supervisory controls, a subcontractor or injured worker may argue that the company knowingly tolerated a recognized hazard. In jurisdictions that permit punitive or exemplary damages, plaintiffs may contend that the employer’s access to repeated AI-generated warnings demonstrates conscious disregard for safety, supporting claims for punitive damages that can reach into the tens of millions of dollars.
Similar arguments may arise where AI systems flag recurring struck-by hazards involving mobile equipment, crane swing zones, or material staging areas, yet work continues without meaningful intervention. Plaintiffs may attempt to use system logs, dashboards, or internal communications to establish not only notice of the hazard but also an internal acknowledgment of elevated risk that was not addressed. In that context, AI outputs may be framed not as aspirational safety data, but as evidence of foreseeability, knowledge, and deliberate inaction.
These risks underscore that AI-generated safety insights can meaningfully shift the litigation narrative. When hazard identification is automated and repeatable, failure to respond may be portrayed as more than negligence—it may be characterized as a business decision to accept known risk. Without documented governance, prioritization, and response protocols, AI systems intended to enhance safety may instead amplify exposure in catastrophic loss scenarios.
In addition to risks associated with failing to act on identified hazards, inconsistent or selective deployment of AI safety tools can further compound exposure. For example, where AI-driven monitoring or predictive analytics are deployed on flagship projects but not on smaller, remote, or lower-profile sites, plaintiffs may argue that the employer maintained uneven safety practices, undermining assertions of a uniform and consistently enforced safety program. In the wake of a serious incident, such disparities may be framed as evidence that enhanced protections were knowingly withheld from certain workers or locations.
Similarly, decisions to disable, down-tune, or ignore AI-generated alerts to reduce “noise” can take on heightened significance after an incident. Absent contemporaneous documentation showing a reasoned, risk-based prioritization process, plaintiffs may attempt to characterize these decisions as willful blindness rather than prudent safety management. In that context, system configuration choices—often made for operational efficiency—may be recast as conscious decisions to disregard known risks.
Taken together, these scenarios underscore that AI-generated safety data is a double-edged sword. While such tools can significantly enhance hazard recognition and situational awareness, they also create heightened expectations—internally and externally—that identified risks will be evaluated, prioritized, and addressed. Without clear governance, documentation, and follow-through, AI systems intended to improve safety may instead be used to support claims that an employer recognized and accepted dangerous conditions, amplifying liability in both regulatory and civil proceedings.
Companies should consider engaging counsel who can assist with designing and implementing AI safety governance frameworks that
- define when and how AI-generated insights must be escalated, including thresholds for supervisory review, work stoppage, and executive notification;
- establish clear lines of authority so that human supervisors and safety professionals retain ultimate decision-making responsibility;
- ensure consistency in how safety risks are evaluated and addressed across projects, facilities, and business units, reducing exposure to claims of uneven safety practices;
- document risk-based prioritization decisions, including when AI alerts are triaged, deferred, or overridden, and the rationale for those decisions;
- integrate AI tools into existing safety management systems, rather than allowing parallel or informal processes to develop;
- align AI safety governance with Occupational Safety and Health Administration (OSHA), environmental, and state-law obligations, including multi-employer worksite considerations;
- prepare defensible records that demonstrate hazards were assessed and addressed through human judgment informed by technology; and
- anticipate post-incident scrutiny, ensuring policies and documentation will withstand review by regulators, plaintiffs’ counsel, and insurers.
Regulatory and Compliance Risk
In addition to the common law duty described above, the OSH Act and its attendant standards and regulations require employers to ensure their employees’ safety. These obligations are non-delegable and remain with the employer regardless of the tools used to support compliance. While AI offers no regulatory safe harbor, it can assist companies in achieving better safety outcomes and improving compliance efforts by streamlining processes and surfacing risk information—provided that its outputs are reviewed and validated by trained personnel.
AI tools from multiple vendors can accelerate the creation of safety documents such as job hazard analyses, toolbox talks, and corrective action plans. These platforms allow organizations to quickly generate and update safety materials, improving timeliness and ensuring that employees have access to current information and guidance.
However, speed and automation do not guarantee regulatory sufficiency, and automated outputs frequently lag behind legal and regulatory developments. In fact, OSHA and state safety regulators continue to assess compliance based on substantive requirements, not technological sophistication. AI-generated materials that are overly generic, incomplete, or misaligned with jurisdiction-specific standards may therefore increase enforcement risk.
For example, AI-generated job hazard analyses may fail to account for recent regulatory changes or emerging enforcement priorities, such as updated OSHA emphasis programs, revised consensus standards incorporated by reference, or new state-plan requirements that differ from federal OSHA. Similarly, automated toolbox talks may omit obligations triggered by new state heat illness rules, expanded silica or lead standards, or evolving multi-employer worksite doctrines, particularly where those requirements vary by jurisdiction or are subject to active enforcement interpretation.
Automated tools may also struggle to capture site-specific legal nuances, such as whether a task constitutes “construction” versus “general industry” work, whether a confined space qualifies as permit-required under current OSHA interpretations, or how overlapping federal and state standards apply to non-routine operations. In these situations, AI may generate content that appears facially compliant but fails to reflect how OSHA or a state-plan agency would analyze the work in practice. Moreover, if an AI tool is designed or configured without appropriate legal and regulatory oversight at the outset, it may generate procedures and forms that are deficient from inception, even before accounting for subsequent legal or regulatory developments.
These risks are compounded when AI-generated materials are relied upon without meaningful human review. A false sense of compliance—where documents exist but are not legally sufficient—can undermine safety efforts and expose companies to citations, enhanced penalties, or adverse findings during inspections or post-incident investigations.
Key risk factors include
- reliance on generalized industry language that fails to reflect specific operations, equipment, or hazards;
- omission or misapplication of jurisdiction-specific standards, including recently enacted or amended requirements; and
- reduced human validation due to the perception that automated outputs are inherently compliant.
Companies should consider engaging counsel who can help them manage these risks by
- reviewing AI-assisted safety documentation for regulatory and jurisdictional compliance gaps, including alignment with current OSHA enforcement positions and state-plan requirements;
- helping integrate legal review checkpoints into AI-enabled workflows, ensuring that automated materials are vetted before implementation or presentation to regulators; and
- advising on how AI-generated materials should be contextualized and presented during inspections, investigations, or litigation, including how to explain the role of AI as a support tool rather than a substitute for professional judgment.
Litigation and Discovery Risk
The integration of AI into workplace safety practices significantly expands the universe of potentially discoverable materials in litigation, increasing both the volume and complexity of safety-related data and subjecting employers to heightened scrutiny following an incident. In addition to traditional safety records, discovery may now encompass AI prompts entered by safety personnel, AI-generated risk forecasts, near-miss trend analyses, heat maps identifying “high-risk” work zones, internal dashboards, and automated safety summaries—many of which may not have existed in earlier safety programs.
In practice, plaintiffs and regulators increasingly seek access to system-generated data showing what risks were identified, when they were identified, and how the company responded. For example, plaintiffs may request historical risk scores for a worksite where an injury occurred, logs showing repeated AI alerts for the same hazard, or internal reports summarizing elevated risk conditions in the days or weeks preceding an incident. Even where no corrective action was taken because the risk was assessed as low priority, the existence of these records can become a focal point of discovery and expert analysis.
Discovery disputes frequently turn on questions such as
- whether AI prompts, outputs, and underlying data were retained, overwritten, or deleted as part of routine system operation;
- how changes to AI models, alert thresholds, or input parameters were documented, approved, and communicated; and
- whether AI-generated analyses were treated as operational safety tools or as part of legal, compliance, or risk-management assessments.
Absent clear protocols, companies may face allegations of spoliation, inconsistent recordkeeping, or selective preservation of safety data. For instance, if certain AI outputs are routinely deleted or overwritten while others are retained, plaintiffs may argue that unfavorable data was lost or destroyed. Similarly, if AI-generated risk assessments are shared informally without clear classification, companies may struggle to defend privilege claims or explain inconsistencies in what was preserved and produced.
These risks are compounded in the immediate aftermath of a serious incident, when automated systems may continue to generate or overwrite data unless deliberate steps are taken to preserve relevant materials. Without predefined incident-response protocols, well-intentioned operational actions—such as system resets, configuration changes, or data clean-up—can later be recast as evidence of improper handling of critical safety information.
Companies can manage these discovery and litigation risks by working with legal, safety, and IT teams to
- develop defensible data retention and deletion policies tailored to AI safety systems, balancing operational needs with litigation and regulatory exposure;
- establish clear boundaries between operational safety data and privileged legal analyses, reducing the risk that AI outputs are inadvertently treated as discoverable legal conclusions;
- design incident-response protocols that address AI-related materials, including immediate preservation steps, internal communication controls, and coordination with counsel following a serious event; and
- prepare to explain AI systems and data practices to regulators, courts, and opposing counsel in a clear and credible manner.
By addressing these issues proactively—before an incident occurs—companies can reduce the risk that AI-enabled safety tools intended to improve performance become sources of avoidable litigation exposure.
Workforce, Privacy, and Labor Risk
Many AI-enabled safety applications rely on workforce data, including productivity indicators, fatigue metrics, location data, video analytics, and behavioral trends. While these data streams can meaningfully enhance hazard identification, incident prevention, and safety planning, their collection and use raise significant employment, privacy, labor, and data-governance concerns, particularly in construction and other high-hazard industries with mobile, multi-jurisdictional workforces.
In practice, AI safety tools may analyze video feeds to detect unsafe behaviors, track worker movement or proximity to hazards, or infer fatigue based on productivity patterns or shift duration. These capabilities can improve situational awareness, but they also blur the line between safety monitoring and workforce surveillance. Following an incident—or during a labor dispute—questions may arise regarding how data was collected, who had access to it, and whether it was used consistently and lawfully.
Potential legal issues include compliance with state privacy and biometric laws, such as statutes regulating the collection of facial recognition data, video analytics, or other biometric identifiers. In some jurisdictions, the mere capture or analysis of certain data types triggers specific notice, consent, retention, and deletion requirements. AI tools that are deployed uniformly across projects may inadvertently violate state-specific laws if they are not configured to account for differing legal standards.
Notice and consent obligations present additional risk. Employees and subcontractor personnel may challenge AI safety monitoring programs if disclosures are unclear, incomplete, or inconsistent with actual data practices. Inconsistent messaging—such as describing AI tools as safety-only while using the data for performance evaluation or discipline—can further increase exposure under employment and unfair labor practice theories.
Labor and union considerations also play a significant role. Unionized workforces may assert that AI-based monitoring and analytics constitute changes in working conditions that require bargaining. Even in non-union environments, workforce monitoring may raise concerns under concerted activity protections, particularly if employees believe safety data is being used to track productivity or target individuals rather than improve conditions.
These risks are magnified for national and multinational companies operating across multiple jurisdictions. Cross-border data transfers, centralized analytics platforms, and inconsistent local implementation can complicate compliance with state, federal, and international data protection regimes. Without coordinated governance, companies may struggle to explain how workforce data is lawfully collected, processed, and retained across projects.
Companies can balance safety innovation with workforce protections by working with counsel across OSHA, employment, labor, and privacy disciplines to
- design compliant workforce data-use policies that clearly define what data is collected, how it is used, and what safeguards apply;
- assess notice and consent requirements across jurisdictions and align disclosures with actual AI system functionality;
- address labor and union implications, including bargaining obligations and workforce communications related to AI monitoring;
- evaluate cross-jurisdictional data handling practices for national and multinational contractors; and
- prepare defensible explanations of AI safety programs that emphasize hazard prevention while respecting employee rights.
By addressing workforce, privacy, and labor considerations early in the design and deployment of AI safety tools, companies can reduce legal risk while maintaining employee trust and preserving the integrity of their safety programs.
HOW WE CAN HELP
Our OSHA practice is uniquely positioned to help companies integrate AI into their safety programs while managing the regulatory, litigation, and governance risks that accompany its use. Our lawyers can work with clients to
- translate AI-enabled safety innovation into legally defensible, human-governed practices;
- align AI deployment with OSH Act obligations, state-plan requirements, and evolving enforcement priorities;
- design governance frameworks that preserve human judgment, escalation authority, and accountability; and
- prepare companies for how AI-generated safety data will be scrutinized by OSHA, plaintiffs’ counsel, and insurers after an incident.
Importantly, legal guidance is most effective when engaged early—during system design, vendor selection, and rollout—not after an incident has occurred.
To learn more about our AI offerings and how we advise on the legal and governance considerations surrounding AI deployment, visit our AI resource page.
PRACTICAL TAKEAWAYS FOR COMPANIES
- AI can materially improve workplace safety, but it does not shift or dilute an employer’s legal responsibility for hazard recognition and control
- Increased hazard visibility can expand foreseeability—and with it, regulatory and civil exposure—if risks are identified but not assessed and addressed
- Governance, documentation, and consistency in how AI tools are used matter as much as the technology itself
- AI-generated insights should inform professional judgment and decision-making, not replace it
- Early legal involvement—particularly during system design, vendor selection, and rollout—can significantly reduce downstream enforcement, litigation, and insurance risk
CONCLUSION
AI offers powerful tools to enhance workplace safety, but its use reshapes the legal landscape in ways that require deliberate governance and oversight. Companies that deploy AI thoughtfully—preserving human judgment, documenting decision-making, and anticipating post-incident scrutiny—can improve safety outcomes without increasing legal exposure. The goal is to strike that balance, ensuring that innovation strengthens safety programs rather than undermining their defensibility.
Contacts
If you have any questions or would like more information on the issues discussed in this Insight, please contact any of the following: