FEDERAL DEVELOPMENTS
To E-Verify or Not to E-Verify: Weighing the Pros and Cons
Employers often have questions about whether they should use E-Verify to help determine whether their new hires are authorized to work in the United States. The program – which matches I-9 data with the information in various government databases – is voluntary for most employers but mandatory for federal contractors and some employers in certain states. Ultimately, its goal is to help employers stay compliant with federal employment and immigration regulations. But is E-Verify right for you? Consider these five pros and five cons when deciding whether to incorporate it into your hiring process.
Top 5 Reasons Why Employers Should Consider Using E-Verify
- Electronic Verification: When hiring a new employee, you must complete a Form I-9 and physically examine the employee’s identity and work authorization documents. Notably, however, a new benefit just became available for employers enrolled in E-Verify, allowing them to conduct document verification electronically rather than in person.
- Speed: Employers that use E-Verify receive an initial determination almost immediately regarding a new employee’s authorization to work.
- Good Faith Defense: When an employer confirms the identity and employment eligibility of a newly hired employee using E-Verify procedures, it may rely on the system’s confirmation of that employee’s work-authorized status, creating a presumption of good faith in the hiring process. This serves as an extra layer of protection and confidence when it comes to compliance.
- Easy Integration: E-Verify can be integrated into an employer’s existing onboarding and HR processes and can also be accessed online.
- Government Benefits: Depending on the state, employers may receive state contracts, grants, or incentives for using E-Verify. Additionally, enrollment in E-Verify is a requirement for being awarded certain federal government contracts.
Honorable Mention – Extra Options for Hiring Foreign Nationals: Employers that are enrolled in E-Verify can hire foreign students on F-1 visas for an additional period of two years for Science, Technology, Engineering, and Mathematics (STEM) positions.
Top 5 Reasons Why Employers May Not Want to Use E-Verify
- Drain on Resources: Employers will need to learn how to use E-Verify, stay up to date with ever-changing regulations, and ensure their IT systems can manage the process and keep up with changes.
- False Positives and Negatives: E-Verify is not foolproof, and errors can occur. The system may sometimes flag individuals who are authorized to work (false positives) or fail to identify unauthorized workers (false negatives).
- Privacy Concerns: E-Verify involves the collection and storage of sensitive personal information, such as Social Security numbers. This has raised concerns about privacy and the potential for identity theft or misuse of this information.
- Extra Management: Employers that use E-Verify must consistently use the system for all employees and subject themselves to ICE audits to verify both I-9 and E-Verify compliance. Using E-Verify does not decrease the chance of an I-9 audit.
- Government Shutdown: Employers should consider recent and future threats of government shutdowns, as E-Verify is unavailable while the federal government is not operating. Although E-Verify becomes available again after a shutdown ends, there is a period during which employers cannot submit data.
Click Here for the Original Article
2022 EEO-1 Reporting Period Now Open
Employers and federal contractors must submit workforce demographic reports by December 5, 2023
Employers nationwide should be aware that the long-delayed 2022 EEO-1 reporting period opened October 31, 2023. The deadline for filing 2022 EEO-1 Component 1 data is December 5, 2023, though employers are encouraged to file sooner if possible.
All private-sector employers with 100 or more employees in the U.S., and federal contractors with at least 50 employees, are required to submit an EEO-1 report with workforce demographic data to the EEOC annually. The EEO-1 report discloses data about full-time and part-time employees’ demographic information, such as sex, race, and ethnicity. The 2022 report is based on a workforce payroll snapshot taken in the fourth quarter of the reporting year (i.e., between October 1 and December 31, 2022).
The EEOC has provided resources to assist with the EEO-1 submission process; additional resources can be found at https://www.eeocdata.org/EEO1/.
The 2023 EEO-4 state and local government data collection also opened October 31 and is likewise due December 5, 2023. The EEO-4 is a mandatory biennial data collection requiring all state and local governments with 100 or more employees to submit demographic workforce data, including data by race/ethnicity, sex, job category, and salary band.
Click Here for the Original Article
Form I-9 Alert: New Form and Remote I-9 Documentation Examination Procedures
Starting November 1, 2023, employers are required to use the latest version of Form I-9 (edition date 08/01/23), which United States Citizenship and Immigration Services (USCIS) released earlier this year. A revised Spanish-language Form I-9 is available for use only in Puerto Rico. The new Form I-9 applies to new hires only; employers should not complete new Forms I-9 for current employees.
The new Form I-9 is available on USCIS’ website, along with an updated 8-page set of instructions. USCIS also released an updated Handbook for Employers, a valuable resource for those handling Form I-9 issues.
In addition to the new Form I-9, employers should be aware that, as of August 1, 2023, they may remotely examine employees’ Form I-9 documents if they are enrolled in good standing with E-Verify. This is a welcome change for employers with remote employees and for organizations that moved to remote I-9 document verification during the COVID-19 pandemic. Employers that choose to offer this alternative procedure to new employees at an E-Verify hiring site must do so consistently for all new employees at that site. However, employers may choose to offer this alternative procedure for remote hires only, while continuing to conduct physical in-person examination procedures for employees who work onsite or in a hybrid capacity, provided the distinction is not made for a discriminatory purpose.
Remote Document Examination Procedures for E-Verify Employers:
- Instruct the employee to transmit a copy of the document(s) for review.
- Examine copies of Form I-9 documents or an acceptable receipt to ensure that the documentation presented reasonably appears to be genuine and relates to the employee.
- Conduct a live video interaction with the employee presenting the document(s) to ensure that the documentation reasonably appears to be genuine and relates to the employee. The employee must present the same documents during the live video interaction that were previously transmitted for review.
- Retain a clear and legible copy of the documentation.
- On Form I-9, check the box in the Additional Information field in Section 2 to indicate that you used an alternative procedure.
- If completing the remote documentation examination for a rehire or reverification, check the box on Form I-9 in Supplement B.
While the basic I-9 requirements remain the same, the new Form I-9 and instructions include several notable changes.
Form Changes:
- Reduced Sections 1 and 2 to a single sheet. No previous fields were removed. Multiple fields were merged into fewer fields when possible, such as in the employer certification.
- Moved the Section 1 Preparer/Translator Certification area to a separate Supplement A that employers can use when necessary. This supplement provides three areas for current and future preparers and translators to complete as needed.
- Moved Section 3 Reverification and Rehire to a standalone Supplement B that employers can use for rehire or reverification. This supplement provides four areas for current and subsequent reverifications. Employers may attach additional supplements as needed.
- Removed use of “alien authorized to work” in Section 1, replaced it with “noncitizen authorized to work” and clarified the difference between “noncitizen national” and “noncitizen authorized to work.”
- Ensured the form can be filled out on tablets and mobile devices by downloading onto the device and opening in the free Adobe Acrobat Reader app.
- Removed certain features to ensure the form can be downloaded easily. This also removes the requirement to enter N/A in certain fields.
- Improved guidance to the Lists of Acceptable Documents to include some acceptable receipts, as well as guidance and links to information on automatic extensions of employment authorization documentation.
- Added a checkbox for E-Verify employers to indicate when they have remotely examined Form I-9 documents.
Form I-9 Instruction Changes:
- Reduced length from 15 pages to 8 pages.
- Added definitions of key actors in the Form I-9 process.
- Streamlined the steps each actor takes to complete their section of the form.
- Added instructions for the new checkbox to indicate when Form I-9 documents were remotely examined.
- Removed the abbreviations charts and relocated them to the M-274 (Handbook for Employers).
Given the new form and verification procedures, employers should use particular care when handling Form I-9 and other immigration-related employment processes and practices. Annual internal Form I-9 audits and periodic third party Form I-9 audits by qualified professionals are recommended. Employers not currently enrolled in E-Verify should also evaluate whether program use may be beneficial.
Click Here for the Original Article
FTC Adds Data Breach Notification Requirement to Safeguards Rule
The Federal Trade Commission (FTC or Commission) has amended its Standards for Safeguarding Customer Information, commonly known as the “Safeguards Rule,” to require non-bank financial institutions to report certain data breaches to the Commission. The amended Safeguards Rule requires covered “financial institutions” to report “notification events” affecting 500 or more consumers to the FTC “as soon as possible, and no later than 30 days after discovery” (the “Notification Requirement”). A “notification event” is defined as the “acquisition of unencrypted customer information without the authorization of the individual to which the information pertains.” The FTC intends to make the notices it receives public, although financial institutions may request that public disclosure be delayed for law enforcement purposes.
The amendments go into effect 180 days after they are published in the Federal Register, meaning that covered financial institutions likely will be required to begin reporting notification events starting in Q2 2024. The amendments do not include any requirement to notify affected individuals of a data breach.
Financial institutions covered by the Safeguards Rule (and therefore the Notification Requirement) include neobanks, alternative lenders, money transmitters, retailers that extend credit to customers, mortgage brokers, certain investment advisors, and numerous other types of entities providing financial products or services. The U.S. Department of Education also requires institutions of higher education participating in certain federal student aid programs, as well as their third-party servicers, to comply with the Safeguards Rule.
We summarize the Notification Requirement and propose various compliance measures below.
Background
The FTC issued the first version of the Safeguards Rule in 2002 pursuant to the Gramm-Leach-Bliley Act (GLBA). Under GLBA, various federal agencies including the FTC, the U.S. Securities and Exchange Commission, the federal banking regulators—the Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, and the Federal Reserve Board—and the National Credit Union Administration, are required to issue standards for the security of customer information for financial institutions subject to each agency’s jurisdiction.[1]
The first version of the Safeguards Rule imposed relatively high-level requirements on covered institutions to implement a written information security program, including designating a qualified individual to lead the program, identifying information security risks, implementing and testing safeguards in response to those risks, overseeing service providers, and periodically adjusting the program based on changes to the business and other circumstances. In December 2021, the FTC overhauled the Safeguards Rule by expanding the existing requirements and enumerating new, more detailed ones. Under the current Safeguards Rule, which we discussed in a prior blog post and webinar, institutions must adopt various safeguards, including encrypting customer information in transit and at rest, multifactor authentication, secure software development and assessment measures, and annual written reports to the board of directors (or other governing body) regarding the institution’s information security program and material security risks, among others.
The FTC’s overhauled Safeguards Rule did not include any breach notification requirement. However, on the same day the FTC published the new Safeguards Rule, December 9, 2021, it also issued a Supplemental Notice of Proposed Rulemaking (SNPRM) to amend the Safeguards Rule to add breach notification.[2] The FTC issued the Notification Requirement in a final rule published on October 27, 2023 (the “Final Rule”).
The FTC published the Final Rule shortly after the release by the Consumer Financial Protection Bureau (CFPB) of its proposed “Personal Financial Data Rights” rule under Section 1033 of the Consumer Financial Protection Act of 2010. The CFPB’s proposed rule would require data providers and third parties not otherwise subject to GLBA to comply with the FTC’s Safeguards Rule (we discuss the CFPB’s proposal here), now including the Notification Requirement.
Covered Information
The Notification Requirement dramatically expands covered financial institutions’ breach reporting obligations because of the range of data covered. The Notification Requirement applies to “customer information,” which is broadly defined in the Safeguards Rule as records containing “nonpublic personal information about a customer of a financial institution.” Nonpublic personal information is (i) personally identifiable financial information[3] and (ii) “[a]ny list, description, or other grouping of consumers (and publicly available information pertaining to them) that is derived using any personally identifiable financial information that is not publicly available.” Customer information may include a broad array of data, from more sensitive types of data such as Social Security numbers, detailed financial and purchase histories, and account access information, to relatively routine and benign data, such as basic customer demographics and contact details.
Under state data breach reporting laws, companies are required to report breaches of only enumerated categories of data, such as Social Security numbers and other government-issued ID numbers, financial account numbers in combination with access credentials, usernames and passwords, and medical information. But given the broad definition of customer information under the Safeguards Rule, covered financial institutions will have to assess their breach reporting obligations for a much larger set of data than they typically do now.[4]
At the same time, it is important to note that the Safeguards Rule, and therefore the Notification Requirement, does not apply to information about “consumers” who are not “customers.” Under the Safeguards Rule, a “consumer” is any individual that “obtains a financial product or service from a financial institution to be used for a personal, family, or household purpose.” A “customer” is a type of consumer: specifically, a consumer with which the financial institution has a “customer relationship,” defined as a “continuing relationship” between the institution and customer under which the institution provides a financial product or service. No customer relationship may exist, for example, where a consumer engages in only “isolated transactions” with the institution, such as by purchasing a money order or making a wire transfer. The Notification Requirement applies only to customer information, and therefore is not triggered by a breach affecting only consumers who are not customers.
Covered Incidents
A “notification event” is defined as “acquisition of unencrypted customer information without the authorization of the individual to which the information pertains (emphasis added).” This definition raises several points for consideration:
- Acquisition: The Notification Requirement is triggered by unauthorized “acquisition” and includes a rebuttable presumption that unauthorized “access” is unauthorized acquisition unless the institution has “reliable evidence” showing that acquisition could not reasonably have occurred. On the surface, the Notification Requirement takes a sort of middle approach vis-à-vis state data breach notification laws: under most state laws, personal data must be acquired to trigger notification obligations, but a small and growing number of states require notification where personal data has only been accessed.[5] However, it is important to note that the FTC has a very broad view of those terms. The FTC describes “acquisition” as “the actual viewing or reading of the data,” even if the data is not copied or downloaded, and “access” as merely “the opportunity to view the data”[6] (emphasis added). Based on the FTC’s reading of those terms, the rebuttable presumption may only be available if an institution has reliable evidence that unauthorized actors did not actually view customer information—even if they had the opportunity to do so.
- Unencrypted: The Notification Requirement treats encrypted data much like state data breach notification laws do. Institutions need not report acquisitions of encrypted data; however, encrypted data is considered unencrypted for the purposes of the Notification Requirement if the encryption key was accessed by an unauthorized person.
- Without Authorization of the Individual to Which the Information Pertains: Typically, when breach notification laws refer to acquisition of data being unauthorized, it is understood that they are referring to whether the acquisition was authorized by the entity that owns the data, not whether it was authorized by the individual who is the subject of the data. By specifying that a notification event occurs when acquisition was unauthorized by the individual data subject, the Notification Requirement potentially encompasses a broader range of incidents than state data breach notification laws. If, for example, a financial institution’s employee uses customer information for a purpose that is authorized by the institution but inconsistent with the institution’s privacy statement or customer agreement, one could argue that the use is acquisition not authorized by the consumer. Whether the FTC would take that novel position remains to be seen. Notably, the FTC’s Health Breach Notification Rule (HBNR) includes similar language in its definition of “breach of security,”[7] and the FTC has taken the position that the HBNR applies to disclosures authorized by the company holding the data but not by the data subject.
Notification Obligation
Financial institutions must notify the FTC “as soon as possible, and no later than 30 days after discovery” of a notification event involving at least 500 consumers. Although not clear from the text of the amendments, the FTC appears to take the position that the 30-day period begins to run when an institution discovers that a notification event has occurred, and not when it discovers specifically that the notification event affects 500 or more consumers. The FTC dismissed concerns that a financial institution may not know how many consumers were affected, or other key information such as whether information was only accessed without acquisition, at the time it discovers a data breach, stating that it expects financial institutions “will be able to decide quickly whether a notification event has occurred.” Where it is difficult to ascertain how many consumers may have been affected—for example, where a data breach affected unstructured data containing an unknown amount of consumer data—institutions may face significant time pressures to meet the 30-day reporting requirement.
The Notification Requirement does not include any “risk of harm” analysis or threshold. Under the SNPRM, financial institutions would have been required to notify the FTC only where “misuse” of customer information had occurred or was “reasonably likely” to occur. The final version of the Notification Requirement removes the misuse language and simply requires notification upon discovery that customer information has been “acquired” without authorization.
The Notification Requirement is surprisingly silent on financial institutions’ obligations when data breaches occur at their service providers.[8] A financial institution is considered to have discovered a notification incident “if such event is known to any person, other than the person committing the breach, who is [the institution’s] employee, officer, or other agent.” This language indicates that financial institutions are not considered to have knowledge of a notification event that occurred at a service provider (which would not typically be considered the financial institution’s “agent”) until the service provider makes the institution aware of the event. Although there is no specific requirement that institutions obligate their vendors to notify them of security incidents, the Safeguards Rule does require institutions to oversee their service providers, including by entering into contracts requiring service providers to maintain appropriate security safeguards for customer information. The FTC may take the position that financial institutions must require their service providers to report notification events to them under these broader service provider oversight obligations. Additionally, the FTC might argue that because customer information is defined to include information “that is handled or maintained by or on behalf of” a financial institution, institutions’ responsibility for third-party notification events is assumed.
Report Requirements and Publication
Notifications to the FTC, which must be submitted via electronic form on the FTC website, must include the following information:
- The name and contact information of the reporting financial institution;
- A description of the types of information that were involved in the notification event;
- If it is possible to determine, the date or date range of the notification event;
- The number of consumers affected;
- A general description of the notification event; and
- If applicable, whether any law enforcement official has provided the financial institution with a written determination that notifying the public of the breach would impede a criminal investigation or cause damage to national security, and a means for the Federal Trade Commission to contact the law enforcement official. A law enforcement official may request a delay in publication of the report for up to 30 days. The delay may be extended for an additional 60 days in response to a written request from the law enforcement official. Any further delay is only permitted if the FTC staff “determines that public disclosure of a security event continues to impede a criminal investigation or cause damage to national security.”
The FTC intends to make the reports it receives publicly available on its website. Financial institutions should take note that plaintiffs’ attorneys are likely to monitor these postings (as they do with public postings of data breach reports by various state attorneys general and the Department of Health and Human Services Office for Civil Rights) and may use them as a basis for commencing consumer class actions.
Preparing for Compliance
Financial institutions subject to the Safeguards Rule are advised to consider the following steps for preparing to comply with the Notification Requirement:
- Assess Safeguards Rule Compliance and Address Gaps Now: The FTC issued the Notification Requirement to support its enforcement efforts.[9] The FTC intends to review breach reports and assess whether a breach may have been the result of an institution’s failure to comply with the Safeguards Rule’s technical, administrative, and physical safeguards. Institutions should prepare for this increased scrutiny by assessing and remedying any compliance gaps with the Safeguards Rule. The FTC acknowledges that a breach may occur even if an institution fully complies with the Safeguards Rule, so institutions should be prepared to show the FTC that the notification incident occurred notwithstanding their compliance with the rule.[10]
- Review and Update Incident Response Plans. As discussed above, the Notification Requirement dramatically expands covered financial institutions’ breach reporting obligations: rather than assessing only the enumerated data categories covered by state data breach reporting laws, institutions must assess reporting obligations for the much broader universe of “customer information.” Institutions should update their incident response plans to address these expanded obligations and educate their incident response teams about them. Institutions also should determine who will be responsible for submitting any required report to the FTC. Reports should be reviewed by counsel prior to submission, given that they may form the basis for FTC enforcement or consumer class actions.[11]
- Revise Any Data Maps, Information Classification Schemes and Similar Documentation. Financial institutions also should review their data maps, data inventories, information classification schemes, and similar data management documentation to ensure that they properly address the many types of records that may be considered “customer information” containing “non-public personal information” subject to the Notification Requirement. Doing so will help financial institutions more quickly assess the impact of a security incident and determine whether it is a “notification event” under the amended Safeguards Rule (for example, by informing them of whether customer information may be present on a compromised system). Quick assessment will be important given the 30-day notification deadline, and that the FTC appears not to distinguish between when an institution becomes aware of a notification event and when it determines that the event triggers the reporting obligation.
- Assess and Amend Service Provider Agreements. Although there is no specific requirement in the Safeguards Rule that institutions obligate their service providers to notify them of notification events, the FTC may argue that such an obligation is assumed by the Safeguards Rule provisions. Accordingly, financial institutions should review their relevant service provider agreements and determine whether any amendments are necessary to support their compliance with the Notification Requirement.
Click Here for the Original Article
STATE, CITY, COUNTY AND MUNICIPAL DEVELOPMENTS
U.S. State Privacy Impact Assessment (PIA/DPIA) Requirements
With the passage of numerous comprehensive state laws, many U.S. companies are now subject to a formal requirement to complete a Privacy Impact Assessment (“PIA”). While the various state and international PIA requirements may seem daunting, it is possible to align an organization’s PIA process to the most nuanced laws and achieve a baseline founded on the consistency across the states.
Below are the core concepts that you should be familiar with. See Kilpatrick Townsend’s recent Legal Alert for the answers to some commonly asked questions and practical suggestions for approaching the PIA requirements landscape.
Core Concepts/Key Information At a Glance
- Many states follow a “baseline” model which provides that PIAs are generally required before processing personal data in a manner that presents a heightened risk of harm to consumers.
- “PIA” is a broad term for privacy evaluations that also covers more targeted assessments, such as GDPR or GDPR-style data protection impact assessments (DPIAs). U.S. state laws often refer to PIAs as data protection assessments. PIAs are a means of documenting details around personal data use cases / processing activities and are essentially risk/benefit analyses.
- Heightened risk of harm generally includes (but is not limited to) activities involving targeted advertising, profiling, sale of personal data, and handling sensitive personal data.
- Colorado has documented a set of detailed PIA requirements via regulation, and California is expected to finalize a set of detailed requirements for privacy risk assessments very soon.
- For U.S.-based companies, model the overall PIA process on the “baseline states”. Focus on the common factors triggering PIAs. Layer on CA and CO specific requirements where applicable. If the company plans to expand globally, be sure to include questions about the jurisdictions in which it will be operating.
- Identify additional likely candidates for “high-risk” / “heightened risk” processing based on what the organization does (e.g., the company’s business model, data handling, industry, etc.).
- If the company also has GDPR or other global exposure and an established GDPR PIA/DPIA template in place, build in screening questions to see if additional assessments/questions are needed for the U.S. states.
- Include or be prepared to include questions related to AI and automated decision-making technology (ADMT).
- Continue to monitor for developments in the U.S. state privacy arena, as well as municipal-level or topic-specific requirements.
Click Here for the Original Article
Pay Transparency Law on the Horizon for Massachusetts Employers
Massachusetts is on track to join the growing number of jurisdictions, among them California (as we discuss here), Colorado, New York (discussed here), and Washington, in passing wage transparency legislation. While some employers have already taken a detailed and systematic approach to salary transparency to comply with similar laws in other jurisdictions, this would be new territory for many Massachusetts-based employers.
This legislation, if enacted into law, is consistent with the Commonwealth’s commitment to promoting pay equity across the Massachusetts workforce – a commitment that dates to 2016, when Massachusetts became the first state to prohibit employers from inquiring into an applicant’s pay history.
In light of this legislation, employers should consider conducting an audit across their workforce to determine whether any gaps in pay equity exist, as certain Massachusetts employers may soon be required to disclose pay ranges for all advertised job postings in the Commonwealth, as well as have to submit wage data reports aimed at eliminating gender and racial pay disparities. We summarize the Bill in additional detail below.
+++
Summary:
The House Bill (Bill H.4109), An Act Relative to Salary Transparency, and the Senate Bill (Bill S.2468) are similar but have some differences, such as the process for annual wage data reporting (i.e., whether by web portal, email submission, paper forms, or other means), that the legislature must resolve before a bill is sent to the Governor’s desk for signature. Regardless, if the legislation is signed into law as anticipated, Massachusetts employers should expect the following to come into effect:
- Pay Transparency in Job Postings: Employers with 25+ employees in Massachusetts must post the pay range in internal and external job postings (including recruitment for new hires via a third party). Pay range is defined as the annual salary or hourly wage range the employer “reasonably” and “in good faith” expects to pay for the position. Crafting a position’s reasonable pay range is just one of many potential issues a covered employer must consider when complying with pay transparency legislation, which we discuss in more detail here. Notably, neither the House Bill nor the Senate Bill requires employers to disclose other compensation, including bonuses, commissions or other employee benefits for advertised positions.
- Annual Wage Data Reporting: Employers with 100+ full-time employees in Massachusetts at any time during the preceding year, and who are subject to the federal filing requirements of wage data reports (EEO-1, EEO-3, EEO-4 or EEO-5), must submit a copy of their federal filings to the state secretary. This data, which reflects workforce demographics and salaries, will then be submitted to the executive office of labor and workforce development.
As outlined in both the House Bill and Senate Bill, the Massachusetts attorney general would have “exclusive jurisdiction” to enforce this law, meaning that employees and applicants are not afforded a private right of action in civil court. The House and Senate must now reconcile the differences between the two bills before a unified bill lands on Governor Healey’s desk to review and sign into law. If signed, it is anticipated that the law will take effect one year after it is officially enacted.
+++
As mentioned above, employers should consider proactively conducting an audit across their workforce to determine whether any gaps in pay equity exist. Gaps in wages for similarly situated employees must be supported by objective measures including, for example, employee location, credentials and qualifications, seniority, and experience.
Click Here for the Original Article
Ohio Approves Legalization of Recreational Use of Marijuana by Adults
On November 7, 2023, voters in Ohio approved an indirect initiated state statute titled “Ohio Issue 2, Marijuana Legalization Initiative (2023)” that legalizes the recreational use of marijuana by adults aged 21 and above in the state, according to the Ohio Issue 2 web page on Ballotpedia.com. The passage of Ohio Issue 2:
- Allows adults who are at least 21 years old to use and possess marijuana, including up to 2.5 ounces of marijuana;
- Allows the sale and purchase of marijuana, which a new Division of Cannabis Control would regulate; and
- Enacts a 10 percent tax on marijuana sales.
“Approval of Issue 2 made Ohio the 24th state to legalize marijuana for recreational and personal use. As of October 2023, twenty-three other states and Washington, D.C. had legalized marijuana through a mix of citizen initiatives, legislative referrals to the ballot, and bills enacted into law,” Ballotpedia.com reports.
“According to U.S. Census population estimates, going into the election, 49.07% of the country’s population lived in a state where marijuana is legal. Approval of the initiative boosted the population percentage past the 50% threshold to 52.56%,” Ballotpedia.com reports. Ohio legalized medical marijuana in 2016.
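As a rough sanity check on the quoted figures (assuming, as an approximation not stated in the article, that Ohio accounts for roughly 3.5 percent of the U.S. population):

```latex
% Share of the U.S. population living in a state with legal recreational marijuana
49.07\% \;+\; \underbrace{3.49\%}_{\text{Ohio's approximate population share}} \;\approx\; 52.56\%
```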
Click Here for the Original Article
California Enacts Further Protections for Marijuana-Using Workers and Job Applicants
Passed in 2022 and effective January 1, 2024, Assembly Bill 2188 creates Government Code section 12954, which makes it unlawful for an employer to discriminate against a person in hiring, termination, or any term or condition of employment, or otherwise to penalize a person, for either:
(1) The person’s use of cannabis off the job and away from the workplace; or
(2) An employer-required drug screening test that has found the person to have non-psychoactive cannabis metabolites in their hair, blood, urine, or other bodily fluids.
Now Senate Bill 700, which will also become effective on January 1, 2024, amends Government Code section 12954 to make it unlawful for an employer to request information from an applicant for employment relating to the applicant’s prior use of cannabis. Thus, any questions about prior marijuana use must be omitted from job applications and job interviews. Information about a person’s prior cannabis use obtained from the person’s criminal conviction history is exempt from the new law.
Government Code section 12954 is not intended to permit employees to possess or use marijuana on the job, nor will it affect the rights of employers to maintain a drug- and alcohol-free workplace. Rather, the focus of 12954 is on tetrahydrocannabinol (THC) and impairment of the individual. THC is the chemical compound in cannabis that causes impairment and psychoactive effects. After THC is metabolized, it is stored in the body as a non-psychoactive cannabis metabolite. These metabolites do not indicate that an individual is impaired, but only reveal whether an individual has consumed cannabis recently.
Based on the distinction between THC and metabolites, the legislation will not prohibit an employer from making employment decisions “based on scientifically valid preemployment drug screening conducted through methods that do not screen for non-psychoactive cannabis metabolites.” Thus, a drug test for THC that does not rely on the presence of non-psychoactive cannabis metabolites can be used, as can impairment tests that measure a person against their own baseline performance.
Government Code section 12954 will not apply to an employee in the building and construction trades, and will not apply to applicants or employees hired for positions that require a federal government background investigation or security clearance. Further, the law will not preempt state or federal laws requiring applicants or employees to be tested for controlled substances, including laws relating to receiving federal funding or federal licensing-related benefits, or entering into federal contracts.
Click Here for the Original Article
New York Governor Signs Clean Slate Law to Seal Older Criminal Convictions
On November 16, 2023, New York Governor Kathy Hochul signed a bill into law requiring records of certain past criminal convictions to be sealed. The legislation is intended in part to prevent discrimination in hiring against previously incarcerated individuals who have satisfied their sentences.
Quick Hits
- The Clean Slate Act calls for eligible misdemeanor convictions to be sealed after three years from an individual’s satisfaction of a sentence and eligible felony convictions to be sealed after eight years from an individual’s satisfaction of a sentence.
- The New York State Human Rights Law has been amended to prohibit discrimination based on a sealed conviction, subject to limited exceptions.
- The law will likely have an impact on employer background checks and hiring practices.
The Clean Slate Act
The signing of the Clean Slate Act, Assembly Bill A01029C and Senate Bill S07551-A, comes months after the bill passed the state legislature in June 2023.
Under A01029C/S07551-A, eligible misdemeanor convictions will be sealed “at least three years” after—or, for felony convictions, “at least eight years” after—an individual’s “release from incarceration or the imposition of sentence if there was no sentence of incarceration.”
If an individual is convicted of another crime before a prior felony conviction is sealed, the clock for sealing the prior conviction restarts and runs from the subsequent conviction. Sealing will not apply to sex offenses or Class A felonies under New York penal law, such as aggravated murder, except for certain bribery offenses. The law also contains exceptions that provide that sealed convictions may be accessible where “relevant and necessary”—for example, in connection with state and federal laws requiring criminal background checks for licenses and certain employment with responsibility for the safety and well-being of children or adolescents, elderly individuals, individuals with disabilities, or other vulnerable populations.
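For illustration only, here is a minimal sketch of how an HR or background-screening system might estimate the earliest sealing-eligibility date under the timing rules summarized above; the function name, fields, and simplified handling of exclusions are assumptions for this sketch, not the statute’s actual mechanics:

```python
from datetime import date

# Waiting periods under the Clean Slate Act as summarized above: measured from
# release from incarceration, or from sentencing if there was no incarceration.
WAIT_YEARS = {"misdemeanor": 3, "felony": 8}

def _add_years(d: date, years: int) -> date:
    try:
        return d.replace(year=d.year + years)
    except ValueError:  # clock started on Feb 29; roll to March 1
        return d.replace(year=d.year + years, month=3, day=1)

def earliest_seal_date(conviction_type: str,
                       clock_start: date,
                       excluded: bool = False,
                       subsequent_conviction: date | None = None) -> date | None:
    """Estimate the earliest date an eligible New York conviction could be sealed.

    clock_start: release from incarceration, or imposition of sentence if none.
    excluded: True for sex offenses and most Class A felonies, which do not seal.
    subsequent_conviction: a later conviction restarts the waiting period.
    """
    if excluded or conviction_type not in WAIT_YEARS:
        return None
    start = max(clock_start, subsequent_conviction) if subsequent_conviction else clock_start
    return _add_years(start, WAIT_YEARS[conviction_type])

# Example: misdemeanor sentence imposed June 1, 2024, with no incarceration
print(earliest_seal_date("misdemeanor", date(2024, 6, 1)))  # 2027-06-01
```

Actual eligibility would, of course, turn on the statute and the court system’s implementation processes rather than any simplified calculation like this.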
Further, the law also permits a private right of action by any individual who had a conviction sealed, allowing the individual to collect damages against a person who disclosed the sealed conviction where:
- there was a duty of care owed to the individual with the sealed conviction;
- the person knowingly and willfully breached such duty;
- the disclosure caused injury to the individual; and
- the “breach of that duty was a substantial factor in the events that caused the injury suffered by such person.”
The law takes effect one year after its signing, after which time the New York State Office of Court Administration will have up to three years to implement processes to identify and seal eligible convictions.
Implications for Employers
The purpose of the law is to prevent discrimination against individuals with certain criminal histories, and it appears to reinforce New York State’s commitment to guarding against such discrimination in employment opportunities.
The law could have an impact on employer background checks and hiring practices, and it could limit the usefulness of background checks. New York is one of as many as a dozen states with similar clean slate laws. The law also could have the effect of expanding the labor pool for employers.
New York City’s Fair Chance Act, or “ban the box law,” which was amended in July 2021, and its regulations already restrict employers from taking adverse employment action against job applicants based on applicants’ arrest or criminal conviction histories.
Next Steps
Employers in New York may want to review their current background check practices and assess the extent to which the new Clean Slate Act will impact their hiring processes.
Click Here for the Original Article
COURT CASES
Boston settles drug testing discrimination suit for $2.6M
Boston has settled a decades-old lawsuit over discriminatory hair drug testing for $2.6 million. The test at the heart of the lawsuit was one employed by the Boston Police Department to detect the presence of controlled substances in hair follicles, which the plaintiffs in the nearly 20-year-old lawsuit argued came back with disproportionate numbers of false positives for Black people.
Experts in the case testified that the test could not reliably distinguish whether drug remnants found in hair were the result of ingestion — which would be the point of the testing — or of exterior contamination. The disproportionate number of Black officers returning false positives, experts argued, stemmed from the unique texture of their hair as well as commonly used grooming products, both of which made external contamination more likely.
“This settlement puts an end to a long, ugly chapter in Boston’s history,” said Oren Sellstrom, Litigation Director at Lawyers for Civil Rights, one of the two firms who represented the Black police officer plaintiffs, in an emailed statement. “As a result of this flawed test, our clients’ lives and careers were completely derailed. The City has finally compensated them for this grave injustice.”
Lawyers for Civil Rights, which says it has represented the plaintiffs in the case since the beginning, announced the settlement Thursday. The settlement will pay out an equal portion of the money — $650,000 — to each of the four plaintiffs.
The law firm WilmerHale also represented the plaintiffs on a pro bono basis.
The test, which has been administered since at least 1999 according to prior Herald reporting, was administered by Acton-based Psychemedics, which has also been involved in lawsuits with the city. Herald efforts to reach a company representative for comment Thursday were unsuccessful.
Mayor Michelle Wu said “this settlement marks the end to an important process to guarantee that every officer is treated fairly.
“Under (Police) Commissioner Michael Cox’s leadership, we are strengthening our entire department by building more trust within the department and with (the) community and supporting a workforce that reflects the communities we serve,” her statement continued.
The settlement nearly doubles the amount of money the plaintiffs’ lawyers say the city has shelled out already in fighting the various lawsuits against the controversial test, as they say the city has spent some $2.1 million in legal fees.
The settlement was also warmly received by the police unions the Massachusetts Association of Minority Law Enforcement Officers and the Boston Police Patrolmen’s Association.
“The hair test not only wreaked havoc on the lives of many Black officers, it also deprived Boston residents of exemplary police officers,” Jeffrey Lopes, MAMLEO president, said. “The City is still trying to make up for the loss of diversity on the police force that resulted from use of the hair test.”
Likewise, BPPA President Larry Calderone said his organization “couldn’t be happier with the decision. Thankfully, the award gives closure and vindication to police officers who pushed back and helped pave the way to eliminate a faulty hair follicle drug testing procedure.”
Click Here for the Original Article
INTERNATIONAL DEVELOPMENTS
Canada: Salary or wage ranges must be included in publicly advertised jobs in British Columbia
As of 1 November 2023, all employers in British Columbia must specify the expected salary or wage range for all publicly advertised job opportunities. The government of British Columbia recently published a guidance document clarifying this requirement (the Guidelines).
In a previous post, we explained that the British Columbia Pay Transparency Act, S.B.C. 2023, c.18 (PTA) received Royal Assent on 11 May 2023. The PTA was introduced to help address inequalities and close the gender pay gap between men and women in British Columbia.
While the Guidelines do not have the same legal force as the PTA itself or any regulations that are published under the PTA in the future, the Guidelines are nevertheless a helpful clarification and insight into what will be expected of employers with respect to publishing salary or wage information.
The key takeaways for employers from the Guidelines are:
- Employers are only required to include an employee’s expected base salary or wage in a job posting. Employers can voluntarily include additional details beyond the base salary or wage such as bonuses, benefits, commission, tips or overtime pay.
- The salary or wage range must have a specified minimum and maximum. For example, an employer would not be compliant with the PTA if the job advertisement described the salary or wage range as ‘up to CAD20 per hour’ or ‘CAD20 per hour and up’. The examples provided of acceptable ranges include ‘CAD20–CAD30 per hour’ or ‘CAD40,000–CAD60,000 per year’. Currently, there are no guidelines as to how large the expected salary or wage range can be in a job advertisement, although limits may be set out by regulations in the future. (A minimal validation sketch illustrating this minimum-and-maximum requirement appears after this list.)
- Employers and applicants are not restricted by the expected salary or wage range advertised. Applicants can request a higher salary or wage than advertised. Similarly, employers can agree to pay a higher salary or wage than what was publicly advertised.
- The requirement to publish salary or wage information under the PTA applies to jobs advertised in jurisdictions outside of British Columbia as well, so long as the job in question is open to British Columbia residents and can be filled by someone living in British Columbia, either in-person or remotely.
- The requirement to publish salary or wage information applies to jobs posted by third parties on job search websites, job boards and other recruitment platforms on behalf of the employer.
- The requirement to publish salary or wage information does not apply to general ‘help wanted’ posters and recruitment campaigns that do not mention specific job opportunities, or to job postings that are not posted publicly.
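As flagged above, here is a minimal illustrative sketch of how a posting tool might check that an advertised range states both a minimum and a maximum; the regular expression, currency formats, and function name are assumptions for illustration, not anything prescribed by the PTA or the Guidelines:

```python
import re

# Matches two currency amounts joined by a dash, e.g. "CAD20-CAD30 per hour"
# or "CAD40,000 - CAD60,000 per year" (assumed formats, for illustration only).
RANGE_PATTERN = re.compile(
    r"CAD\s?(?P<low>[\d,]+(?:\.\d+)?)\s*[-–]\s*(?:CAD\s?)?(?P<high>[\d,]+(?:\.\d+)?)"
)

def has_compliant_range(posting_text: str) -> bool:
    """Return True if the text appears to state both a minimum and a maximum."""
    match = RANGE_PATTERN.search(posting_text)
    if not match:
        return False  # open-ended phrases like "up to CAD20 per hour" won't match
    low = float(match.group("low").replace(",", ""))
    high = float(match.group("high").replace(",", ""))
    return low < high

print(has_compliant_range("CAD20-CAD30 per hour"))           # True
print(has_compliant_range("up to CAD20 per hour"))            # False
print(has_compliant_range("CAD40,000 - CAD60,000 per year"))  # True
```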
Employers should familiarise themselves with the new obligations around job postings and ensure that their hiring practices and policies comply. Further information regarding other requirements implemented by the PTA can be found here.
Click Here for the Original Article
UK Extension to the EU-U.S. Data Privacy Framework takes effect
The UK Extension to the EU-U.S. Data Privacy Framework takes effect
On 12 October 2023, the UK Extension to the EU-U.S. Data Privacy Framework (UK Extension) – also known as the UK-US Data Bridge – took effect, paving the way for transatlantic personal data transfers from the United Kingdom (UK) to the United States (US). The UK Extension permits the flow of personal data from the UK to the US without the need for further safeguards.
What is the UK Extension and how does it relate to the EU-US Data Privacy Framework (DPF)?
Under Chapter V of the General Data Protection Regulation (GDPR), transfers of personal data outside the European Economic Area (EEA), which includes the European Union (EU), Norway, Iceland, and Liechtenstein, are prohibited unless the intended destination offers an ‘adequate level of protection’ of personal data compared to EU law (Article 45 GDPR). Alternatively, such transfers can proceed if certain appropriate safeguards are in place (Article 46 GDPR), or if specific derogations apply (Article 49 GDPR).
After the UK’s withdrawal from the EU, the UK retained the provisions of the GDPR, known as the UK GDPR, along with all EEA adequacy decisions in effect up to that point.
On 10 July 2023, the European Commission adopted an adequacy decision for transatlantic transfers under the terms of the DPF (see our Client Alert of 7 August 2023). The DPF applies to transfers of personal data from the EEA to the US. However, as the DPF was adopted after Brexit, it does not apply to transfers originating from the UK.
Therefore, the UK needed to create its own transfer mechanism with the US. After an extensive analysis of relevant US law, the UK approved the UK Extension. Technically, the UK Extension functions as a territorial extension of the EU-US DPF, meaning that transfers of personal data from the UK to the US will be carried out under similar conditions to those coming from the EEA.
The UK Extension allows UK data subjects, whose personal data has been transferred to the US, to enjoy guarantees essentially equivalent to the fundamental rights offered to EEA data subjects. This mechanism relies on changes in US law, which require enforcement authorities to limit their access to the personal data transferred for national security purposes. The UK was designated as a qualifying state under US Executive Order 14086, and therefore, similar to their EEA counterparts, UK-based data subjects may access the US Data Protection Review Court (DPRC), established for data subjects to enforce their rights.
What do businesses transferring data from the UK need to know?
Before transferring personal data to the US, a UK-based exporter must confirm that the US-based recipient has self-certified under the DPF and signed up to the UK Extension. This can be done through a search of the DPF List.
To qualify for self-certification under the DPF (and the UK Extension), US businesses must be subject to the jurisdiction of the Federal Trade Commission (FTC) or the Department of Transportation (DoT). As a result, some sectors, such as banking, and certain categories of data, such as personal data gathered for journalistic purposes, currently do not qualify for self-certification under the DPF.
In addition, in its review of the UK Extension, the UK’s Information Commissioner’s Office (ICO), flagged a few concerns, such as a narrower scope of the concept of “sensitive information”. As a result, various types of sensitive information are excluded from the UK Extension’s protection. To remedy that, the UK Government clarified that such information “must be appropriately identified as sensitive to US organisations when transferred under the UK-US data bridge to ensure it receives appropriate protections”.
If a UK-based organisation cannot rely on the UK Extension, it can opt instead for one of the pre-existing appropriate safeguards, such as the UK International Data Transfer Agreement, or the UK Addendum to the EU’s Standard Contractual Clauses (SCCs). Alternatively, and in specific cases, UK exporters may be able to rely on the derogations under Article 49 UK GDPR for international data transfers. However, with these methods, UK-based exporters may still be required to carry out a transfer impact (risk) assessment (TIA, TRA).
Click Here for the Original Article
MISCELLANEOUS DEVELOPMENTS
Automation & Employment Discrimination
Employers are increasingly using some form of Artificial Intelligence (“AI”) in employment processes and decisions. Per the Equal Employment Opportunity Commission (“EEOC”), examples include: “[R]esume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; ‘virtual assistants’ or ‘chatbots’ that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and testing software that provides ‘job fit’ scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived ‘cultural fit’ based on their performance on a game or on a more traditional test.”
EEOC-NVTA-2023-2, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, Equal Emp’t Opportunity Comm’n, (issued May 18, 2023) (last visited November 17, 2023).
As the agency tasked with enforcing, and promulgating regulations regarding, federal antidiscrimination laws, the EEOC is concerned about how AI could result in discrimination in the workplace. While AI can be present throughout the work lifecycle, from job posting through termination, the EEOC has devoted particular attention to applicant and application sorting and recommendation procedures. If not executed and monitored correctly, the use of AI in these processes could result in discriminatory impact (a.k.a. disparate impact) on certain protected classes. These claims arise when employers use facially neutral tools or policies whose application nonetheless results in a disproportionate adverse impact on a particular protected class.
For example, if an application sorting system automatically discards the applications of individuals who have one or more gaps in employment, the result could be that women (due to pregnancy and childbirth-related constraints) and applicants with disabilities are rejected at a higher rate than males and “able-bodied” applicants. In this circumstance, the employer “doesn’t know what it doesn’t know” and would likely be unaware that some women and applicants with disabilities were pre-sorted before review. While the employer may not have intended for this outcome, it could nevertheless be found to have violated Title VII of the Civil Rights Act (“Title VII”) and the Americans with Disabilities Act (“ADA”).
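To make the “doesn’t know what it doesn’t know” problem concrete, the sketch below applies one common screening heuristic referenced in EEOC guidance, the four-fifths rule, to hypothetical pass-through counts from an automated application sorter; the counts, group labels, and function names are illustrative assumptions, not data from any real system:

```python
def selection_rate(advanced: int, applied: int) -> float:
    """Share of a group's applicants that the screening tool advanced."""
    return advanced / applied

def adverse_impact_ratios(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is generally
    regarded as preliminary evidence of adverse impact worth investigating.
    """
    rates = {group: selection_rate(a, n) for group, (a, n) in counts.items()}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical counts: (applicants advanced by the tool, total applicants)
counts = {"men": (480, 1000), "women": (300, 1000)}
print(adverse_impact_ratios(counts))
# {'men': 1.0, 'women': 0.625}  -> below 0.8, so the tool warrants closer review
```

A ratio this far below 0.8 would not itself establish a violation, but it is the kind of signal that routine auditing of an automated sorter is meant to surface.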
Ironically, many employment AI tools are marketed as bias-eliminating because some can operate through data de-identification—a process by which protected class information is removed from application information. For example, as a general matter, applicants with “ethnic-sounding” names are less likely to receive callbacks than those with Anglo-sounding names, like the John Smiths of the world. By replacing applicant names with numbers, implicit bias is less likely to creep in.
Although data de-identification is one tool for avoiding bias in employment decisions, it is not a cure-all and can sometimes backfire. For example, data de-identification could result in an employer being ignorant of the disparate impact caused by its policies. Take the name example. Names often are not the only indicators of race or culture. Presume the HR professional reviewing applications is not well versed in Historically Black Colleges and Universities (“HBCU”), and when the professional does not recognize the name of an HBCU on “John Smith’s” application, she moves it to the bottom of the pile. Of course, there are other more subtle race/ethnicity data points that could trigger subconscious bias (e.g., residence, prior job experience, etc.). While there is not one solution to this complex problem, auditing application systems can at least put the employer on notice that something may need to be changed.
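A minimal sketch of the de-identification idea and its limitation follows: removing direct identifiers from an application record is straightforward, but proxy fields such as school or ZIP code remain and can still encode protected-class information. All field names and values below are hypothetical:

```python
# Hypothetical application record; field names and values are illustrative only.
application = {
    "applicant_id": 1042,
    "name": "John Smith",            # direct identifier
    "email": "jsmith@example.com",   # direct identifier
    "school": "Example University",  # potential proxy for race/ethnicity
    "zip_code": "02121",             # potential proxy for race and income
    "years_experience": 6,
}

DIRECT_IDENTIFIERS = {"name", "email"}
POTENTIAL_PROXIES = {"school", "zip_code"}  # flag these rather than trusting name removal alone

def deidentify(record: dict) -> dict:
    """Drop direct identifiers but flag fields that may still act as proxies."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["_proxy_fields_to_audit"] = sorted(POTENTIAL_PROXIES & record.keys())
    return cleaned

print(deidentify(application))
```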
When developing an AI utilization strategy, employers must be mindful of the complexity of both the systems and the law.
Click Here for the Original Article
FTC Authorizes Use of Civil Investigative Demands (CIDs) for AI-related Products and Services
On November 21, 2023, the Federal Trade Commission announced that it has approved an omnibus resolution authorizing the use of compulsory process in non-public investigations involving products and services that use or claim to be produced using artificial intelligence (AI) or claim to detect its use.
The omnibus resolution will streamline the FTC staff’s ability to issue civil investigative demands (CIDs), which are a form of compulsory process similar to a subpoena, in investigations relating to AI, while retaining the agency’s authority to determine when CIDs are issued.
The FTC issues CIDs to obtain documents, information and testimony that advance FTC consumer protection and competition investigations. The omnibus resolution will be in effect for 10 years.
AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Generative AI can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans. Many companies now offer products and services using AI and generative AI, while others offer products and services that claim to detect content made by generative AI.
Although AI, including generative AI, offers many beneficial uses, it can also be used to engage in fraud, deception, infringements on privacy and other unfair practices, which may violate the FTC Act and other laws. At the same time, AI can raise competition issues in a variety of ways, including if one or just a few companies control the essential inputs or technologies that underpin AI.
Click Here for the Original Article
Discrimination and bias in AI recruitment: a case study
Barely a day goes by without the media reporting the potential benefits of or threats from AI. One of the common concerns is the propensity of AI systems to return biased or discriminatory outcomes. By working through a case study about the use of AI in recruitment, we examine the risks of unlawful discrimination and how that might be challenged in a UK employment tribunal.
Introduction
Our case study begins with candidates submitting job applications which are to be reviewed and ‘profiled’ by an AI system (the automated processing of personal data to analyse or evaluate people, including to predict their performance at work). We follow this through to the disposal of resulting employment tribunal claims from the unsuccessful candidates, and examine the risks of unlawful discrimination in using these systems. What emerges are the practical and procedural challenges for claimants and respondents (defendants) arising from litigation procedures that are ill-equipped for an automated world.
Bias and discrimination
Before looking at the facts, we consider the concepts of bias and discrimination in automated decision-making.
The Discussion Paper published for the AI Safety Summit organised by the UK government and held at Bletchley Park on 1 and 2 November 2023 highlighted the risks of bias and discrimination and commented:
Frontier AI models can contain and magnify biases ingrained in the data they are trained on, reflecting societal and historical inequalities and stereotypes. These biases, often subtle and deeply embedded, compromise the equitable and ethical use of AI systems, making it difficult for AI to improve fairness in decisions. Removing attributes like race and gender from training data has generally proven ineffective as a remedy for algorithmic bias, as models can infer these attributes from other information such as names, locations, and other seemingly unrelated factors.
What is bias and what is discrimination?
Much attention has been paid to the potential for bias and discrimination in automated decision-making. Bias and discrimination are not synonymous but often overlap. Not all bias amounts to discrimination and not all discrimination reflects bias.
A solution can be biased if it leads to inaccurate or unfair outcomes. A solution can be discriminatory if it disadvantages certain groups. A solution is unlawfully discriminatory if it disadvantages protected groups in breach of equality law.
How can bias and discrimination taint automated decision-making?
Bias can creep into an AI selection tool in a number of ways. For example, there can be: historical bias; sampling bias; measurement bias; evaluation bias; aggregation bias; and deployment bias.
To give a recent example, the shortlist of six titles for the 2023 Booker Prize included three titles by authors with the first name ‘Paul’. An AI programme asked to predict works to be shortlisted for this prize is likely to identify being called ‘Paul’ as a key factor. Of course, being called Paul will not have contributed to their shortlisting, and identifying it as a determining factor amounts to bias. The AI tool would be identifying a correlating factor which had not actually been a factor in the shortlisting; the tool’s prediction would therefore be biased as it would be inaccurate and unfair. In this case, the bias is also potentially discriminatory, as Paul is generally a male name, and possibly also discriminatory on grounds of ethnicity and religion.
An algorithm can be tainted by historical bias or discrimination. AI algorithms are trained using past data. A recruitment algorithm takes data from past candidates, and there will always be a risk of under-representation of particular groups in that training data. Bias and discrimination are even more likely to arise from the definition of success which the algorithm seeks to replicate, based on successful recruitment in the past. There is an obvious risk of past discrimination being embedded in any algorithm.
This process presents the risk of random correlations being identified by the AI algorithm, and there are several reported examples of this happening. One example from several years ago is an algorithm which identified being called Jared as one of the strongest correlates of success in a job. Correlation is not always causation.
An outcome may potentially be discriminatory but not be unfair or inaccurate and so not biased. If, say, a recruitment application concluded that a factor in selecting the best candidates was having at least ten years’ relevant experience, this would disadvantage younger candidates and a younger candidate may be excluded even if, in all other respects, they would be a strong candidate. This would be unlawful if it could not be justified on the facts. It would not, however, necessarily be a biased outcome.
There has been much academic debate on the effectiveness of AI in eliminating the sub-conscious bias of human subjectivity. Supporters argue that any conscious or sub-conscious bias is much reduced by AI. Critics argue that AI merely embeds and exaggerates historic bias.
The law
Currently in the UK there are no AI-specific laws regulating the use of AI in employment. The key relevant provisions at present are equality laws and data privacy laws. This case study focuses on discrimination claims under the Equality Act 2010.
The case study
Acquiring the shortlisting tool
Money Bank gets many hundreds of applicants every year for its annual recruitment of 20 financial analysts to be based in its offices in the City of London. Shortlisting takes time and costly HR resources. Further, Money Bank is not satisfied with the suitability of the candidates shortlisted each year.
Money Bank, therefore, acquires an AI shortlisting tool, GetBestTalent, from a leading provider, CaliforniaAI, to incorporate into its shortlisting process.
CaliforniaAI is based in Silicon Valley in California and has no business presence in the UK. Money Bank is attracted by CaliforniaAI’s promises that GetBestTalent will identify better candidates, more quickly and more cheaply than by relying on human-decision makers. Money Bank is also reassured that CaliforniaAI’s publicity material states that GetBestTalent has been audited to ensure that it is bias and discrimination-free.
Money Bank was sued recently by an unsuccessful job applicant claiming that they were unlawfully discriminated against when rejected for a post. This case was settled but proved costly and time-consuming to defend. Money Bank wants, at all costs, to avoid further claims.
Data protection impact assessment
Money Bank’s Data Protection Officer (DPO) conducts a data protection impact assessment (DPIA) into the proposed use by Money Bank of GetBestTalent given the presence of various high-risk indicators, including the innovative nature of the technology and profiling. Proposed mitigations following this assessment include bolstering transparency around the use of automation by explaining clearly that it will form part of the shortlisting process; ensuring that an HR professional will review all successful applications; and confirming with CaliforniaAI that the system is audited for bias and discrimination. On that basis, the DPO considers that the shortlisting decisions are not ‘solely automated’ and is satisfied that Money Bank’s proposed use of the system complies with UK data protection laws (this case study does not consider the extent to which the DPO is correct in considering Money Bank’s GDPR obligations to have been satisfied in this case).
Money Bank enters into a data processing agreement with CaliforniaAI that complies with UK GDPR requirements. Money Bank also notes that CaliforniaAI is self-certified as compliant with the UK extension to the EU-US Data Privacy Framework.
AI and recruitment
GetBestTalent is an off-the-shelf product and CaliforniaAI’s best seller. It has been developed for markets globally and used for many years, though it is updated by the developers periodically. The use of algorithms, and of AI in HR systems specifically, is not new but has been growing rapidly in recent years. It is being used at different stages of the recruitment process, but one of the most common applications of AI by HR is to shortlist vast numbers of candidates down to a manageable number.
AI shortlisting tools can be bespoke (developed specifically for the client); off-the-shelf; or based on an off-the-shelf system but adapted for the client. The GetBestTalent algorithm is based on ‘supervised learning’, where the input data and desired output are known and the machine learning method identifies the best way of achieving the output from the input data. This application is ‘static’ in that it only changes when CaliforniaAI’s developers make changes to the algorithm. Other systems, known as dynamic systems, can be more sophisticated and continuously learn how to make the algorithm more effective at achieving its purpose.
Sifting applicants
This year 800 candidates apply for the 20 financial analyst positions at Money Bank. Candidates are all advised that Money Bank will be using automated profiling as part of the recruitment process.
Alice, Frank and James are unsuccessful, and all considered themselves strong candidates with the qualifications and experience advertised for the role. Alice is female, Frank is Black, and James is 61 years old. Each is perplexed at their rejection and concerned that it was unlawfully discriminatory. All three are suspicious of automated decision-making and have read or heard about concerns regarding these systems.
Discrimination claims in the employment tribunal
Alice, Frank and James each contact Money Bank challenging their rejection. Money Bank asks one of its HR professionals, Nadine, to look at each of the applications. There is little obvious to differentiate these applications from those of the shortlisted candidates – and Nadine cannot see that they are obviously stronger – so she confirms the results of the shortlisting process.
The Bank responds to Alice, Frank and James saying that it has reviewed the rejections and that it uses a reputable AI system which it is reassured does not discriminate unlawfully, but that it has no further information because the criteria used are developed by the algorithm and are not visible to Money Bank. The data processing agreement between Money Bank and CaliforniaAI requires CaliforniaAI (as processor) to assist Money Bank to fulfil its obligation (as controller) to respond to rights requests, but does not specifically require CaliforniaAI to provide detailed information on the logic behind the profiling or its application to individual candidates.
Alice, Frank and James all start employment tribunal proceedings in the UK claiming, respectively, sex, race and age discrimination in breach of the UK’s Equality Act. They:
- claim direct and indirect discrimination against Money Bank; and
- sue CaliforniaAI for inducing and/or causing Money Bank to discriminate against them.
Despite CaliforniaAI having no business presence in the UK, and despite the process being more complicated, the claimants can bring proceedings against an overseas party in the UK employment tribunal.
Unless the claimants are aware of each other’s cases, in reality, these cases are likely to proceed independently. However, for the purposes of this case study, all three approach the same lawyer who successfully applies for the cases to be joined and heard together.
Disclosure
Alice, Frank and James recognise that, despite their suspicions, they will need more evidence to back up their claims. They, therefore, contact Money Bank and CaliforniaAI asking for disclosure of documents with the data and information relevant to their rejections.
They also write to Money Bank and CaliforniaAI with data subject access requests (DSARs) making similar requests for data. These requests are made under their rights under UK data protection law, over which the employment tribunal has no jurisdiction, so they are independent of the employment tribunal claims.
Disparate impact data
In order to seek to establish discrimination, each candidate requests data:
- Alice asks Money Bank for documents showing the data on both the total proportion of candidates, and the proportion of successful candidates, who were women. This is needed to establish her claim of indirect sex discrimination.
- Frank asks for the same in respect of the Black, Black British, Caribbean or African ethnic group.
- James asks for the data for both over 60-year-olds and over 50-year-olds.
They also ask CaliforniaAI for the same data from all exercises in which GetBestTalent has been used globally.
Would a tribunal make an order of this nature? In considering applications for the provision of information or the disclosure of documents or data, an employment tribunal must consider the proportionality of the request. It is more likely to grant applications which require extensive disclosure or significant time or cost to provide the requested information where the sums claimed are significant.
In this case, Money Bank has the information sought about the sex, ethnicity and age of both all candidates and of those who were successful which it records as part of its equality monitoring procedures. Providing it, therefore, would not be burdensome. In other cases, the employer may not have this data. CaliforniaAI has the means to extract the data sought, at least from many of the uses of GetBestTalent. However, it would be a time-consuming and costly exercise to do this.
Both respondents refuse to provide any of the data sought. Money Bank argues that this is merely a fishing exercise as none of the claimants has any evidence to support a discrimination claim. It also argues that the system has been audited for discrimination and that, therefore, the claims are vexatious. CaliforniaAI regards the information sought as a trade secret (of both itself and its clients) and also relies on the time and cost involved in gathering it.
In response the claimants apply to the employment tribunal for an order requiring the respondents to provide the data and information requested.
The tribunal orders Money Bank to provide the claimants with the requested documents. It declines, however, to make the order requested against CaliforniaAI.
In theory, the tribunal has the power to make the requested order against CaliforniaAI. Although it cannot make an order against an overseas person which is not a party to the litigation, in this case CaliforniaAI is a party. However, the tribunal regards the request as manifestly disproportionate and gives it short shrift.
The disparate impact data does not amount to the individuals’ personal data so is not relevant to their DSARs.
Equality data
The claimants also request from Money Bank documents showing the details of: (a) the gender, ethnic and age breakdown of the Bank’s workforce in the UK (as the case may be); (b) the equality training of the managers connected with the decision to use the GetBestTalent solution; and (c) any discrimination complaints made against Money Bank in the last five years and their outcome.
Money Bank refuses all requests as it argues that the claim relates to the discriminatory impact of CaliforniaAI’s recruitment solution so that all these other issues are irrelevant. It could provide the information relatively easily but is mindful that the Bank has faced many discrimination claims in recent years and has settled or lost a number so does not want to highlight this.
The tribunal refuses to grant the requests for the equality data as it considers it unnecessary for the claimants to prove their case. The claimants will, however, still be able to point to Money Bank’s failure to provide this information in seeking to draw inferences. The tribunal also refuses the request for details of past complaints (though details of tribunal claims which proceeded to a hearing are available from a public register).
The tribunal does ask Money Bank to provide details of the equality training provided to the relevant managers as it was persuaded that this is relevant to the issues to be decided.
This information does not amount to the individuals’ personal data so is not relevant to their DSARs.
Disclosing the algorithm and audit
The claimants also ask CaliforniaAI to provide them with:
- a copy of the algorithm used in the shortlisting programme;
- the logic and factors used by the algorithm in achieving its output (i.e. explainability information relating to their individual decisions); and
- the results of the discrimination audit.
In this case, CaliforniaAI has the information to explain the decisions, but this is not auto-generated (as it can be with some systems) or provided to Money Bank. Money Bank’s contract with CaliforniaAI does not explicitly require it to provide this information.
CaliforniaAI refuses to provide any of the requested information on the basis that these amount to trade secrets and also that the code would be meaningless to the claimants. The claimants counter that expert witnesses should be able to consider the code as medical experts would where complex medical evidence is relevant to tribunal proceedings.
The tribunal judge is not persuaded by the trade secret argument. If disclosed, the code would be in the bundle of documents to which observers from the general public would have access (though they could not copy or remove it). The tribunal has wide powers to regulate its own procedure and, in theory, could take steps in exceptional cases to limit public access to trade secrets.
However, the tribunal decides not to order disclosure of the code on the grounds of proportionality. It spends more time deliberating over the ‘explainability’ information and the details of the auditing of the system.
Ultimately, it decides not to require disclosure of either. It considers that, in so far as the direct discrimination claims are concerned, it requires more than the claimants’ assertion that they have been directly discriminated against to make the requested order proportionate. If the sums likely to be awarded had been greater, it may well have reached a different decision here. In so far as Alice’s indirect claim is concerned, the explainability information and audit are more likely to be relevant to Money Bank’s defence than to Alice’s claim, so the tribunal leaves it to Money Bank to decide whether or not to disclose them.
Arguably, UK GDPR requires Money Bank to provide the explainability information in response to the data subject access requests, and requires Money Bank’s data processing agreement with CaliforniaAI to oblige the American company to provide this. However, both respond to the DSARs refusing to provide this information (this case study does not consider the extent to which they might be justified in doing so under UK GDPR).
What did the data show?
The data provided by Money Bank shows that of the 800 job applicants: 320 were women (40%) and 480 were men (60%); 80 described their ethnicity as Black, Black British, Caribbean or African (10%); and James was the only applicant over the age of 50.
Of the 320 women, only four were shortlisted (20% of the 20 shortlisted), whereas 16 men were shortlisted (80% of the shortlisted). Of the 80 applicants from Frank’s ethnic group, three were shortlisted (15% of the shortlisted). The data therefore shows that the system had a disparate impact against women but not against Black, Black British, Caribbean or African candidates. There was no data to help James with an indirect discrimination claim.
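As a rough illustration of the arithmetic behind these figures, the short Python sketch below compares shortlisting rates by group using the hypothetical case-study numbers. It is a simplified calculation for illustration only and is not the legal test a tribunal would apply.

```python
# Shortlisting rates from the (hypothetical) case-study data:
# 800 applicants in total, 20 shortlisted.
applicants  = {"women": 320, "men": 480, "black_group": 80, "other_ethnicities": 720}
shortlisted = {"women": 4,   "men": 16,  "black_group": 3,  "other_ethnicities": 17}

for group in applicants:
    rate = shortlisted[group] / applicants[group]
    print(f"{group}: {rate:.2%} shortlisted")

# women: 4/320 = 1.25%  vs  men: 16/480 = 3.33%  -> women are shortlisted at well
# under half the male rate, consistent with a disparate impact on women.
# black_group: 3/80 = 3.75%  vs  others: 17/720 = 2.36%  -> no adverse impact shown
# against Black, Black British, Caribbean or African candidates on these figures.
```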
After consideration of the data, Frank and James abandon their indirect discrimination claims.
Establishing indirect discrimination
Alice needs to establish:
- a provision, criterion or practice (PCP);
- that the PCP has a disparate impact on women;
- that she is disadvantaged by the application of the PCP; and
- that the PCP is not objectively justifiable.
PCP
Alice relies on the AI application used by Money Bank as her PCP.
If the decision to reject her had been ‘explainable’ then, as is the case with most human decisions, the PCP could also be the actual factor which disadvantaged her.
Putting this into practice, let’s say it could have been established from the explainability information that the algorithm had identified career breaks as a negative factor. Alice has had two such breaks and might, in such circumstances, allege that this was unlawfully indirectly discriminatory. A tribunal may well accept that such a factor disadvantages women without needing data to substantiate this. Money Bank would then need to show either that this had not disadvantaged Alice or that such a factor was objectively justifiable.
Neither defence would be easy in this case. It is possible that the respondents could run a counterfactual to show that Alice had not been disadvantaged by her career breaks. This would mean running the algorithm against an alternative set of facts – here, against Alice’s application but without the career breaks – to show she would not have been shortlisted in any event.
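A counterfactual check of this kind could, in principle, be run very simply: re-score the same application with the suspect feature changed and compare the outcomes. The Python sketch below is purely illustrative; the scoring function and all figures are hypothetical stand-ins, since the real algorithm’s logic is not visible in the case study.

```python
# Illustrative counterfactual test: would Alice have been shortlisted
# if her application showed no career breaks?

def score(application: dict) -> float:
    """Hypothetical scoring function that penalises career breaks."""
    base = application["experience_years"] * 2 + application["test_score"]
    return base - 5 * application["career_breaks"]

SHORTLIST_THRESHOLD = 40  # hypothetical cut-off for being shortlisted

alice = {"experience_years": 8, "test_score": 30, "career_breaks": 2}
alice_counterfactual = {**alice, "career_breaks": 0}

actual = score(alice)                         # 16 + 30 - 10 = 36 -> not shortlisted
counterfactual = score(alice_counterfactual)  # 16 + 30      = 46 -> shortlisted

print(f"actual score: {actual}, counterfactual score: {counterfactual}")
if counterfactual >= SHORTLIST_THRESHOLD > actual:
    print("The career-break factor made the difference; on these hypothetical "
          "numbers the counterfactual defence would fail.")
```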
In our case, however, Money Bank does not have an explanation for Alice’s failure to be shortlisted.
Disparate impact
Alice relies on the data to show a disparate impact.
The respondents could seek to argue that there is no disparate impact, based on the auditing undertaken by CaliforniaAI using larger aggregate numbers, and contend that Money Bank’s data reflects a random outcome. A tribunal will almost certainly not accept this argument at face value. Further, the legal tests in the UK and the US are not the same, so any auditing in the US will be of reduced value.
In such a case, the respondents could seek to introduce data from CaliforniaAI’s verification testing or from the use of the platform by other employers. This may then require expert evidence on the conclusions to be drawn from the audit data.
In our case, neither the audit nor evidence of the impact of GetBestTalent on larger numbers is before the tribunal. Indeed, here, CaliforniaAI refused to disclose it.
Disadvantages protected group and claimant
Alice does not have to prove why a PCP disadvantages a particular group. The Supreme Court in Essop v Home Office (2017) considered a case where black candidates had a lower pass rate than other candidates under a skills assessment test. The claimants were unable to explain why the test disadvantaged that group, but this was not a bar to establishing indirect discrimination.
The PCP (the GetBestTalent solution) clearly disadvantages Alice personally as her score was the reason she was not shortlisted.
Justification
Alice satisfies the first three steps in proving her case.
The burden will then pass to Money Bank to show that the use of this particular application was justified – that it was a proportionate means of achieving a legitimate aim.
What aims could Money Bank rely on? Money Bank argues that its legitimate aim is decision-making which is quicker and cheaper, results in better candidates, and discriminates less than human decision-making.
Saving money is tricky: it cannot be a justification to discriminate to save money, but this can be relevant alongside other aims. Nonetheless, Money Bank is likely to establish a legitimate aim for the introduction of automation in its recruitment process based on the need to make better and quicker decisions and avoid sub-conscious bias. The greater challenge will be showing that the use of this particular solution was a proportionate means of achieving those aims.
In terms of the objective of recruiting better candidates, Money Bank would have to do more than merely assert that the use of GetBestTalent meant higher quality short-listed candidates. It might, for example, point to historic problems with the quality of successful candidates. This would help justify automation, but Money Bank would still have to justify the use of this particular system.
Money Bank seeks to justify its use of GetBestTalent and satisfy the proportionality test by relying on its due diligence. However, it did no more than ask the question of CaliforniaAI, which reassured Money Bank that the system had been audited.
It also points to the human oversight under which an HR professional reviews all candidates whom the system proposes to shortlist in order to verify that decision. The tribunal is unimpressed with this human oversight as it did not extend to the unsuccessful applications.
Pulling this together, would a tribunal accept that the use of this platform satisfied the objective justification test? This is unlikely. In all likelihood, Alice would succeed, and the matter would proceed to a remedies hearing to determine her compensation.
Establishing direct discrimination
Alice is also pursuing a direct sex discrimination claim and Frank and James, not deterred by the failure to get their indirect discrimination claims off the ground, have also continued their direct race and age discrimination claims respectively. The advantage for Alice in pursuing a direct discrimination claim is that this discrimination (unlike indirect discrimination) cannot be justified, and the fact of direct discrimination is enough to win her case.
Each applicant has to show that they were treated less favourably (i.e. not shortlisted) because of their protected characteristic (sex, race, age respectively). To do this, the reason for the decision not to shortlist must be established.
They have no evidence of the reason, but this does not necessarily defeat their claims. Under UK equality law, the burden of proof can, in some circumstances, transfer so that it is for the employer to prove that it did not discriminate. To prove this, the employer would then have to establish the reason and show that it was not the protected characteristic of the claimant in question. In this case, this would be very difficult for Money Bank as it does not know why the candidates were not shortlisted.
What is required for the burden of proof to transfer? The burden of proof will transfer if there are facts from which the court could decide that discrimination occurred. This is generally paraphrased as the drawing of inferences of discrimination from the facts. If inferences can be drawn, the employer will need to show that there was not discrimination.
Prospects of success
Looking at each claimant in turn:
- Frank will struggle to draw inferences as there is no disparate impact from which any less favourable treatment may be inferred. The absence of any disparate impact does not mean that Frank could not have been directly discriminated against, but without more his claim is unlikely to get anywhere. He does not have an explanation for the basis of the decision or the ethnic breakdown of Money Bank’s current workforce. He has limited information about Money Bank’s approach to equality. He cannot prove facts which, in the absence of an explanation, show prima facie discrimination, so his claim fails.
- James’s claim is unlikely to be rejected as quickly as Frank’s, as the data does not help prove or disprove his claim. James could try to rely on the absence of older workers in the workforce, any lack of training or monitoring, and past claims (if he had this information), as well as the absence of an explanation for his rejection, but, in reality, this claim looks pretty hopeless.
- Alice may be on stronger ground. She can point to the disparate impact data as a ground for inferences, but this will not normally be enough on its own to shift the burden of proof. Alice can also point to the opaque decision-making. Money Bank could rebut this if the decision were sufficiently ‘explainable’ so that the reason for Alice’s rejection could be identified. However, it cannot do so here. The dangers of inexplicable decisions are obvious.
Would the disparate impact and opaqueness be enough to draw inferences? Probably not – particularly if Alice does not have any of the equality data or information about past discrimination claims referred to above, and the equality training information does not show a total disregard for equality. She could try to get information about these in cross-examination of witnesses and could point to Money Bank’s failure to provide the equality data as grounds for drawing inferences and reversing the burden of proof. However, after carefully balancing the arguments, the tribunal decides in our case that Alice cannot prove facts which, in the absence of an explanation, show prima facie discrimination. This means that her direct discrimination claim fails.
If inferences had been drawn and Money Bank had been required to demonstrate that the protected characteristic in question had not been the reason for its decision, Money Bank would have argued that it anonymises the candidate data and ensures that the age, sex and ethnicity of candidates are omitted, and that, therefore, the protected characteristic could not have informed the decision. However, as studies have shown how difficult it is to suppress this information, the tribunal would give this argument short shrift. If inferences had been drawn, Alice would, in all likelihood, have succeeded with her direct discrimination claim as well as her indirect discrimination claim.
Causing or inducing discrimination
If Money Bank is liable then CaliforniaAI is also likely to be liable for causing/inducing this unlawful discrimination by supplying the system on which Money Bank based its decision. CaliforniaAI cannot be liable if Money Bank is not liable.
Conclusion
The case of Alice, Frank and James highlights the real challenges for claimants winning discrimination claims where AI solutions have been used in employment decision-making. The case also illustrates the risks and pitfalls for employers using such solutions. It illustrates how both existing data protection and equality laws are unsuited for regulating automated employment decisions.
Looking forward, as the UK and other countries debate the appropriate level of regulation of AI in areas such as employment, it is to be hoped that these regulations recognise and embrace the inevitability of increased automation but, at the same time, ensure that individuals’ rights are protected effectively.
Click Here for the Original Article
Equal Pay Named in EEOC Targeted Priorities
The Equal Employment Opportunity Commission (EEOC) has taken another step toward achieving its goal of equal pay and eliminating discrimination.
EEOC objectives for fiscal years 2024 through 2028 are highlighted in its Strategic Enforcement Plan (SEP), released on September 21. And its message is uncompromising.
EEOC priorities
The Commission’s clear focus is on combatting employment discrimination, promoting inclusive workplaces, and responding to racial and economic justice. To achieve this, it names six targeted subject matter priorities:
- Eliminating barriers in recruitment and hiring.
- Protecting vulnerable workers from underserved communities from discrimination.
- Addressing selected emerging and developing issues.
- Advancing equal pay for all workers.
- Preserving access to the legal system.
- Preventing and remedying systemic harassment.
Below, we highlight two of those priorities which affect equal pay and the use of pay equity software:
Advancing equal pay: EEOC focus will prioritize employer practices that may “impede equal pay or contribute to pay disparities.” Reliance on salary history and discouraging or prohibiting employees from asking about pay are named among those practices.
Addressing selected emerging and developing issues: Specific emphasis is placed on employment practices and decisions where the use of technology results in or contributes to discrimination, including software that:
“…. incorporates algorithmic decision-making or machine learning, including artificial intelligence; use of automated recruitment, selection, or production and performance management tools; or other existing or emerging technological tools used in employment decisions.”
EEOC guidance released in May 2023 already aims to prevent discrimination caused by AI against job applicants and employees. The Title VII guidance makes it clear that employers cannot delegate responsibility for AI bias to their software vendor. Nor can they rely on their vendor’s assurance that its software is Title VII compliant.
If your pay equity software violates workplace laws, as an employer, you may be held liable.
EEOC pushes for equal pay and workplace justice
Publication of the EEOC’s priorities comes just weeks after the agency announced its alliance with the Department of Labor’s Wage and Hour Division. Aimed at enforcing “workplace justice issues,” the alliance involves greater collaboration on employment-related matters and regulatory enforcement.
As part of the joint Memorandum of Understanding, target areas for investigation and enforcement may include:
- Employment discrimination based on race, color, religion, sex, national origin, age, disability, or genetic information.
- Unlawful compensation practices, such as violations of minimum wage, overtime pay or wage discrimination laws.
The message is loud and clear; the EEOC means business, and it is not alone.
Introducing New York wage theft laws
Equal pay is also a targeted priority for New York.
In an unprecedented step, New York Governor Kathy Hochul signed legislation which amends the state’s Penal Law and makes wage theft a form of grand larceny. Under state law, grand larceny is defined as any theft valued at $1,000 or more. The degrees of grand larceny relate to the value of the property taken, with penalties increasing from fourth-degree to first-degree larceny.
New York Penal Law’s larceny statute describes wage theft as when a person is hired:
“…to perform services and the person performs such services and the person [employer] does not pay wages, at the minimum wage rate and overtime . . . to said person for work performed.”
In a prosecution:
“…it is permissible to aggregate all nonpayments or underpayments ….into one larceny count, even if the nonpayments or underpayments occurred in multiple counties.”
Employers who engage in wage theft will now be subject to criminal prosecution.
The legislation came into immediate effect on September 6, 2023.
The high cost of wage theft
In justification, the New York Senate notes that wage theft accounts for almost $1 billion in lost wages every year, according to Cornell University’s Worker Institute. Wage theft is pervasive across the state’s construction industry, which is expected to be a focal point for the new law.
Wage theft is also a significant problem across the US. The Economic Policy Institute reports that over $3 billion was recovered for workers between 2017 and 2020. That’s just the tip of the wage theft iceberg, which it estimated at $50 billion annually back in 2014.
New York wage theft laws are its latest attempt to crack down on wage and hour law violations. Further, they come just over six months after Manhattan’s District Attorney, Alvin Bragg, Jr., announced the creation of a “Worker Protection Unit” to investigate and prosecute wage theft and other forms of worker exploitation and harassment.
Earlier this year, New York City’s “NYC Bias Audit Law,” also known as Local Law 144, came into force. It requires employers to carry out “bias audits” of all automated employment decision tools (AEDTs) used in the hiring process and for internal promotions.
EEOC increases pressure on employers to ensure equal pay
It is highly likely that the EEOC will support its goal of equal pay by reinstating EEO-1 Component 2, especially as pay equity is prioritized in its Equity Action Plan. Like the SEP, the plan aims to tackle systemic discrimination, advance equity, and better serve members of underserved communities.
The EEOC’s message – and that of New York state – is clear.
It’s time for employers to review their pay practices.
Ensure compliance with EEOC priorities on equal pay
The EEOC states “pay inequity is not solely an issue of sex discrimination, but an intersectional issue that cuts across race, color, national origin, and other protected classes.”
Intersectionality is key to achieving pay equity. It recognizes that individuals can experience discrimination based on the intersection of multiple identities. Our state-of-the-art pay equity software, PayParity, helps to solve HR’s most complex challenges around people, data, and compliance. It analyzes compensation through the intersection of gender, race/ethnicity, disability, age and more in a single statistical regression analysis.
Working with a trusted pay equity software provider also ensures compliance with EEOC Title VII guidance, and complex pay transparency legislation.