FEDERAL DEVELOPMENTS
FTC recommends use of written adverse action notices by housing providers
A new FTC blog post titled “Tenant background check reports: Put it in writing” reminds landlords, property managers, and other housing providers of their obligation under the Fair Credit Reporting Act (FCRA) to provide notice of adverse action when information in a consumer report leads them to deny housing to an applicant or to require the applicant to pay a deposit that other applicants would not be required to pay.
The FTC notes in the blog post that an adverse action notice must be provided even if information in a consumer report is only a minor factor in a housing provider’s decision. It also recommends that, as a best practice, adverse action notices should be provided in writing even though the FCRA permits such notices to be provided orally. The FTC indicates that written notices serve as proof of compliance with the law and also better enable applicants to assert their rights to request a copy of the report from the consumer reporting agency and to dispute any mistakes in it.
The FTC’s issuance of the blog post is part of a series of actions announced last week by the White House that are directed at “[e]nsuring all renters have an opportunity to address incorrect tenant screening reports.” The White House indicated that such actions build on the framework set forth in the White House’s “Blueprint for a Renters Bill of Rights,” released in January 2023, which set forth principles intended to “create a shared baseline for fairness for renters in the housing market.”
Tenant background checks were the subject of a CFPB and FTC request for information issued in February 2023. The RFI sought comment on “background screening issues affecting individuals who seek rental housing in the United States, including how the use of criminal and eviction records and algorithms affect tenant screening decisions and may be driving discriminatory outcomes.” The CFPB previously issued two reports on tenant background checks, one discussing consumer complaints received by the CFPB that relate to tenant screening by landlords and the other discussing practices of the tenant screening industry.
Click Here for the Original Article
New virtual I-9 review option: what U.S. employers need to know
Recently, the U.S. Department of Homeland Security (DHS) and its agencies announced important new policies on the verification of I-9 documentation for remote workers, including the sunset of the temporary Covid-19 (Covid) flexibility policies and a new remote I-9 verification option. This Update reviews the agency policy developments, addresses the pros and cons of the new optional remote I-9 verification process, and provides a number of best practices for companies to integrate the new optional process into their onboarding systems.
Summary
Beginning with the announcement of flexibility policies to accommodate employers whose workforces were suddenly working remotely due to the onset of the pandemic, DHS established a series of policies on the I-9 verification process:
- Temporary remote I-9 verification option during Covid national emergency: During the Covid pandemic, in recognition of the sudden and substantial shift to remote work in many U.S. employment sectors, DHS announced flexibilities for employers in the I-9 verification process. To address the inability of many employers to perform the required in-person verification of original documentation for new hires due to pandemic precautions, the Covid I-9 flexibilities temporarily provided employers with a virtual option to examine identity and work eligibility documents for the I-9 verification process. The Covid I-9 flexibilities policy was designed as a temporary relief measure, under which employers could use a virtual verification option for remote employees hired during the pandemic. The expectation was that employers would eventually perform a physical, in-person recertification of employees’ original documents upon their return to the office or when the national Covid emergency ended.
- Expiration of Covid I-9 flexibilities: Recently, U.S. Immigration and Customs Enforcement (ICE) announced that the Covid I-9 flexibilities expired on July 31, 2023, and provided a deadline of August 30, 2023, for employers to perform the required in-person recertification of the original identity and employment eligibility documents for employees hired using the remote verification option of the Covid I-9 flexibilities after March 20, 2020.
- New permanent virtual I-9 option: Furthermore, on July 25, 2023, DHS issued a Final Rule announcing a new permanent remote I-9 verification option for qualified employers. The agency’s announcement of the permanent remote I-9 process was accompanied by the release of a new I-9 form.
- New Form I-9: U.S. Citizenship and Immigration Services (USCIS) published a new version of the Form I-9 that has been available for use since August 1, 2023.
- Effective dates: Employers can continue to use the prior version of the Form I-9 until October 31, 2023. Beginning November 1, 2023, employers may use only the new Form I-9.
- Changes in the new Form I-9:
- The I-9 has been reformatted to a one-page form that includes Sections 1 and 2, with some sections, such as the reverification portion (formerly Section 3), now located in supplements.
- The Form I-9 now includes a checkbox for employers to mark that they used the new optional remote I-9 verification process.
Considerations for the New Optional Remote I-9 Verification Process
What are the eligibility requirements for a company to be able to use the optional remote I-9 verification process?
To be able to use the optional remote I-9 verification process, your company must be an E-Verify participant in good standing for any hiring site where the remote I-9 process will be used.
Is my company an E-Verify participant in good standing?
To qualify as an E-Verify participant in good standing, your company must meet all of the following requirements:
- The company has enrolled in E-Verify for all of its hiring sites in the U.S. where it will use the optional remote I-9 verification process.
- The company is in compliance with all E-Verify requirements, including submitting timely E-Verify queries for all newly hired employees in the U.S. at the company’s E-Verify locations.
- The company continues to be an E-Verify participant in good standing throughout the period of time the company uses the optional remote I-9 verification process.
What are the benefits of the new optional remote I-9 verification process?
Most importantly, the optional remote I-9 verification process will allow your company to verify the documentation of newly hired employees working remotely, eliminating the need for such employees to appear in person at a company office or to meet in person with an off-site remote document verifier. This new process will allow your company to streamline and centralize the document verification function for all of your locations that use E-Verify.
Will employers who use the new optional remote I-9 verification process face increased scrutiny?
To use the new optional process, employers need to develop and implement procedures that comply with the requirements outlined in the final rule. In the event of an audit, the government will review the company’s I-9s as well as its processes to determine whether the company is in compliance with the requirements. If the government determines that a company did not follow the process or documentation requirements of the new rule, the company will be at risk of a finding of material violations and potential fines.
Is my company required to use the new optional remote I-9 verification process?
No. While some employers may opt to use the new remote process, your company can continue to follow the standard I-9 process of performing an in-person verification of the employee’s original identity and employment eligibility documents during the onboarding process.
Is the new optional remote I-9 verification process temporary or permanent?
While the rule authorizing remote verification of I-9 documents is permanent, the agency has the authority to revise the method and specific protocols of the remote I-9 verification.
Implementing the New Optional Remote I-9 Verification Process as Part of Your I-9 Program
What steps is my company required to follow under the optional remote I-9 verification process?
A qualified employer who uses the optional remote I-9 verification process will be required to follow these steps within three business days of an employee’s first day of employment:
- Request the employee to provide digital copies of the front and back of the employment eligibility and identity documentation of their choice from the I-9 Lists of Acceptable Documents.
- After receiving the documentation from the employee,
- examine the copies of the front and back of the employment eligibility and identity documentation
- determine that the documentation reasonably appears to be genuine
- Schedule and conduct a live video call with the employee to further review the front and back of the employee’s employment eligibility and identity documentation and determine that the documents relate to the employee.
- Annotate Form I-9 by completing the corresponding box indicating that the employer has used the remote I-9 verification process.
- Retain a clear and legible copy of the front and back of the employment eligibility and identity documents provided by the employee in the I-9 file.
Does implementing the optional remote I-9 verification process eliminate or change my company’s requirement to complete an I-9?
No. If your company is eligible to use the optional remote I-9 verification process and opts to implement it for new hires, you will continue to have the same I-9 completion and E-Verify obligations. Your company will still be required to ask the employee to complete Section 1 of the form on or before the first day of employment and to complete Section 2 within three business days of the employee’s first day of work. As before, your company will still be able to use the paper I-9 form, the online USCIS fillable form, or a compliant electronic I-9 program. [Please see information regarding the new Form I-9 above.] The key difference for companies using the new optional remote I-9 verification process is that you will be able to verify the employee’s documentation remotely, but your company is still required to meet all I-9 and E-Verify obligations.
Can my company use the new optional remote I-9 verification process for some, but not all, employees?
Yes, the agency has confirmed that employers may, for example, opt to use the alternative process only for their remote employees while continuing to use the standard, in-person I-9 document review for in-office hires. However, employers need to be careful to develop policies that are not discriminatory – for example, an employer may not establish a policy of using the alternative process for all employees but requiring green card holders to present original documents in person, as this would constitute impermissible citizenship status discrimination. Therefore, your company should carefully review any segmentation of the workforce in implementing this process to ensure that the practice does not violate anti-discrimination protections.
What are best practices that a company should follow in implementing the new optional remote I-9 verification process?
- Establish a robust protocol to comply with the special requirements of the process, including receipt of documents, video interview, and the document retention requirements.
- Establish a protocol for proper handling of employees’ personal information and documentation.
- Develop a compliant protocol for the organization that includes protections against discrimination.
- Update handbooks and compliance procedures.
- Develop a document retention and purging plan.
- Provide ongoing training for those involved in administering the I-9 process.
- Perform regular documentation and process audits.
New Optional Remote I-9 Verification Process and Sunset Requirements of the Covid I-9 Flexibilities
The agency announced the new optional remote I-9 verification process at a time when many companies are working hard to recertify employee documents that were reviewed remotely under the recently expired Covid I-9 flexibilities. For some employers, the new optional remote I-9 verification process provides an opportunity to complete the recertification process in a more streamlined, efficient manner.
If my company qualifies to use the optional remote I-9 verification process, can we use it to recertify the documentation for employees we hired using the temporary remote Covid I-9 flexibilities option to comply with the August 30, 2023, recertification deadline?
Yes, your company can use the new optional remote I-9 verification process to recertify documentation that an employee presented during the Covid I-9 flexibilities period to comply with the August 30, 2023, recertification deadline if the following conditions are met:
- Your company performed the virtual inspection of the employee’s documents under the Covid I-9 flexibilities between March 20, 2020, and July 31, 2023.
- Your company was already enrolled in E-Verify when it performed the virtual inspection of the employee’s documentation under the Covid I-9 flexibilities.
- Your company submitted an E-Verify inquiry for the employee at the time it performed the virtual inspection.
What are my company’s options if we do not want to perform the virtual recertification using the new optional remote I-9 verification process?
Your company can recertify documentation presented during the temporary remote Covid I-9 flexibilities to comply with the August 30, 2023, recertification deadline using one of the following methods:
(1) Your company performs an in-person recertification process where the employee appears in person before a company representative and presents original I-9 documentation, or
(2) your company uses a traditional remote third-party protocol, where the company agrees to allow a notary or other third party to perform an in-person review of original I-9 documentation on the company’s behalf and certify the results for your company’s records.
What steps is my company required to follow if we use the optional remote I-9 verification process to recertify documents collected remotely under the Covid flexibilities?
A qualified employer who uses the optional remote I-9 verification process to recertify documents collected remotely under the Covid flexibilities will be required to follow these steps:
- Request the employee to provide digital copies of the front and back of the employment eligibility and identity documentation of their choice from the I-9 Lists of Acceptable Documents.
- After receiving the documentation from the employee,
- examine the copies of the front and back of the employment eligibility and identity documentation
- determine that the documentation reasonably appears to be genuine
- Schedule and conduct a live video call with the employee to further review the front and back of the employee’s employment eligibility and identity documentation and determine that the documents relate to the employee.
- Annotate Section 2 of the Form I-9 under “Additional Information” with the date the recertification process was completed.
- Retain a clear and legible copy of the front and back of the employment eligibility and identity documents provided by the employee in the I-9 file.
Do we need to recertify the documents of employees who are no longer working for the company?
No. The government has confirmed that employers do not need to recertify documents for employees who no longer work for the company but should make a note on the I-9 regarding the termination date.
Can an employee present different documents during the recertification process than they provided previously during the verification under the Covid flexibilities?
Yes, the employee can provide any acceptable document or combination of documents from the I-9 Lists of Acceptable Documents during the recertification process. Where an employee provides different documents during recertification, the best practice is for your company to complete a new Section 2 of Form I-9 and attach it to the existing Form I-9. Furthermore, in the additional information section of the form, your company should provide a brief note to explain that the employee presented new documents during the recertification process.
What if my company already followed the same steps that are required by the new optional remote I-9 verification process in our previous remote verification during the Covid flexibility period? Can we just document that our company previously completed those steps to satisfy the recertification requirements?
No. The government has clarified that if a company chooses to use the new optional remote I-9 verification process to perfect the recertification of documents collected remotely under the Covid flexibility period, the company must still follow the required steps of the new optional remote I-9 verification process. In other words, your company will need to request and review the documents from the employee, arrange a video call, retain the document copies as required, make a notation on the I-9 regarding the recertification, and maintain documentation that the full process was completed for this case.
What if my company is unable to complete the recertification process of the Covid flexibility cases by the August 30, 2023, deadline?
If your company used the temporary remote Covid I-9 flexibilities, you should make every possible effort to complete the document recertification process before the August 30, 2023, deadline. However, recognizing the burden that this short timeframe has placed on employers, ICE stated in press communications that it will not focus its limited enforcement resources on finding I-9 violations for employers that fail to complete the document recertification process by August 30, 2023, as long as employers are otherwise in compliance with the law and regulations and have taken timely steps to complete the process. Therefore, while we urge employers to complete the recertification process by August 30, 2023, if at all possible, if your company is unable to complete the process in full by the deadline, we recommend documenting the company’s good-faith efforts and continuing to press forward to complete the process as quickly as possible.
Click Here for the Original Article
STATE, CITY, COUNTY AND MUNICIPAL DEVELOPMENTS
California Civil Rights Council Modifies Regulations Pertaining to Background Checks
On July 24, 2023, the California Office of Administrative Law approved the California Civil Rights Council’s modifications to regulations pertaining to California’s Fair Chance Act. Under the Fair Chance Act, employers with five or more employees are prohibited from asking an applicant about conviction history before making a job offer, and the Act sets forth other requirements pertaining to an applicant’s criminal history.
The modifications take effect on October 1, 2023.
The following are a few of the modifications that employers should take note of.
Notice of Preliminary Decision and Opportunity for Applicant Response
The modification clarifies that if an employer makes a preliminary decision after an “initial” individualized assessment that the applicant’s conviction history disqualifies the applicant, the employer shall notify the applicant in writing. The notice must include all of the following:
- Notice of the disqualifying conviction that is the basis for the preliminary decision.
- A copy of the conviction history report relied upon.
- Notice of the applicant’s right to respond to the notice before the preliminary decision becomes final.
- Explanation of the type of evidence an applicant can submit to challenge the conviction history or as evidence of rehabilitation or mitigation.
- Notice of the deadline for the applicant to respond.
Individualized Assessment
The modified regulations clarify that an individualized assessment must be a “reasoned, evidence-based determination,” and provide detail on what may be taken into consideration in assessing the three factors to determine whether the applicant’s conviction history has a direct and adverse relationship with the specific duties of the job that justify denying the applicant the position.
Evidence of Rehabilitation or Mitigating Circumstances
The modified regulations also clarify that evidence of rehabilitation or mitigating circumstances is optional and may only be voluntarily provided by the applicant or by another party at the applicant’s request. The regulations specify actions that an employer is prohibited from taking: refusing to accept additional evidence voluntarily provided by an applicant; requiring an applicant to submit any additional evidence or a specific type of documentary evidence; disqualifying an applicant from the employment conditionally offered for failing to provide any specific type of evidence; requiring an applicant to disclose their status as a survivor of domestic or dating violence, sexual assault, stalking or comparable statuses; and/or requiring an applicant to produce medical records and/or disclose the existence of a disability or diagnosis.
Reassessment
The modification provides examples of the factors the employer may consider when making a final decision regarding whether to rescind a conditional offer of employment.
The modified regulations provide more detail on the types of evidence an employer may consider, including:
- When the conviction led to incarceration, the applicant’s conduct during incarceration, including participation in work and educational or rehabilitative programming and other prosocial conduct;
- The applicant’s employment history since the conviction or completion of the sentence;
- The applicant’s community service and engagement since the conviction or completion of sentence, including but not limited to volunteer work for a community organization, engagement with a religious group or organization, participation in a support or recovery group, and other types of civic participation; and/or
- The applicant’s other rehabilitative efforts since the completion of sentence or conviction or mitigating factors.
Labor Contractors, Union Halls, and Client Employers
The modified regulations add labor contractors, union hiring halls, and client employers as types of employers governed by the Fair Chance Act and the regulations and specify that they must comply with the requirements for workers who are admitted to a pool or availability list.
IRS Work Opportunity Tax Credit
The modified regulations specify that an employer may require applicants to complete IRS Form 8850 or an equivalent before a conditional offer is made, so long as the information gathered is used solely to apply for the credit.
Click Here for the Original Article
New regulations that substantially expand protections against discrimination in employment, education, public accommodations and housing in Pennsylvania will take effect on Aug. 16, 2023.
On December 8, 2022, the Independent Regulatory Review Commission approved, by a 3-2 vote, regulations that were originally submitted for review by the Pennsylvania Human Relations Commission (PHRC) in March 2022. The Pennsylvania Code will be amended to define protected classes under the Pennsylvania Human Relations Act and the Pennsylvania Fair Educational Opportunities Act. The regulations will become effective after a legislative review period and publication in the Pennsylvania Bulletin. In addition to landlords, realtors, property management companies, schools, colleges and universities, the newly expanded regulations apply to Pennsylvania employers with four or more employees.
The regulations define the terms “sex,” “race,” and “religious creed,” to specifically include anti-bias protections for the LGBTQ+ community and for people with traditionally Black hairstyles and textures.
The regulations provide a comprehensive definition pertaining to the protected class of “sex” to include pregnancy, childbirth, breastfeeding, sex assigned at birth (including but not limited to male, female or intersex), gender identity or expression, affectional or sexual orientation (including heterosexuality, homosexuality, bisexuality and asexuality) and differences in sex development.
Regarding race discrimination, the regulations broadly define the term “race” to include ancestry, national origin, ethnic characteristics, interracial marriage or association, traits associated with race (including hair texture and hairstyles culturally associated with race, such as braids, locks and twists), persons of Hispanic national origin or ancestry, and persons of any other national origin or ancestry.
Regarding religious creed discrimination, the regulations provide a comprehensive definition for the term “religious creed” to include all aspects of religious observance and practice, as well as belief.
While the PHRC had taken the position that the statute already encompassed these protections, promulgating regulations gives this policy the force of law when the regulations take effect. The regulations will be codified as 16 Pa. Code Chapter 41.201-41.207.
Click Here for the Original Article
Illinois and Hawaii Require Employers to Disclose Pay Scales in Job Postings
Illinois and Hawaii will join several states — including New York, California, Washington and Colorado — in requiring increased pay transparency in job postings. These changes will further affect how employers recruit and post open positions.
New Illinois Law
On Aug. 11, 2023, Illinois Gov. J.B. Pritzker signed into law House Bill 3129 / Public Act 103-0539, which amends the Illinois Equal Pay Act. The new law, which takes effect on Jan. 1, 2025, will require most Illinois employers to disclose pay scale and benefits information to potential job applicants.
Specifically, all covered job postings must include pay scale and benefits information. “Pay scale and benefits” information includes “the wage or salary, or the wage or salary range, and a general description of the benefits and other compensation, including, but not limited to, bonuses, stock options, or other incentives the employer reasonably expects in good faith to offer for the position, set by reference to any applicable pay scale, the previously determined range for the position, the actual range of others currently holding equivalent positions, or the budgeted amount for the position.”
Providing a hyperlink with a job posting to pay scale and benefits information maintained on an employer’s website would satisfy the law’s requirements.
The new law applies to employers with 15 or more employees and to job postings where the position “will be physically performed, at least in part, in Illinois” or “will be physically performed outside of Illinois, but the employee reports to a supervisor, office, or other work site in Illinois.” Internal job postings and postings publicized by third parties, such as job search sites or recruiters, are also subject to these requirements.
The law also generally requires employers to “announce, post, or otherwise make known all opportunities for promotion to all current employees no later than 14 calendar days after the employer makes an external job posting for the position.”
Violating the law may subject the employer to civil penalties ranging from $500 for a first offense for an active job posting, or $250 for an inactive posting, to $10,000 for a third offense. First offenses can include one or several concurrent postings for the same position. Second and third offenses are limited to a single unlawful job posting. Whether a posting is “active” depends on the totality of the circumstances.
Fortunately for employers, the law provides 14- and seven-day cure periods for first and second offenses if the postings are still active at the time the violation is discovered. No cure period exists for an employer’s third offense or any subsequent offenses for five years after the third offense. Additionally, during the five-year period after a third offense, employers will incur “automatic penalties” and the five-year period will restart if the employer commits another violation during the period.
In addition to the posting requirements, covered employers must preserve for at least five years records that document at least the pay scale, benefits and job posting for each position, as well as the previously required records of each employee’s name, address, occupation and wages. Failure to do so may subject the employer to civil penalties ranging from $2,500 for the first offense to $5,000 for the third and subsequent offenses. For employers with 100 or more employees, the penalty could be as much as $10,000 per affected employee.
New Hawaii Law
Hawaii Gov. Josh Green signed a bill with similar ambitions on July 3, 2023. Senate Bill 1057 requires covered Hawaii employers to “disclose an hourly rate or salary range that reasonably reflects the actual expected compensation” for a particular job posting. This requirement does not apply to internal transfers or promotions; public employee positions for which salary, benefits or other compensation are determined pursuant to collective bargaining; or positions with employers having fewer than 50 employees. Hawaii’s bill also prohibits pay disparities based on any category protected under state law, not just sex. The new Hawaii law will take effect on Jan. 1, 2024.
Employers should review their job postings and record-keeping practices and adjust accordingly to ensure compliance with the new laws. Additionally, with a patchwork of pay transparency laws developing in states and localities across the country, and remote work adding uncertainty to the applicability of certain of these laws, employers may want to consider proactively including compensation information in at least their external job postings.
Click Here for the Original Article
In an interesting, but ultimately unsurprising, analysis of Maryland’s anti-discrimination law, Maryland’s highest court has determined that the State’s prohibition against “sex” discrimination, including in the workplace, does not include sexual orientation (and by extension, gender identity). But employers should be aware that other protections for those personal characteristics exist under both state and federal law.
Background of the Case. In a 2020 landmark ruling, Bostock v. Clayton County, the U.S. Supreme Court held that Title VII’s prohibition against “sex” discrimination in employment encompasses sexual orientation and gender identity (as we discussed in a June 15, 2020 E-lert).
In Doe v. Catholic Relief Services, a case involving whether a religious organization must provide health benefits for a same-sex spouse, a Maryland federal court asked the Maryland Supreme Court for its opinion on whether the Maryland Fair Employment Practices Act (MFEPA) and the Maryland Equal Pay for Equal Work Act (MEPEWA) similarly and necessarily include sexual orientation in the definition of “sex.” The Maryland Supreme Court was also asked whether an exemption to MFEPA that allows a religious entity to employ only those of a particular religion, sexual orientation or gender identity applies to all employees of the religious entity or only those performing religious activities.
The Maryland Supreme Court’s Ruling. The Maryland Supreme Court found that these questions involved issues of statutory interpretation. Certain principles of statutory construction apply to the Court’s analysis. In examining the actual language of the statutes in question, the Court must consider the statute in its entirety in order to give logical meaning to each of its parts. Where the language is ambiguous, the Court will look to legislative history or other relevant sources to determine the Maryland General Assembly’s intent. Furthermore, for remedial statutes, like the MFEPA and MEPEWA, the Court will read them liberally in favor of those seeking the laws’ protection.
Sex and Sexual Orientation Are Separate Categories Under the MFEPA. As to whether “sex” under the MFEPA includes “sexual orientation,” the Court noted that sexual orientation is a separate protected category under the law. Therefore, “the General Assembly did not understand and intend that the prohibition against sex discrimination would also prohibit sexual orientation discrimination.” The Court further observed that to read sex to include sexual orientation would make the specific sexual orientation (and we note, by extension, gender identity) exemption for religious organizations irrelevant, as a plaintiff could simply “plead around” the exemption by asserting sex discrimination.
The Court found additional support for its interpretation that sex does not include sexual orientation in the legislative history of the amendment that added sexual orientation as a separate protected category, as well as the concurrent amendment that added sexual orientation to the religious entity exemption. It was clear from this history that the legislators and the Maryland Commission on Civil Rights (the state agency that enforces the MFEPA) believed that sexual orientation was separate and distinct from sex.
The MEPEWA Does Not Prohibit Sexual Orientation Discrimination. Similarly, the Court found that the plain language of the MEPEWA specifically prohibits pay discrimination on the basis of sex and, as of 2016, gender identity, and that the exclusion of sexual orientation was “purposeful” since it was not added at that same time.
Moreover, given that both the MEPEWA and MFEPA concern the same subject matter, the Court further explained that it would interpret “sex” under both laws in the same way – to not include sexual orientation.
MFEPA’s Religious Entity Exemption Applies to Employees Who Perform Core Mission Duties. The employee argued that the MFEPA’s religious entity exemption should mimic the federal law’s “ministerial” exemption to apply only to those employees directly involved in the entity’s religious activities. On the other hand, the employer argued that all of its employees should be covered by the exemption. The Court took a middle road, holding that the exemption applies to “employees who perform duties that directly further the core mission (or missions) of the religious entity.”
The Court provided guidance on determining a religious entity’s core mission(s), noting the following non-exhaustive considerations:
- That it involves “a fact-intensive inquiry that requires consideration of the totality of the pertinent circumstances.”
- Duties that directly further the core mission are “duties that are not one or more steps removed from taking the actions that effect the goals of the entity.” As examples, the Court compared a janitor (indirect) with an executive director (direct).
- The entity’s size may be relevant – smaller organizations may have fewer employees whose work does not directly further its mission than larger organizations.
- A religious entity may have both religious and secular core missions, and more than one of each.
- In making the determination, a trial court may consider, among other things: the description of the entity’s mission(s); the services provided; the people the entity seeks to benefit; and how the entity’s funds are allocated.
What Now? Interestingly, the Court suggested that, in light of this case, the General Assembly might amend the MFEPA to “harmonize” it with Bostock’s interpretation of Title VII. We would certainly anticipate the General Assembly to take up that call in the next legislative session.
In the meantime, employers with Maryland employees must keep in mind that, while the MFEPA’s definition of sex may not include sexual orientation, the law also expressly prohibits discrimination on the basis of sexual orientation, as well as gender identity. In addition, employers with 15 or more employees are also covered by Title VII, which does include sexual orientation and gender identity in its protections against sex discrimination. As for the MEPEWA, it applies to sex and gender identity, but not sexual orientation; but again, federal law would (likely) define sex under the federal Equal Pay Act to include sexual orientation and gender identity. Thus, although the federal and state laws may be structured differently, in the end, the practical effect is the same – Maryland employees of employers covered by Title VII and Maryland law are protected from discrimination and pay disparities on the basis of sex, sexual orientation or gender identity.
The exemption from such protections for certain employees of religious entities may be broader under state law than federal law, but if the employer is covered by both laws, the difference is again without practical effect. Any savvy employee will assert claims under the law that provides the greatest protection, whether federal or state.
Click Here for the Original Article
The California Supreme Court issued a ruling this week that expands the definition of employer under the state’s main discrimination statute, the Fair Employment and Housing Act (FEHA). This expansion not only increases the number of defendants that can be swept into a FEHA action, but it may also have a significant impact on California’s burgeoning efforts to regulate the use of artificial intelligence in employment decisions.
Background
As we previously noted, on March 16, 2022, the U.S. Court of Appeals for the Ninth Circuit certified to the Supreme Court of California the following question:
Does California’s Fair Employment and Housing Act, which defines “employer” to include “any person acting as an agent of an employer,” permit a business entity acting as an agent of an employer to be held directly liable for employment discrimination?
In Raines v. U.S. Healthworks Medical Group, the California Supreme Court answered this question in the affirmative, first concluding that an employer’s business entity “agents” may be considered “employers” for purposes of the statute, and then holding that such an agent may be held directly liable for employment discrimination in violation of the Fair Employment & Housing Act when it has at least five employees and “when it carries out FEHA-regulated activities on behalf of an employer.” The court recognized that its ruling “increases the number of defendants that might share liability” when a plaintiff brings FEHA-related claims against their employer.
In reaching its holding, the court analyzed the language of FEHA Section 12926(d), stating that the “most natural reading” supports the determination that an employer’s business-entity agent “is itself an employer for purposes of FEHA.” The court further addressed the statute’s legislative history, tracing the origins of the definition of “employer” to the Fair Employment Practices Act (FEPA) enacted in 1959, which adopted the National Labor Relations Act’s (NLRA) “agent-inclusive language.” The court also looked to federal case law, finding support for the idea that “an employer’s agent can, under certain circumstances, appropriately bear direct liability under the federal antidiscrimination laws.” Significantly, the court found that its prior rulings in Reno v. Baird and Jones v. Lodge at Torrey Pines Partnership, which did not extend personal liability for claims of discrimination or retaliation to supervisors, did not dictate the result here.
The court also reviewed policy reasons that could impact the reading of the statutory language:
- Imposing liability on an employer’s business entity agents broadens FEHA liability to the entity that is “most directly responsible for the FEHA violation” and “in the best position to implement industry-wide policies that will avoid FEHA violations”;
- Imposing liability on an employer’s business entity agents “furthers the statutory mandate that the FEHA ‘be construed liberally’ in furtherance of its remedial purposes”; and
- The court’s reading of the statutory language “will not impose liability on individuals who might face ‘financial ruin for themselves and their families’ where held directly liable under the FEHA.”
Equally important are rulings not made by the court in Raines. The California Supreme Court noted that it was not deciding the significance, if any, of an employer’s control over an agent’s acts that gave rise to a FEHA violation, nor did the court decide whether its conclusion extends to business-entity agents that have fewer than five employees. Critically, it also did not address the scope of a business-agent’s potential liability pursuant to FEHA’s aiding-and-abetting provision.
Impact on California’s Efforts to Regulate AI in Employment Decisions
Raines will likely have a significant impact on businesses that provide services or otherwise assist employers in the use of automated-decision systems for recruiting, screening, hiring, compensation, and other personnel management decisions. Coupled with proposed revisions to the state’s FEHA regulations, this expansion of the statute’s reach takes California one step closer to establishing joint and several liability across the AI tool supply chain.
Under the Fair Employment & Housing Council’s proposed regulations addressing the use of artificial intelligence, machine learning, and other data-driven statistical processes to automate decision-making in the employment context, it is unlawful for an employer to use selection criteria—including automated decision systems—that screen out, or tend to screen out, an applicant or employee (or a class of applicants or employees) on the basis of a protected characteristic, unless the criteria are demonstrably job-related and consistent with business necessity. The draft regulations explicitly define “agent” broadly to include third-party providers of AI-driven services related to recruiting, screening, hiring, compensation and other personnel processes, and redefine “employment agency” to similarly cover these third-party entities. One key proposal – under the aforementioned aiding-and-abetting provision – even extends liability to the “design, development, advertisement, sale, provision, and/or use of an automated-decision system.” The high court’s decision in Raines unquestionably supports the Council’s proposed revisions, and enhances joint and several liability for artificial intelligence tool supply chains regardless of the final incarnation of the Council’s regulations.
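For a concrete sense of what it means for a tool to “screen out, or tend to screen out” a protected class, compliance teams commonly compare group selection rates using the four-fifths (80%) rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures. The sketch below is purely illustrative and is not drawn from the proposed regulations; the function names and figures are hypothetical.

```python
# Illustrative adverse-impact ("four-fifths rule") check for an automated
# selection tool's outcomes. All names and numbers are hypothetical.

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group's selection rate to a ratio against the highest rate.

    `outcomes` maps a group label to (number selected, number of applicants).
    Under the EEOC's Uniform Guidelines rule of thumb, a ratio below 0.8
    suggests the tool may "tend to screen out" that group.
    """
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Example: the tool advances 48 of 80 group-A applicants (60%) but only
# 12 of 40 group-B applicants (30%); B's impact ratio is 0.30/0.60 = 0.5.
if __name__ == "__main__":
    for group, ratio in adverse_impact_ratios({"A": (48, 80), "B": (12, 40)}).items():
        status = "potential adverse impact" if ratio < 0.8 else "within threshold"
        print(f"group {group}: ratio {ratio:.2f} ({status})")
```

A ratio below 0.8 does not by itself establish liability, but it is the kind of statistical signal that regulators, and plaintiffs, would focus on when asking whether an automated decision system tends to screen out a protected group.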
Click Here for the Original Article
COURT CASES
EEOC Files for Consent Decree Settlement in AI Discrimination Case
The Equal Employment Opportunity Commission (EEOC) has ramped up enforcement and guidance in recent months over employers’ use of artificial intelligence (AI).
On May 18, 2023, as part of its Artificial Intelligence and Algorithmic Fairness Initiative, the EEOC issued its second technical assistance document (TAD) concerning AI, addressing employment selection procedures under Title VII of the Civil Rights Act of 1964. Just a couple of weeks earlier, on May 5, 2023, the EEOC filed suit against a group of related entities that offer tutoring services to students in China under the brand name “iTutorGroup.” Now, according to a consent decree notice of settlement filed in federal court on or about August 9, 2023, in what may be the EEOC’s first settlement of AI-related claims, the parties settled the suit for $365,000, along with a long list of corrective action items for the defendants.
The EEOC’s allegations are fairly straightforward: The defendants hire tutors from the United States to provide online English-language tutoring to adults and children in China. Each year, they hire thousands of tutors who submit applications through the defendants’ website. The defendants’ website requested the date of birth of applicants, and the defendants programmed their application software to automatically reject female applicants age 55 and older and male applicants age 60 and older. In this case, the charging party’s initial application, which indicated she was older than 55, was rejected. She then resubmitted her application with a more recent date of birth as the only change. She was offered an interview. The defendants rejected more than 200 other presumably qualified applicants because of age. According to the EEOC, these employment practices violated Section 4 of the Age Discrimination in Employment Act, 29 U.S.C. Section 623(a) and (b).
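Although the complaint does not reproduce iTutorGroup’s actual code, the alleged behavior amounts to a simple conditional on age and gender. The following hypothetical sketch, reconstructed only from the EEOC’s allegations, shows how such an auto-reject rule would operate and why resubmitting with a different date of birth changed the outcome:

```python
# Hypothetical reconstruction of the auto-reject rule alleged by the EEOC;
# the defendants' actual application software was never made public.
from datetime import date

def age_on(dob: date, today: date) -> int:
    """Age in whole years as of `today`."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def auto_reject(dob: date, gender: str, today: date) -> bool:
    """Return True if the application would be rejected automatically.

    The decision turns directly on age, and differs by sex, which is
    precisely the kind of rule the ADEA (and Title VII) prohibits.
    """
    age = age_on(dob, today)
    return (gender == "F" and age >= 55) or (gender == "M" and age >= 60)

# The charging party's experience: an otherwise identical application flips
# from rejected to accepted when only the date of birth changes.
today = date(2020, 3, 1)
print(auto_reject(date(1963, 6, 15), "F", today))  # True  (applicant over 55)
print(auto_reject(date(1985, 6, 15), "F", today))  # False (later birth date)
```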
In its TAD, the EEOC observed that employers sometimes rely on different types of software that incorporate algorithmic decision-making at a number of stages of the employment process. The examples it provides include resume scanners that prioritize applications using certain keywords, and “virtual assistants” or “chatbots” that ask job candidates about their qualifications (perhaps even age) and reject those who do not meet pre-defined requirements. These algorithmic decision-making tools can have the effect of unlawfully screening out otherwise qualified candidates.
In this case, the sophistication of the application software and the nature of the AI that allegedly screened out applicants of a certain age were unclear. What is clear is that the EEOC, through its Artificial Intelligence and Algorithmic Fairness Initiative, is increasing its efforts to ensure that the use of software (including AI, machine learning, and other emerging technologies used in hiring and other employment decisions) complies with the federal civil rights laws the agency enforces.
No doubt, algorithmic decision-making tools, or “automated employment decision tools” as they are referred to under the New York City AI law, can significantly boost the productivity and results of an organization’s recruiting efforts. However, the development and implementation of those tools also come with significant compliance and litigation risk. Issues organizations rolling out these tools should consider include:
- Understanding the use case, e.g., recruiting, performance monitoring, performance improvement, and so on.
- Tracking the application and requirements of emerging laws, guidance, and established frameworks.
- Considering application of guardrails or key principles, such as notice, informed consent, transparency, privacy and security, fairness, nondiscrimination, and ability to understand and challenge outcome.
- Incorporating “promising practices” suggested by the EEOC, such as in connection with ensuring reasonable accommodations are available.
- Oversight of the use of the tool from procurement through implementation.
- Vetting the vendor and the product offering the tool.
- Record retention obligations.
Click Here for the Original Article
The Fifth Circuit Broadens its Definition of “Adverse Employment Action” Under Title VII
Last week, the full U.S. Court of Appeals for the Fifth Circuit rejected nearly thirty years of unique precedent that limited the scope of disparate-treatment liability under Title VII of the Civil Rights Act of 1964 to “ultimate employment decisions.” Before the decision, the Fifth Circuit was the only circuit to adopt this narrow definition of an “adverse employment action.”
In Hamilton v. Dallas County, female detention officers challenged Dallas County’s scheduling policy. The scheduling policy entitled all officers to two days off each week. However, male officers were permitted to take both days on the weekend while female officers were required to either take two weekdays off or one weekday and one weekend day. Last year, a three-judge panel for the Fifth Circuit affirmed dismissal of the suit because schedule changes, such as the denial of full weekends off, were not “ultimate employment decisions” under existing Fifth Circuit precedent from Dollis v. Rubin, 77 F.3d 777 (5th Cir. 1995). In Dollis, the Fifth Circuit held that “Title VII was designed to address ultimate employment decisions, not to address every decision made by employers that arguably might have some tangential effect upon those ultimate decisions.”
In light of the policy’s blatant discriminatory intent, the three-judge panel flagged the case as an “ideal vehicle” for the full Fifth Circuit to revisit its narrow definition of “adverse employment action.” The Fifth Circuit agreed and reheard the matter en banc. In denouncing Dollis, the Court of Appeals opined its “ultimate employment decision” definition was based on “fatally flawed foundations.” It acknowledged that the plain language of Title VII, which prohibits discrimination against an individual with respect to their “terms, conditions, or privileges of employment,” is much broader than the “ultimate employment decision” definition used. The Court also conceded that its authority for the definition in Dollis was “based on a misinterpretation” of dicta from the Fourth Circuit Court of Appeals’ decision in Page v. Bolger, 645 F.2d 227 (4th Cir. 1981). The Fifth Circuit then acknowledged that its narrow definition had yielded “some remarkable conclusions.” For example, in one case, a plaintiff alleged “he and his black team members had to work outside without access to water while his white team members worked inside with air conditioning.” Under Dollis, the Fifth Circuit held those conditions were not adverse employment actions because they did “not concern ultimate employment decisions.”
In rejecting Dollis and its progeny, the Fifth Circuit ruled:
To adequately plead an adverse employment action…a plaintiff need only show that she was discriminated against, because of a protected characteristic, with respect to hiring, firing, compensation, or the “terms, conditions, or privileges of employment.”
Under this flattened approach, the Fifth Circuit had little difficulty concluding the plaintiffs in Hamilton plausibly alleged discrimination with respect to their terms, conditions, or privileges of employment. It reasoned that the days and hours employees work are “quintessential ‘terms or conditions’ of one’s employment” and at the “very heart of the work-for-pay arrangement.”
This decision carries significant implications for employers in Texas, Louisiana, and Mississippi because the phrase “terms, conditions, or privileges of employment” is quite broad. The new, broader standard will result in an increased number of viable disparate treatment claims under Title VII.
Notably, and despite acknowledging that there is some merit in doing so, the Fifth Circuit declined to set a bright-line rule establishing a minimum level of workplace harm required to plead a discrimination claim under Title VII. It likely punted on the issue because the Supreme Court is poised to address it in Muldrow v. City of St. Louis, a case for which certiorari was recently granted. Until the issue is addressed in Muldrow, employers in the Fifth Circuit should more closely scrutinize their actions, policies, and practices that have the potential to disadvantage certain employees, whether explicitly or implicitly.
Click Here for the Original Article
INTERNATIONAL DEVELOPMENTS
India Passes Long-Awaited Privacy Law
On August 9, 2023, India passed a data protection law that will govern how entities process users’ personal data. The Digital Personal Data Protection Act (“the Act”) will establish guardrails for how organizations should handle personal data and will give citizens control over the personal data gathered about them.
The Act will make it mandatory for entities collecting user data to obtain express user consent before processing the data, with some exceptions. Other provisions include designating certain entities as “Significant Data Fiduciaries” and imposing heightened compliance measures on them given the nature and volume of personal data they process. The Act also prohibits behavioral monitoring of, and targeted advertising directed at, minors, and establishes the Personal Data Protection Board (“the Board”), which will investigate data breaches and handle consumer inquiries about the processing of their personal data. Violations of the Act can lead to fines of up to 2.5 billion rupees (approximately $30 million).
The passing of the Act is significant because it could have large effects on US-based companies that offer their services to the large Indian market. Notably, the Act applies to the handling of digital personal information even if it takes place outside of India as long as it relates to providing goods or services to Indian residents. The Indian Government may also restrict the transfer of personal data by a data fiduciary for processing outside of India. Given that India has over 750 million active internet users, the effect of the Act for companies processing Indian users’ data could be extensive.
We have outlined the history and key provisions of the Act below. We are happy to answer any questions about how the Act might affect your privacy compliance program.
History
The Act has been seven years in the making and is the Indian government’s third attempt to pass a privacy bill. In 2017, India’s Supreme Court reaffirmed privacy as a fundamental right. In that monumental decision, the Supreme Court of India noted that the nation lacked a comprehensive privacy law and that existing regulations had limitations in the data privacy context. Following that decision, the Indian government drafted privacy legislation. The first several versions of the privacy bill were rejected in 2019 and 2022. In the 2022 iteration, technology companies expressed concern about the bill’s broad exceptions for government entities, limitations on protecting user data, and restrictions over data exports. This current version was subject to a November 2022 public consultation that received more than 20,000 stakeholder comments for lawmakers to evaluate before completing a final draft.
Data, Data Principals, and Data Fiduciaries
- Data – The Act defines data broadly as “a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated means.” Personal data refers to “any data about an individual who is identifiable by or in relation to such data.”
- Processing – The Act defines processing as “a wholly or partly automated operation or set of operations performed on digital personal data,” including operations such as “collection, recording, organization, structuring, storage, adaptation, retrieval, use, alignment or combination, indexing, sharing, disclosure by transmission, dissemination or otherwise making available, restriction, erasure or destruction.”
- Data fiduciaries – Under the Act, a data fiduciary is any person who determines the purpose and means of processing personal data. For simplicity, this article will use ‘company’ and ‘entity’ interchangeably with ‘data fiduciary.’
- Data principals – Under the Act, a data principal is the individual to whom the personal data relates. This article will use ‘user’ and ‘consumer’ interchangeably with ‘data principal.’
International Impact
The Act has consequences for US-based and other non-Indian companies given its extra-territorial application and the authority of the government to prohibit international data transfers.
- Extra-territoriality – The Act not only applies to the processing of digital personal data within India, but also extra-territorially to the processing of digital personal data outside India if it is in connection with data principals within the territory of India. This means that U.S.-based companies must comply with the law when processing the data of Indian users.
- International data transfers – Under the Act, the Indian Government may restrict the transfer of personal data by a data fiduciary for processing outside of India. Previous iterations of the bill would have allowed data transfers to only a specific set of countries, so this provision is comparatively more relaxed.
Explicit Consent
The Act mandates that companies may only process users’ personal data with the user’s consent or for certain legitimate uses.
- Notice– User requests for consent must be accompanied or preceded by a notice that notifies users of the personal data to be processed and the purpose of such processing. The notice must also include the way users can exercise their opt-out rights and make a complaint to the Board. Users who have provided consent before the commencement of the Act must also receive such a notice, but a fiduciary may continue to process personal data until and unless the user withdraws consent.
- Limited consent for a specific purpose– A user’s consent the processing of her personal data is limited to the specified purpose and is limited to only the personal data as is necessary for that specified purpose. The Act contains an illustrative example: if a telemedicine app requests a user’s consent for the processing of her personal data to make telemedicine services available and to access her cell phone contacts list, even if the user gives her consent to both, the consent is limited to the processing of her personal data to make telemedicine services available because her phone contact list is not necessary for this purpose.
- Right to withdraw consent– Users have the right to withdraw consent at any time. If a user withdraws consent, a company is required to, within a reasonable time, cease and cause its data processors to cease processing that user’s personal data.
- Exception for voluntarily provided data– There are certain legitimate uses for which companies may process users’ personal data without obtaining prior express consent. When a user has voluntarily provided her personal data for a specific purpose, such as providing a phone number to a pharmacy in order to receive a digital SMS receipt of payment for goods, the company may process that personal data for that purpose.
General Data Fiduciary Obligations
- Responsibility for third party processors– A data fiduciary may involve a third-party data processor to process personal data on its behalf and must ensure completeness, accuracy, and consistency when personal data is being disclosed to another fiduciary.
- Implementing safeguard measures– Data fiduciaries must also take reasonable technical and organizational measures and implement reasonable security safeguards to prevent personal data breaches. In the event of a breach, the data fiduciary must notify the Board and each affected user.
- Establishing a point of contact– Data fiduciaries must publish the business contact information of a Data Protection Officer or a person who can answer on behalf of the entity if users have questions about the processing of their personal data.
Significant Data Fiduciaries
The government may designate certain data fiduciaries as “Significant Data Fiduciaries,” considering factors such as the volume and sensitivity of personal data processed, risks to the rights of users, security of the nation, and public order. Significant data fiduciaries have heightened obligations under the Act.
- Appointing a Data Protection Officer– Significant data fiduciaries are obligated to appoint a Data Protection Officer based in India who will be responsible to the Board of Directors and will be the point of contact for user grievances under the Act.
- Appointing an Independent Data Auditor– Significant data fiduciaries must also appoint independent data auditors to evaluate the entity’s compliance. In addition, significant data fiduciaries must undertake periodic Data Protection Impact Assessments, which include assessing the rights of users and managing the risks related to the processing of their personal data.
Processing of data of children and individuals with disabilities
- Necessary parental/guardian consent– All data fiduciaries must obtain parental or guardian consent before processing the data of children under the age of 18 or of individuals with disabilities who have a lawful guardian.
- Prohibition on behavioral monitoring/targeted advertising– Data fiduciaries may not process personal data that “is likely to cause any detrimental effect on the wellbeing of a child.” Data fiduciaries are also prohibited from tracking, behavioral monitoring, or targeted advertising directed at children. This provision will affect the marketing and advertising practices of US-based media companies whose consumer base includes Indian minors; such companies will need to ensure that they are not using the personal data of Indian minors for behavioral monitoring or targeted advertising purposes.
Rights and Duties of Users
- Right to know– Users have the right to request a summary of their personal data being processed by a data fiduciary, the identity of all other data fiduciaries and data processors with whom the personal data has been shared, and any other information related to their personal data.
- Right to correct– Users also have the right to correct, complete, update, and erase their personal data for the processing of which they previously gave consent. Upon receiving such a request, a data fiduciary must correct, complete, or update the personal data as requested.
- User duties– Users also have certain duties, including to not suppress any material information in providing their personal data and to not register frivolous complaints or grievances.
The Data Protection Board
- Establishment of the Board– The Act establishes the Data Protection Board of India, a government body whose members will possess knowledge of or experience in fields such as data governance, consumer protection law, information and communication technology, and the digital economy.
- Incident response mitigation and consumer complaints– The Board will oversee mitigation measures in the event of a personal data breach and will handle consumer complaints and grievances related to the processing of personal data. The Board will investigate incidents and complaints and has the authority to impose monetary penalties when appropriate. Penalties for violations and non-compliance can reach 2.5 billion rupees (approximately $30 million).
- Power to block information from data fiduciaries– When a data fiduciary has been subjected to financial penalties on two or more occasions, the Board may advise blocking public access to information generated, stored, received, or hosted in the fiduciary’s specific computer resources or platforms.
Click Here for the Original Article
Japan Addresses the Wage Gap by Requiring Gender Pay Gap Disclosure
This summer many Japanese companies took their first legally required steps toward joining the growing global movement to address gender inequality and promote equal opportunities in the workforce. The Act on Promotion of Women’s Participation and Advancement in the Workforce (“Act”), which took effect in 2022, mandates the public reporting of gender wage gaps within three months of the end of a company’s fiscal year. As many Japanese companies end their fiscal year in March, most employers had until the end of June 2023 to disclose the gender gaps revealed in their 2022 wage data. This was the first year when such annual public disclosure was required by the Act.
According to the Organization for Economic Cooperation and Development’s (OECD) gender wage gap data for 2022, Japan ranks fourth from the bottom among 38 OECD countries, with a gender wage gap of 22.1%. This gap is attributable to the low rate of female managers in Japan (15%) and the high percentage (50%) of female employees who work part-time or on fixed-term contracts. Because of this, and as one of the world’s largest economies, Japan has taken significant steps to narrow the gender pay gap and enhance women’s participation in the labor market. One crucial initiative in this regard involved revisions to the Act imposing wage transparency obligations on Japanese companies.
The Act, which took effect on July 8, 2022, requires Japanese companies to confront their gender wage gaps (the difference in pay between men and women) through public disclosure. For companies with more than 300 employees, the gender pay gap must be disclosed annually, within three months following the end of the fiscal year. The report must analyze the gender pay gap for the overall employee population as well as separately for specific employee groups, including regular employees (those on full-time indefinite contracts) and irregular employees (part-time and fixed-term employees). These calculations include base wages, bonuses and allowances but can exclude severance pay and commuting allowances. The gender pay gap data can be disclosed either on the company’s own website or on the government website established to promote and advance women’s participation in the labor market (https://positive-ryouritsu.mhlw.go.jp/positivedb/). Beginning in January 2023, the gender pay gap data was also required to be included in the annual securities reports of listed companies. Smaller companies are currently exempt from this portion of the law, but it may be extended to them in the future.
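As a rough illustration of the disclosure arithmetic, the sketch below computes the figures for the three required employee groupings. It assumes, for simplicity, that the disclosed figure is women’s average annual pay as a percentage of men’s for each category; the wage data and helper function are hypothetical, and actual calculations must follow the ministry’s implementing guidance.

```python
# Illustrative sketch of the Japanese gender pay gap disclosure arithmetic.
# Assumes the disclosed figure is women's average annual pay as a percentage
# of men's, computed separately for each required employee category. All
# figures are hypothetical; real calculations must follow ministry guidance.

from statistics import mean

def disclosure_ratio(male_pay: list[float], female_pay: list[float]) -> float:
    """Women's average annual pay as a percentage of men's.

    Inputs should already include base wages, bonuses, and allowances,
    and exclude severance pay and commuting allowances, per the Act.
    """
    return 100 * mean(female_pay) / mean(male_pay)

# Hypothetical annual wage data (in JPY) for one fiscal year.
payroll = {
    "regular":   {"male": [6_200_000, 7_100_000], "female": [5_400_000, 6_000_000]},
    "irregular": {"male": [2_900_000, 3_100_000], "female": [2_400_000, 2_600_000]},
}
payroll["all"] = {
    sex: payroll["regular"][sex] + payroll["irregular"][sex]
    for sex in ("male", "female")
}

for category in ("all", "regular", "irregular"):
    ratio = disclosure_ratio(payroll[category]["male"], payroll[category]["female"])
    print(f"{category}: women's average pay is {ratio:.1f}% of men's")
```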
Companies with more than 100 employees have been required to submit public annual gender-based “general employer action plans.” For employers with more than 300 employees, these plans must now include the gender pay gap analysis. Under the Act, action plans are to include defined goals, measures, and analyses of female employees’ activities and workplace challenges. In addition to being made available to the general public, action plans must also be submitted to prefectural labor bureaus for review.
Now more than ever, Japanese employers should take steps to review their compensation structures and policies to ensure they are fair and balanced, considering factors like performance evaluation criteria, promotion processes, and salary negotiation practices. Employers should take the opportunity to identify any potential areas where gender biases may exist and take steps to address them to comply with the requirements now in place and in anticipation that more wage equity efforts may be on the horizon.
Click Here for the Original Article
MISCELLANEOUS DEVELOPMENTS
Beware of State E-Verify Requirements for Remote Employees
As remote and hybrid work arrangements become increasingly common, many employers have expanded their recruiting efforts, hiring workers throughout the country. These multijurisdictional workforces give rise to new or previously overlooked regulatory pitfalls, including I-9 compliance and the use of E-Verify.
E-Verify Requirements: E-Verify is a web-based tool operated by U.S. Citizenship and Immigration Services, in partnership with the Social Security Administration, that allows employers to verify the work authorization of their employees. Subject to limited exceptions, the use of E-Verify is optional under federal law (unless an entity is a government contractor) and in several states. However, many jurisdictions have their own wider-ranging requirements. Some state laws parallel federal contractor requirements, limiting mandatory E-Verify to, for instance, state and local agency contractors. Others are much broader. By way of example, Florida recently enacted a law, effective July 1, 2023, pursuant to which all private employers with 25 or more employees must use E-Verify for their Florida employees. Florida joins states such as Arizona, North Carolina, South Carolina, Tennessee and Utah (among others) that require private employers (either all employers or those of a certain size) to use E-Verify. Failure to comply can result in fines, civil penalties and other sanctions.
Implications for Employers: State-specific E-Verify requirements should not pose any issues for employers already enrolled in the system. However, employers not currently enrolled in E-Verify should consider the implications of hiring remote employees in jurisdictions where the use of E-Verify is mandated. Keep in mind that once enrolled in E-Verify, employers must E-Verify all newly hired employees; they cannot use the system selectively. This means that the hiring of a single employee in a jurisdiction with mandatory E-Verify can impact the employer’s employment eligibility process in its entirety.
Next Steps for Employers Not Enrolled in E-Verify: Now is a good time to assess these issues. Employers who do not already utilize E-Verify should review the locations of their remote employees and any jurisdiction-specific E-Verify requirements to determine applicability (e.g., based on employer size, etc.).
- If there are employees in mandatory E-Verify jurisdictions who have not been E-Verified, consult with legal counsel to resolve the issue.
- If the employer does not have remote employees in such jurisdictions, establish a plan going forward: Is the employer willing to enroll in E-Verify in the future? Or should policies be implemented so as not to trigger those E-Verify requirements (e.g., not hiring remote workers in those jurisdictions)?
- Keep apprised of changes in applicable law—such as the new Florida E-Verify requirements—that can impact your workforce and verification procedures. The law changes rapidly, and it’s important to stay current.
One factor in considering whether to enroll in E-Verify is the new streamlined remote verification process. As detailed in our recent client alert, only employers enrolled in E-Verify are eligible to utilize the newly created remote verification procedures for I-9 documentation. That may be a significant enough benefit to justify enrolling in E-Verify, given the reduction in administrative burden and cost of the streamlined remote verification process.
The complex and ever-changing web of federal, state and local laws make it difficult for multijurisdictional employers to remain compliant. Employment counsel well versed in these issues can help.
Click Here for the Original Article
The EU Artificial Intelligence Act: What’s the Impact?
The EU Artificial Intelligence Act (or “AI Act”) is the world’s first legislation to regulate the use of AI. It leaves room for “technical soft law,” but, as the first such law and one broad in scope, it will inevitably set principles and standards for AI development and governance. The UK is concentrating more on soft law, working towards a decentralized principle-based approach. The US and China are working on their own AI regulations, with the US focusing more on soft law, privacy, and ethics, and China on explainable AI algorithms, aiming for companies to be transparent about their purpose. The AI Act marks a crucial step in regulating AI in Europe, and a global code of conduct on AI could harmonize practices worldwide, ensuring safe and ethical AI use. This article gives an overview of the EU Act and its main aspects, as well as an overview of other AI legislative initiatives in the European Union and how these are influencing other jurisdictions, such as the UK, the US and China.
The AI Act: The First AI Legislation. Other Jurisdictions Are Catching Up.
On June 14, 2023, the European Parliament achieved a significant milestone by approving the Artificial Intelligence Act (or “AI Act”), making it the world’s first piece of legislation to regulate the use of artificial intelligence. This approval has initiated negotiations with the Council of the European Union which will determine the final wording of the Act. The final version of the AI Act is expected to be published by the end of 2023. Following this, the Regulation is expected to be fully effective in 2026. A two-year grace period similar to the one contemplated by the GDPR is currently being considered. This grace period would enable companies to adapt gradually and prepare for the changes until the rules come into force.
As the pioneers in regulating AI, the European institutions are actively engaged in discussions that are likely to establish both de facto standards (essential for the expansion and growth of AI businesses, just as in any other industry) and de jure standards (creating healthy competition among jurisdictions) worldwide. These discussions aim to shape the development and governance of artificial intelligence, setting an influential precedent for the global AI community.
Both the United States and China are making efforts to catch up. In October 2022, the US government unveiled its “Blueprint for an AI Bill of Rights,” centered around privacy standards and rigorous testing before AI systems become publicly available. In April 2023, China followed a similar path by presenting draft rules mandating that chatbot-makers comply with state censorship laws.
The UK government has unveiled an AI white paper to provide guidance on utilizing artificial intelligence in the UK. The objective is to encourage responsible innovation while upholding public confidence in this transformative technology.
While the passage of the Artificial Intelligence Act by the European Parliament represents an important step forward in regulating AI in Europe (and indirectly beyond, given the extraterritorial reach), the implementation of a global code of conduct on AI is also under development by the United Nations and is intended to play a crucial role in harmonizing global business practices concerning AI systems, ensuring their safe, ethical, and transparent use.
A Risk-Based Regulation
The European regulatory approach is based on assessing the risks associated with each use of artificial intelligence.
Complete bans are contemplated for intrusive and discriminatory uses that pose unacceptable risk to citizens’ fundamental rights, their health, safety, or other matters of public interest. Examples of artificial intelligence applications considered to carry unacceptable risks include cognitive behavioral manipulation targeting specific categories of vulnerable people or groups, such as talking toys for children, and social scoring, which involves ranking people based on their behavior or characteristics. The approved draft regulation significantly expands the list of prohibitions on intrusive and discriminatory uses of AI. These prohibitions now include:
- biometric categorization systems that use sensitive characteristics such as gender, race, ethnicity, citizenship status, religion, or political orientation;
- “real-time” and “a posteriori” remote biometric identification systems in publicly accessible spaces, with an exception for law enforcement using ex post biometric identification for the prosecution of serious crimes, subject to judicial authorization. However, a complete ban on biometric identification might encounter challenges in negotiations with the European Council, as certain Member State police forces advocate for its usage in law enforcement activities, where strictly necessary;
- predictive policing systems based on profiling, location or past criminal behavior;
- emotion recognition systems used in the areas of law enforcement, border management, workplaces and educational institutions;
- untargeted extraction of biometric data from the Internet or CCTV footage to create facial recognition databases.
In contrast, those uses that need to be “regulated” (as opposed to simply banned) through data governance, risk management assessment, technical documentation, and criteria for transparency, are:
- high-risk AI systems, such as those used in critical infrastructure (e.g., power grids, hospitals, etc.), those that help make decisions regarding people’s lives (e.g., employment or credit rating), or those that have a significant impact on the environment; and
- foundation models, in the form of Generative AI systems (such as, for example, the highly celebrated ChatGPT) and Basic AI models.
High-Risk AI systems are artificial intelligence systems that may adversely affect security or fundamental rights. They are divided into two categories:
- Artificial intelligence systems used in products subject to the EU General Product Safety Directive. These include toys, automobiles, medical devices and elevators.
- Artificial intelligence systems that fall into eight specific areas, which will have to be registered in an EU database:
(i) biometric identification and categorization of natural persons; (ii) management and operation of critical infrastructure; (iii) education and vocational training; (iv) employment, worker management and access to self-employment; (v) access to and use of essential private and public services and benefits; (vi) law enforcement; (vii) migration management, asylum, and border control; (viii) assistance in legal interpretation and enforcement of the law.
All high-risk artificial intelligence systems will be evaluated before being put on the market and throughout their life cycle.
The Generative and Basic AI systems/models can both be considered general-purpose AI because they are capable of performing different tasks and are not limited to a single task. The distinction between the two lies in the final output.
Generative AI, like the now-popular ChatGPT, uses neural networks to generate new text, images, videos or sounds that have never been seen or heard before, much as a human can. For this reason, the European Parliament has introduced higher transparency requirements:
- companies developing generative AI will have to make sure that it is made explicit in the end result that the content was generated by the AI. This will, for example, make it possible to distinguish deep fakes from real images;
- companies will have to ensure safeguards against the generation of illegal content; and
- companies will have to make public detailed summaries of the copyrighted data used to train the algorithm.
Basic AI models, in contrast, do not ‘create,’ but learn from large amounts of data, use it to perform a wide range of tasks, and have application in a variety of domains. Providers of these models will need to assess and mitigate the possible risks associated with them (to health, safety, fundamental rights, the environment, democracy, and the rule of law) and register their models in the EU database before they are released to the market.
Next are the minimal- or low-risk AI applications, such as those used to date for translation, image recognition, or weather forecasting. Limited-risk artificial intelligence systems must meet minimum transparency requirements that enable users to make informed decisions: users should be informed when they are interacting with AI, including systems that generate or manipulate image, audio, or video content (e.g., deepfakes), and after interacting with such applications they can decide whether they wish to continue using them.
Finally, exemptions are provided for research activities and AI components provided under open-source licenses.
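For organizations taking stock of their AI portfolios, the Act’s tiers lend themselves to a simple triage exercise. The sketch below is purely illustrative: the category lists are abbreviated paraphrases of the draft categories described above, not a legal classification tool, and any real mapping of a system to a tier requires analysis of the final text.

```python
# Purely illustrative triage of AI use cases against the AI Act's draft risk
# tiers, using abbreviated paraphrases of the categories described above.
# Not a substitute for legal analysis of the final text.

PROHIBITED = {"social scoring", "realtime public biometric id",
              "predictive policing", "emotion recognition at work",
              "untargeted face scraping"}
HIGH_RISK = {"critical infrastructure", "credit scoring", "hiring",
             "border control", "education scoring", "law enforcement"}
LIMITED_RISK = {"chatbot", "deepfake generation"}  # transparency duties apply

def triage(use_case: str) -> str:
    if use_case in PROHIBITED:
        return "prohibited: banned outright under the draft Act"
    if use_case in HIGH_RISK:
        return "high-risk: evaluation before market entry and across the life cycle"
    if use_case in LIMITED_RISK:
        return "limited-risk: users must be informed they are interacting with AI"
    return "minimal-risk: e.g., translation, image recognition, weather forecasting"

for uc in ("hiring", "chatbot", "social scoring", "spam filtering"):
    print(f"{uc}: {triage(uc)}")
```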
The European Union and the United States Aiming to Bridge the AI Legislative Gap
The United States is expected to closely follow Europe in developing its own legislation. In recent times, the focus has shifted from a “light touch” approach to AI regulation toward an emphasis on ethics and accountability in AI systems. This change is accompanied by increased investment in research and development to ensure the safe and ethical use of AI technology. The Algorithmic Accountability Act, which aims to enhance the transparency and accountability of providers, is still in the proposal stage.
During the recent US-EU ministerial meeting of the Trade and Technology Council, the participants expressed a mutual intention to bridge the potential legislative gap on AI between Europe and the United States. These objectives gain significance given the final passage of the European AI Act. To achieve this goal, a voluntary code of conduct on AI is under development, and once completed, it will be presented as a joint transatlantic proposal to G7 leaders, encouraging companies to adopt it.
The United Kingdom’s ‘Pro-Innovation’ Approach in Regulating AI
On March 29, 2023, the UK government released a white paper outlining its approach to regulating artificial intelligence. The proposal aims to strike a balance between fostering a “pro-innovation” business environment and ensuring the development of trustworthy AI that addresses risks to individuals and society.
The regulatory framework is based on five core principles:
- Safety, security, and robustness: AI systems should function securely and safely throughout their lifecycle, with continuous identification, assessment, and management of risks.
- Appropriate transparency and explainability: AI systems should be transparent and explainable to enable understanding and interpretation of their outputs.
- Fairness: AI systems should not undermine legal rights, discriminate, or create unfair market outcomes.
- Accountability and governance: Effective oversight and clear lines of accountability should be established across the AI lifecycle.
- Contestability and redress: Users and affected parties should have the ability to contest harmful AI decisions or outcomes.
These principles are initially intended to be non-statutory, meaning no new legislation will be introduced in the United Kingdom for now. Instead, existing sector-specific regulators like the ICO, FCA, CMA, and MHRA will be required to create their own guidelines for implementing these principles within their domains.
The principles and sector-specific guidance will be supplemented by voluntary “AI assurance” standards and toolkits to aid in the responsible adoption of AI.
Contrasting with the EU AI Act, the UK’s approach is more flexible and perhaps ‘more proportionate’, relying on regulators in specific sectors to develop compliance approaches with central high-level objectives that can evolve as technology and risks change.
The UK government intends to adopt this framework quickly across relevant sectors and domains. UK sector specific regulators have already received feedback on implementing the principles during a public consultation that ran until June 2023, and we anticipate further updates from each of them in the coming months.
The Difficult Balance between Regulation and Innovation
The ultimate goal of these legislative efforts is to find a delicate balance between the necessity to regulate the rapid development of technology, particularly regarding its impact on citizens’ lives, and the imperative not to stifle innovation, or burden smaller companies with overly strict laws.
Anticipating the level of success is challenging, if not impossible. Nevertheless, the scope for “soft law” such as setting up an ad hoc committee at a European level shows promise. “Ultratechnical” matters subject to rapid evolution require clear principles that stem from the value choices made by legislators. Moreover, such matters demand technical competence to understand what is being regulated at any given moment.
Organizations using AI across multiple jurisdictions will additionally face challenges in developing a consistent and sustainable global approach to AI governance and compliance due to the diverging regulatory standards. For instance, the UK approach may be seen as a baseline level of regulatory obligation with global relevance, while the EU approach may require higher compliance standards.
As exemplified by the recent Italian shutdown of ChatGPT (see ChatGPT: A GDPR-Ready Path Forward?), we have witnessed firsthand the complexities involved. The Italian data protection authority assumed a prominent role, and rather than contesting the suspension of the technology in court, the business chose to cooperate. As a result, the site was reopened to Italian users within approximately one month.
In line with Italy, various other data protection authorities are actively looking into ways to influence the development and design of AI systems. For instance, the Spanish AEPD has implemented audit guidance for data processing involving AI systems, while the French CNIL has created a department dedicated to AI with open self-evaluation resources for AI businesses. Additionally, the UK’s Information Commissioner’s Office (ICO) has developed an AI toolkit designed to provide practical support to organizations.
From Safety to Liability: The AI Act as a Precursor to an AI-Specific Liability Regime
The EU AI Act is part of a three-pillar package proposed by the EU Commission to support AI in Europe. The other pillars include an amendment to the EU Product Liability Directive (PLD) and a new AI liability directive (AILD). While the AI Act focuses on safety and ex ante protection and prevention with respect to fundamental rights, the PLD and AILD address damages caused by AI systems. Non-compliance with the AI Act’s requirements could also trigger, depending on the AI Act risk level of the AI system at issue, different forms and degrees of alleviation of the burden of proof under both the amended PLD (for no-fault product liability claims) and the AILD (for any other, fault-based, claim). The amended PLD and the AILD are less imminent than the AI Act: they have not yet been approved by the EU Parliament and, as directives, will require implementation at the national level. Yet the fact that they are coming is of immediate importance, as it gives businesses even more reason to follow, and possibly cooperate and partake in, the standard-setting process currently in full swing.
Conclusion
Businesses using AI must navigate evolving regulatory frameworks and strike a balance between compliance and innovation. They should assess the potential impact of the regulatory framework on their operations and consider whether existing governance measures address the proposed principles. Prompt action is necessary, as regulators worldwide have already started publishing extensive guidance on AI regulation.
Monitoring these developments and assessing the use of AI is key for compliance and risk management. This approach is crucial not only for regulatory compliance but also to mitigate litigation risks with contractual parties and complaints from individuals. Collaboration with regulators, transparent communication, and global harmonization are vital for successful AI governance. Proactive adaptation is essential as regulations continue to develop.
Click Here for the Original Article
Employers’ burgeoning use of and reliance upon artificial intelligence has paved the way for an increasing number of states to implement legislation governing its use in employment decisions. Illinois enacted first-of-its-kind legislation regulating the use of artificial intelligence in 2020, and as previously discussed, New York City just recently enacted its own law. In 2023 alone, Massachusetts, Vermont and Washington, D.C. have also proposed legislation on this topic. These legislative guardrails are emblematic of our collective growing use of artificial intelligence, and they underscore both the importance of understanding the legal issues this proliferating technology implicates and the need to keep abreast of the rapidly evolving legislative landscape. Below is a high-level summary of AI-related state legislation and proposals of which employers should be aware.
Illinois
In 2020, Illinois Gov. J. B. Pritzker signed into law the Artificial Intelligence Video Interview Act (the “Act”). As previously reported, the Act requires, among other things, employers who use artificial intelligence to analyze video interviews to do the following:
- Provide notice: Before an interview, employers must inform applicants that artificial intelligence may be used to analyze the applicant’s video interview and consider the applicant’s fitness for the position.
- Provide an explanation: Before an interview, employers must explain to the applicant how their artificial intelligence program works and what characteristics the technology uses to evaluate an applicant’s fitness for the position.
- Obtain consent: Before an interview, employers must obtain the applicant’s consent to have the artificial intelligence evaluate them. Employers may not use artificial intelligence to evaluate a video interview without consent.
- Maintain confidentiality: Employers will be permitted to share the videos only with persons whose expertise or technology is needed to evaluate the applicant’s fitness for the position.
- Destroy copies: Upon the applicant’s request, employers must destroy both the video and all copies thereof within 30 days after such request (and instruct any other persons who have copies of the video to destroy their copies as well).
The Act leaves many issues unresolved. For instance, the Act itself does not define “artificial intelligence”. Indeed, even with the proliferation of this technology, there is no settled legal definition of the term. The Act similarly is silent as to what kind and level of information is sufficient to meet the statute’s “explanation” requirement. Nor does the Act specify to which employers it applies and to whom it affords protections — or even if there is a private right of action.
While many of the Act’s nuances remain unsettled, the Illinois legislature has not shied away from tightening the reins on employers’ use of the technology and its myriad capabilities. Illinois passed legislation, effective January 1, 2022, imposing robust reporting requirements on those employers who rely solely on artificial intelligence analysis of video interviews to determine whether to select an applicant for an in-person interview. Under the amendments, such employers must collect and report:
- the race and ethnicity of applicants whom the employer does not provide with the opportunity for an in-person interview after the use of artificial intelligence analysis, and
- the race and ethnicity of applicants whom the employer hires.
Employers must report this demographic data to the Department of Commerce and Economic Opportunity annually by December 31 of each calendar year, with the report to include the data collected in the 12-month period ending on the preceding November 30. The Department will analyze the reported data and, by July 1 of each year, report to the Governor and General Assembly whether the data discloses a racial bias in the use of artificial intelligence.
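In practice, the amendment’s reporting obligation reduces to tallying two populations by race and ethnicity each reporting year. The sketch below illustrates that bookkeeping; the record layout and field names are hypothetical, as the statute does not prescribe a data format.

```python
# Minimal sketch of the bookkeeping the Illinois amendment requires: tally
# race/ethnicity for (1) applicants screened out after AI video-interview
# analysis and (2) applicants hired. Field names are hypothetical; the
# statute does not prescribe a data format.

from collections import Counter
from datetime import date

applicants = [
    {"race_ethnicity": "Hispanic or Latino", "ai_screened_out": True,  "hired": False},
    {"race_ethnicity": "White",              "ai_screened_out": False, "hired": True},
    {"race_ethnicity": "Black",              "ai_screened_out": False, "hired": True},
]

def annual_report(records, period_end: date) -> dict:
    """Aggregate the two demographic tallies for the 12-month reporting period."""
    return {
        "period_ending": period_end.isoformat(),  # 12 months ending November 30
        "not_advanced_after_ai_analysis": Counter(
            r["race_ethnicity"] for r in records if r["ai_screened_out"]),
        "hired": Counter(r["race_ethnicity"] for r in records if r["hired"]),
    }

print(annual_report(applicants, date(2023, 11, 30)))
```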
New York
More recently, effective July 5, 2023, New York City enacted a law even more robust than its Illinois counterpart. The New York City Automated Employment Decision Tools Law (“AEDTL”) prohibits employers and employment agencies from using automated employment decision tools unless: (a) the tool has been subjected to a bias audit within a year of its use or implementation; (b) information about the bias audit is publicly available; and (c) employers provide certain written notices to employees or job candidates.
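The centerpiece of the required bias audit is a comparison of selection rates across demographic categories. The sketch below illustrates the impact-ratio arithmetic described in the city’s implementing rules, under the assumption that each category’s selection rate is divided by the rate of the most-selected category; the categories and figures shown are hypothetical.

```python
# Minimal sketch of the selection-rate / impact-ratio computation at the core
# of an AEDTL bias audit, assuming the approach in the city's implementing
# rules: each category's selection rate divided by the highest category's rate.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps category -> (number selected, number of applicants)."""
    rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical screening outcomes for one automated tool.
outcomes = {
    "Male":   (48, 120),   # 40% selected
    "Female": (30, 100),   # 30% selected
}
for category, ratio in impact_ratios(outcomes).items():
    # Ratios well below 1.0 for a category may indicate disparate impact.
    print(f"{category}: impact ratio {ratio:.2f}")
```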
Other Proposed Legislation
The Illinois and New York laws are part of a growing trend to regulate the use of artificial intelligence in the workplace. Massachusetts, Vermont and Washington, D.C. are following in kind and seek to impose their own safeguards.
Massachusetts
The Massachusetts Act Preventing a Dystopian Work Environment (H1873), introduced February 16, 2023, requires employers to provide notice to workers prior to adopting an automated decision system. The Act defines ADS as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes or assists an employment-related decision.”
Current status: The bill remains pending in the House before the Joint Committee on Labor and Workforce Development.
Washington, D.C.
Similarly, the D.C. Stop Discrimination by Algorithms Act of 2023 (B114), introduced February 2, 2023, prohibits businesses from using algorithms to make determinations regarding “important life opportunities,” including “opportunities to secure employment,” when the determination is based on protected characteristics such as race, color, religion, national origin, sex, or disability.
Current status: On February 10, 2023, the Council published Notice of an Intent to Act in the District of Columbia Register.
Vermont
Vermont, too, has proposed similar legislation. Vermont H114, introduced January 25, 2023, restricts the use of automated decision systems for employment-related decisions. The legislation defines ADS as “an algorithm or computational process that is used to make or assist in making employment-related decisions, judgments, or conclusions” and specifically includes artificial intelligence.
Current status: The House has referred the bill to the Committee on General and Housing, where it remains pending.
Federal Guidance
The EEOC, too, has chimed in on the topic. As we previously reported, in May 2023 the EEOC issued guidance on The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. Among other things, this guidance defines key terms and explains how the use of algorithmic decision-making tools may violate the Americans with Disabilities Act. Significantly, the guidance makes clear an employer cannot insulate itself from liability arising from its use of artificial intelligence by utilizing a third-party vendor to develop and/or administer the tool.
Takeaways
The proliferation of guidance and legislation governing employers’ use of artificial intelligence underscores the need for employers to be cognizant of all pertinent laws and remain vigilant if they utilize the technology’s capabilities.