The Business Lawyer
American Bar Association
“What a Piece of Work Is AI”—Security and AI Developments
Roland L. Trope
DOI: 10.928/ac.2021.03.34, Volume 76, Issue 1

I. Introduction

Artificial intelligence (“AI”) gives users of digital hand tools (e.g., cell phone, tablet, laptop computer) enhancements that bring with them novel and unresolved security vulnerabilities and risks.1

AI, as used here, refers to narrow or weak AI: the creation of digital systems that do things humans use their minds to do, but do them faster, more accurately, and more consistently, and that generate insights and predictions beyond what humans can do.2 AI systems may be thought of metaphorically as “power tools”3 that augment human work and productivity, particularly when such work can be performed as a “prediction.” One kind of AI system is machine learning:

Machine learning . . . approaches problems as a doctor progressing through residency might: by learning rules from data. Starting with patient-level observations, algorithms sift through vast numbers of variables, looking for combinations that reliably predict outcomes. . . . [W]here machine learning shines is in handling enormous numbers of predictors—sometimes, remarkably, more predictors than observations—and combining them in nonlinear and highly interactive ways. This capacity allows us to use new kinds of data, whose sheer volume or complexity would previously have made analyzing them unimaginable.4
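To make the quoted description concrete, the following is a minimal sketch, using synthetic data and an illustrative scikit-learn model (choices assumed for this example, not drawn from the cited study), of an algorithm that “learns rules from data” and combines more predictors than observations in nonlinear ways:

    # Minimal sketch: synthetic data, more predictors than observations,
    # and a nonlinear model that learns a rule from the data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_observations, n_predictors = 200, 500   # more predictors than observations
    X = rng.normal(size=(n_observations, n_predictors))
    # The outcome depends on a nonlinear interaction of just two predictors.
    y = (X[:, 0] * X[:, 1] > 0).astype(int)

    model = RandomForestClassifier(n_estimators=300, random_state=0)
    model.fit(X[:150], y[:150])               # learn from the first 150 observations
    print("held-out accuracy:", model.score(X[150:], y[150:]))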

This essay on AI and security developments proceeds as follows. Part II addresses the Federal Trade Commission’s 2019 settlement with Facebook covering allegations that included deceptive acquisition of data for the company’s AI tools. Part III discusses the problematic use of AI to set credit limits for Apple Card applicants. Part IV introduces Illinois’ Artificial Intelligence Video Interview Act. Part V addresses Executive Order No. 13905, Strengthening National Resilience Through Responsible Use of Positioning, Navigation, and Timing Services. Part VI contains concluding observations.

II. United States v. Facebook, Inc.

A. FTC/Facebook 2012 Settlement

In 2012, the Federal Trade Commission (“FTC”) filed a complaint alleging that, since 2009, Facebook had engaged in unfair and deceptive practices. Facebook settled the FTC’s allegations. The Commission Order (“2012 Order”) prohibited Facebook from misrepresenting “the extent to which a consumer can control the privacy of any covered information . . . and the steps a consumer must take to implement such controls” and “the extent to which [Facebook] makes or has made covered information accessible to third parties.”5

B. FTC/Facebook 2019 Settlement

In 2019, the FTC and the U.S. Department of Justice alleged that Facebook had failed repeatedly to comply with the 2012 Order. For instance, Facebook told third-party developers that, after April 2015, it would cease sharing user data with apps that a user’s Friends used; Facebook, however, “had private arrangements with dozens of . . . ‘Whitelisted Developers,’ that allowed those developers to continue to collect” user data from apps their Friends used.6

On July 24, 2019, Facebook settled the FTC’s charges (“2019 Settlement”).7 Facebook agreed to pay a $5 billion penalty and implement an array of privacy and security safeguards, including some specifically related to Facebook’s use of AI-augmented facial recognition. For example, Facebook “shall not create any new Facial Recognition Templates, and shall delete any existing Facial Recognition Templates,” unless Facebook discloses how it “will use, and . . . share, the Facial Recognition Template for such User, and obtains such User’s affirmative express consent.”8

In April 2020, the FTC’s Bureau of Consumer Protection posted on the FTC’s website a guidance entitled Using Artificial Intelligence and Algorithms (“Guidance”). The Guidance seeks to help companies “manage the consumer protection risks of AI and algorithms.”9 The Guidance references the 2019 Settlement to highlight the need to avoid deceptive practices when collecting sensitive data for AI:

Be transparent when collecting sensitive data. The bigger the data set, the better the algorithm, and the better the product for consumers, end of story . . . right? Not so fast. Be careful about how you get that data set. Secretly collecting audio or visual data—or any sensitive data—to feed an algorithm could also give rise to an FTC action. Just last year, the FTC alleged that Facebook misled consumers when it told them they could opt in to facial recognition—even though the setting was on by default. As the Facebook case shows, how you get the data may matter a great deal.10

III. Problematic Use of AI to Set Credit Limits for Apple Card Applicants

AI tools may be defective due to errors in design (so that they do not “learn” correctly from their data sets), errors contained in the data sets (embedding bias), or errors introduced into the data sets by bad actors. As a 2017 RAND study explained:

[A]n artificial agent is only as good as the data it learns from. Automated learning on inherently biased data leads to biased results. . . . Applying procedurally correct algorithms to biased data is a good way to teach artificial agents to imitate whatever bias the data contains.11

Learning algorithms tend to be vulnerable to characteristics of their training data. This is a feature of these algorithms: the ability to adapt in the face of changing input. But algorithmic adaptation in response to input data also presents an attack vector for malicious users. This data diet vulnerability in learning algorithms is a recurring theme.12
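The following minimal sketch, built on synthetic data and assumed feature names, illustrates the quoted point: a procedurally correct learning algorithm trained on biased historical decisions learns to imitate that bias.

    # Minimal sketch: a model trained on biased historical approvals
    # reproduces the bias, even though the algorithm itself is "correct."
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    income = rng.normal(50, 10, n)                 # legitimate predictor
    group = rng.integers(0, 2, n)                  # protected attribute (0/1)
    # Historical approvals penalized group 1 even at identical incomes.
    approved = (income + rng.normal(0, 5, n) - 8 * group) > 50

    X = np.column_stack([income, group])
    model = LogisticRegression(max_iter=1000).fit(X, approved)

    same_income = np.array([[55.0, 0.0], [55.0, 1.0]])   # identical income, different group
    print(model.predict_proba(same_income)[:, 1])        # learned approval gap persists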

An alleged example of an unlawfully biased AI algorithm surfaced in 2019 involving the Apple Card. In August 2019, Apple, in partnership with Goldman Sachs as the issuing bank, began inviting consumers to apply for its Apple Card credit card. An Apple press release touted Apple Card’s AI advantages, but did not disclose that AI augmentation would help identify “qualified” customers and set their credit limits.13 In November 2019, Danish entrepreneur David Hansson tweeted that his wife had been denied a “credit line increase for the Apple Card,” although her credit score exceeded his14: “‘My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple’s black box algorithm thinks I deserve 20x the credit limit she does . . . .’”15 Apple assured Hansson the credit determination did not reflect gender discrimination, “citing the [AI] algorithm that makes Apple Card’s credit assessments.”16 Hansson’s tweets “went viral.”17 Hansson’s wife eventually received “a ‘VIP bump’ to match his [Apple Card] credit limit.”18 The AI malfunction remained unexplained.

Goldman reportedly is “responsible for all credit decisions”19 for Apple Card applicants, and Goldman “implemented” the algorithm.20 Neither Apple’s nor Goldman’s denials of discrimination, nor their defenses of the product, explained the apparent gender-based discrepancy, the AI algorithm, its role in such decisions, or any affirmative precautions that Goldman had taken to prevent the algorithm from generating gender-biased predictions. Instead, Goldman took the position that the algorithm did not use gender as a criterion and therefore could not produce gender-biased predictions. This explanation ignored the inferential power of AI algorithms and may propagate a serious misconception—that algorithmic bias will not exist if the data that trains the algorithm does not contain or reflect bias. As a WIRED report explains:

Goldman landed on what sounded like an ironclad defense: The algorithm, it said, has been vetted for potential bias by a third party; moreover, it doesn’t even use gender as an input. How could the bank discriminate if no one ever tells it which customers are women and which are men?

This explanation is doubly misleading. For one thing, it is entirely possible for algorithms to discriminate on gender, even when they are programmed to be “blind” to that variable. For another, imposing willful blindness to something as critical as gender only makes it harder for a company to detect, prevent, and reverse bias on exactly that variable. . . .

A gender-blind algorithm could end up biased against women as long as it’s drawing on any [data] input or inputs that happen to correlate with gender.21
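A minimal sketch, using synthetic data and a hypothetical proxy feature (both assumptions made only to illustrate the quoted point), shows how a model that never receives the protected attribute can still reproduce a gap through a correlated input:

    # Minimal sketch: a "gender-blind" model trained without the protected
    # attribute still produces a gap, via an input correlated with gender.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n = 5000
    gender = rng.integers(0, 2, n)                 # 1 = women; never given to the model
    income = rng.normal(60, 10, n)
    proxy = gender + rng.normal(0, 0.3, n)         # e.g., a spending-category score
    # Historical credit limits carried a gender penalty.
    limit = 100 * income - 2000 * gender + rng.normal(0, 500, n)

    X = np.column_stack([income, proxy])           # gender column excluded
    model = LinearRegression().fit(X, limit)

    pred = model.predict(X)
    print("mean predicted limit, men:  ", round(pred[gender == 0].mean()))
    print("mean predicted limit, women:", round(pred[gender == 1].mean()))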

With no financial discriminator identified or acknowledged as the cause, gender discrimination—by humans, or embedded in the design of the AI algorithm or trained into the algorithm by flawed or “poisoned” data—appeared a possible cause, unless the Hanssons’ experience was an outlier.

It proved not to be an outlier. On November 9, 2019, Apple co-founder Steve Wozniak tweeted: “The same thing happened to us. I got 10x the credit limit. We have no separate bank or credit card accounts or any separate assets. Hard to get to a human for a correction though. It’s big tech in 2019.”22

That month, New York’s Department of Financial Services (“NYDFS”) opened an investigation into Apple Card’s issuing bank and AI algorithms used to determine credit limits.23 The NYDFS Superintendent explained the investigation would seek “to determine whether New York law was violated and ensure all consumers are treated equally regardless of sex. . . . Any algorithm that intentionally or not results in discriminatory treatment of women or any other protected class of people violates New York law.”24

It may seem startling that a creditor could be liable for unintentional discriminatory treatment resulting from its use of an AI algorithm. But strict liability, or liability without specific intent to discriminate, is the applicable standard under the Equal Credit Opportunity Act (“ECOA”). The ECOA prohibits disparate treatment “against any applicant, with respect to any aspect of a credit transaction—(1) on the basis of race, color, religion, national origin, sex or marital status, or age.”25 ECOA’s implementing regulations provide: “A creditor shall not discriminate against an applicant on a prohibited basis regarding any aspect of a credit transaction.”26 And, the Consumer Financial Protection Bureau’s official interpretation of this rule emphasizes: “Disparate treatment on a prohibited basis is illegal whether or not it results from a conscious intent to discriminate.”27

IV. Illinois’ Artificial Intelligence Video Interview Act

Companies increasingly use AI to review and rank job applicant resumes (at a speed and scale that human reviewers could not match) and, in some instances, use AI to “analyze applicants’ facial expressions during video job interviews.”28 Such analysis focuses on an array of facial and eye expression cues that the AI model compares to a target profile that purports to be indicative of traits the employer seeks in applicants and traits the employer does not want in applicants.

Companies that use AI as an applicant-selecting tool include Dunkin’ Donuts, IBM, Carnival Cruise Lines, the Boston Red Sox, and Unilever USA.29 Unilever’s algorithm examines videos of applicants “answering questions for around 30 minutes, and through a mixture of natural language processing and body language analysis, determines who is likely to be a good fit.”30

Unilever’s target profile of a preferred candidate’s positive traits includes systemic thinking, resilience, and business acumen.31 It is unclear if Unilever’s target profile excludes disfavored negative traits, such as a lack of candor. Unilever’s AI tool reportedly identifies the applicants that best match the target profile and “returns those to a human recruiter, along with notes from the AI about what it observed in each candidate.”32

Unilever’s AI assesses the presence or absence of such traits in an applicant’s video. It predicts an employee’s probable “success” by recognition not of a person’s identity, but of the degree to which the applicant’s facial expressive traits match those of “previously successful employees.”33 It is unclear whether Unilever scrutinizes its target profile for bias that may be inherent in the traits of successful Unilever employees (which might include gender and racial bias). It is risky to rely on AI’s apparent objectivity and proficiency in selecting candidates or in setting credit limits, even if it applies its rules more consistently than humans ever could. In such cases, “if a company has traditionally skewed toward (or away from) certain categories of people, the AI will learn to do the same unless the training is handled very carefully to avoid this outcome.”34
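For illustration only, the following sketch shows the general shape of such a target-profile comparison; the trait names, numbers, and similarity measure are assumptions, not Unilever’s or its vendor’s actual method. If the “previously successful employees” skew toward particular groups, the target profile built from them inherits that skew.

    # Minimal sketch (hypothetical): score each applicant by similarity to a
    # "target profile" averaged from previously successful employees.
    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Assumed trait scores (e.g., systemic thinking, resilience, business acumen)
    # extracted from past employees' interview videos.
    past_successful = np.array([
        [0.90, 0.80, 0.70],
        [0.80, 0.90, 0.60],
        [0.85, 0.75, 0.80],
    ])
    target_profile = past_successful.mean(axis=0)

    applicants = {
        "applicant_A": np.array([0.88, 0.82, 0.70]),
        "applicant_B": np.array([0.40, 0.90, 0.90]),
    }
    for name, traits in applicants.items():
        print(name, "match score:", round(cosine(traits, target_profile), 3))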

Possibly concerned by such risks, Illinois’ legislature, on May 29, 2019, unanimously passed35 the Artificial Intelligence Video Interview Act (“Act”).36 The Act, which came into effect on January 1, 2020,37 is reportedly the first state statute aimed at regulating the use of AI in the employee hiring process.38

The Act applies to an employer in Illinois that considers applicants for “positions based in Illinois,” asks such applicants to “record video interviews,” and uses AI to analyze the “applicant-submitted video.”39 To engage in such AI-augmented hiring practices, an employer must obtain the applicant’s prior consent: “An employer may not use artificial intelligence to evaluate applicants who have not consented to the use of artificial intelligence.”40

To obtain the requisite consent, an employer must meet three conditions:

    Notify the applicant that AI “may be used to analyze the applicant’s video interview and consider the applicant’s fitness for the position.”41
    Provide the applicant with information that explains “how the artificial intelligence works and what general types of characteristics it uses to evaluate applicants.”42
    Obtain the applicant’s consent to be “evaluated” by the AI program “as described in the information provided.”43

The Act requires that all three conditions be met “before the interview,” but does not specify how long before the interview those conditions must be met. Thus it is unclear whether there is a minimum period before the interview when an employer must give an applicant an explanation of “how” the AI “works.” The Act is silent on whether employers may give a desired category of applicants a written explanation far in advance of an interview, and give a less preferred category of applicants an oral explanation minutes “before the interview.” Doing so would risk impermissible bias and might deny some applicants a fair opportunity to consider the significance of the notice before consenting or refusing to consent to AI analysis of their video interview.

The Act does not define “artificial intelligence,” nor set criteria for what constitutes a sufficient explanation of “how” the AI “works,” nor explain the term “applicant-submitted video.” It would appear the Act applies to video interviews that applicants initiate, on a digital device, and then upload or “submit” to the employer.

The uncertainties of the timing for the notice, the level of explanation of “how” the AI “works,” and lack of a definition of key terms such as “artificial intelligence” set the stage for what could be an employer/applicant impasse: the applicant might object to an opaque or uninformative explanation of “how” the AI “works” and condition consent on receipt of an improved explanation; the employer might refuse to give it, decline to interview the applicant, and thereby exclude the applicant from hiring consideration. Other applicants, on learning of such results, might consent rather than risk rejection.

Thus, in practice, the Act’s required consent to AI analysis of an interview video may dwindle to a consent ritual similar to a pre-surgical “informed consent”: a last-minute exercise, often conducted with haste and opacity, that provides little, if any, protection to the patient. However, surgeons and anesthesiologists seek to heal, not select, a patient. Physicians answer a patient’s questions to allay fears of surgery’s uncertain outcome, not to select which patients qualify for surgery. Employers, not bound by medical ethics, may be less patient or less willing to answer questions about their use of AI, or may tag an applicant’s request for an improved explanation as a negative trait or a departure from the target profile.

It is noteworthy that the requisite explanation of “how” the AI “works,” which includes explaining “what general types of characteristics it uses to evaluate applicants,” would not appear to protect applicants against an employer’s deliberate or inadvertent use of biased or otherwise defective AI analysis of a video interview.

Perhaps most problematically, the Act omits any requirement that employers secure their interview-video AI tools against unauthorized access. Such security, at a minimum, might include audits to check whether the AI software, algorithms, or training data have been accessed and modified. The more successful the company, and the more essential it is to U.S. critical infrastructure or national security, the greater the chances that its AI tools will be targeted by bad actors. Competitors might seek access to modify the AI training data in order to impair the company’s ability to select the most qualified candidates. Foreign adversaries might pursue access to doubly distort the data, causing the AI to underrate qualified candidates and overrate candidates who might be sympathetic to, or plants of, the adversary. AI is remarkably susceptible to such hacks and to corruption of its training data:

AI models can be hacked by inserting a few tactically inserted pixels (for a computer vision algorithm) or some innocuous looking typos (for a natural language processing model) into the training set. . . . Let’s say you have a model you’ve trained on data sets. Its classifying pictures of cats and dogs. . . . People have figured out ways of changing a couple of pixels in the input image, so now the network image is misled into classifying an image of a cat into the dog category. . . . The image still looks the same to our eyes. . . . But somehow it looks vastly different to the AI model itself.44
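The second half of the quoted passage describes a test-time adversarial perturbation. The sketch below shows the mechanics on a toy linear “cat vs. dog” classifier with assumed random weights (standing in for a real vision model): every pixel is nudged by an imperceptibly small amount in the direction that raises the wrong class’s score, and the label flips.

    # Minimal sketch: an FGSM-style adversarial perturbation against a toy
    # linear classifier (assumed weights; not a real vision model).
    import numpy as np

    rng = np.random.default_rng(3)
    w = rng.normal(size=784)              # weights of the (assumed) trained classifier
    b = 0.0

    def predict(x):
        return "dog" if x @ w + b > 0 else "cat"

    # Construct an image the classifier labels "cat" with a small margin.
    cat_image = rng.normal(size=784)
    cat_image -= w * ((cat_image @ w + b) / (w @ w) + 0.01)
    print("original prediction:", predict(cat_image))         # "cat"

    # Nudge each pixel slightly in the direction that raises the "dog" score.
    epsilon = 0.05
    adversarial = cat_image + epsilon * np.sign(w)
    print("largest per-pixel change:", np.abs(adversarial - cat_image).max())
    print("perturbed prediction:", predict(adversarial))      # flips to "dog"

For a linear model the perturbation follows the sign of the classifier’s weights, which is the gradient-sign direction that most efficiently moves the score toward the wrong class.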

V. Executive Order No. 13905

A. GPS Provision of PNT Services and Data

The Global Positioning System (“GPS”), a U.S.-government-owned utility, provides positioning, navigation, and timing (“PNT”) services and information to civilian and military users worldwide.45 Positioning data enable one to determine accurately one’s precise location and orientation; navigation data give one the “ability to determine current and desired position . . . and apply corrections to course . . . and speed to attain a desired position anywhere around the world”; and timing data enable one to “acquire and maintain accurate and precise time . . . anywhere in the world and within user-defined timeliness parameters.”46

GPS provides three-dimensional navigational data to ships, aircraft, trains, and mobile phones. GPS also provides a fourth dimension of data crucial to the reliable operation of critical infrastructure—precise time and frequency data for synchronizing devices and systems.47 As observed in a recent Scientific American article, “[a]lthough we think of GPS as a handy tool for finding our way to restaurants and meetups, the satellite constellation’s timing function is now a component of every one of the 16 infrastructure sectors deemed ‘critical’ by the Department of Homeland Security.”48

B. EO Sets Standard for Resilient Use of PNT Services

Critical infrastructure’s dependence on GPS timing signals means that a disruption of GPS signals, or corruption or modification of GPS timing data, could de-synchronize devices and systems that cannot operate properly or safely in such a destabilized state. Because GPS signals must travel over 12,000 miles from satellites to Earth-based receivers, they are attenuated, weak, and vulnerable to being “jammed” (depriving the user of the signal) or “spoofed” (when a slightly stronger signal delivers false data about the recipient’s location and the time at that location).49 Experts express concern that adversaries or terrorists could launch a coordinated jamming and spoofing attack against the GPS system. Such an attack could

severely degrade the functionality of the electric grid, cell-phone networks, stock markets, hospitals, airports . . . all at once, without detection. The real shocker is that U.S. rivals do not face this vulnerability. China, Russia and Iran have terrestrial backup systems that GPS users can switch to and that are much more difficult to override than the satellite-based GPS system. The U.S. has failed to achieve a 2004 presidential directive to build such a backup.50
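One simplified way a GPS-dependent system might detect such jamming or spoofing (not a method prescribed by the EO, NIST, or the quoted article; the threshold and data structures here are assumed) is to compare GPS-reported time against a local holdover clock and flag offsets large enough to suggest a missing or manipulated timing signal:

    # Minimal sketch: flag GPS timing samples that drift too far from a
    # local holdover clock, a possible sign of disruption or manipulation.
    from dataclasses import dataclass

    @dataclass
    class TimingSample:
        local_clock_s: float   # time from a local oscillator (holdover clock)
        gps_time_s: float      # time recovered from the GPS receiver

    MAX_OFFSET_S = 1e-3        # assumed alarm threshold; real systems tolerate far less

    def check_pnt_samples(samples):
        alerts = []
        for s in samples:
            offset = abs(s.gps_time_s - s.local_clock_s)
            if offset > MAX_OFFSET_S:
                alerts.append(f"possible disruption/manipulation: offset {offset:.6f} s")
        return alerts

    samples = [
        TimingSample(1000.000, 1000.0000004),   # normal agreement
        TimingSample(1001.000, 1001.0450000),   # 45 ms jump: investigate
    ]
    print("\n".join(check_pnt_samples(samples)) or "no anomalies detected")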

To reduce GPS vulnerabilities and improve its resilience, the President, on February 12, 2020, issued Executive Order No. 13905, Strengthening National Resilience Through Responsible Use of Positioning, Navigation, and Timing Services (“EO”).51 The EO emphasizes that disruption of GPS-dependent PNT services—or their “manipulation”—“has the potential to adversely affect the national and economic security of the United States.”52 The EO announces a U.S. policy to “ensure that disruption or manipulation of PNT services does not undermine the reliable and efficient functioning of its critical infrastructure.”53

To implement that policy of continuity of PNT services, the EO introduces a new standard—“responsible use of PNT services,” vaguely defined as “the deliberate, risk-informed use of PNT services, including their acquisition, integration, and deployment, such that disruption or manipulation of PNT services minimally affects national security, the economy, public health, and the critical functions of the Federal Government.”54

When applied, the standard would appear to require users of PNT services to avert “disruption or manipulation” or, failing that, to operate their PNT devices and services at a level of resilience that limits any bad actor’s “disruption or manipulation” to a minimal effect on “national security, the economy, public health, and the critical functions of the Federal Government.”

The EO neither defines crucial terms in the standard—“disruption,” “manipulation,” and “minimally affects”—nor authorizes federal agencies to issue regulations to clarify them. With such terms undefined, a “user” will have difficulty navigating or positioning its activities into compliance with the standard when it emerges in agency-generated “PNT profiles” (explained below).

The EO does not define the term “user,” but it is reasonable to infer that the EO aims at enterprise, not consumer, users. Enterprise users might include, without limitation, financial institutions, telecoms (4G and 5G), mobile phone and map app makers, airlines, railroads, oil and gas companies, and bulk power system operators.

C. PNT Profiles

To coax PNT service users to improve resilience, the EO requires the Secretary of Commerce (“SECCOM”), by February 12, 2021, and in coordination with the heads of Sector-Specific Agencies (“SSAs”), to “develop and make available” to an undefined set of “appropriate agencies and private sector users” what it terms “PNT profiles.”55 The EO defines “PNT profiles” to mean: “a description of the responsible use of PNT services—aligned to standards, guidelines, and sector-specific requirements—selected for a particular system to address the potential disruption or manipulation of PNT services.”56

Deconstructed, the EO seems to require the SECCOM and SSAs to develop standards of resilience that will apply to specific categories of PNT service users and will be aimed at minimizing “potential disruption or manipulation of PNT services.” The set(s) of user-specific resilience standards will be referred to as “PNT profiles.” The EO expressly assumes, without explanation, that making PNT profiles available is something the government can do better than industry, and that, once available, the PNT profiles will “enable” public and private PNT service users to perform three tasks toward improving PNT service resilience:

    “identify systems, networks, and assets dependent on PNT services”;
    “detect the disruption and manipulation of PNT services”; and
    “manage the associated risks to the systems, networks, and assets dependent on PNT services.”57

The EO does not require that PNT users meet or try to meet their respective PNT profiles. Instead, it mandates that, within ninety days of the PNT profiles’ being made available, federal government agencies, working through the Secretary of Homeland Security, “develop contractual language for inclusion of the relevant information from the PNT profiles in the requirements for Federal contracts for products, systems, and services that integrate or utilize PNT services.”58

To inform development of PNT profiles, the National Institute of Standards and Technology (“NIST”) issued, on May 27, 2020, a Request for Information (“RFI”) to PNT vendors and service users. The RFI asks respondents to identify and describe processes, procedures, approaches, or technologies to “manage cybersecurity risks to PNT services,” “detect disruption or manipulation of PNT services,” and “recover or respond to PNT disruptions.”59 NIST has made publicly available all relevant responses.60

VI. Concluding Observations

AI tools derive their augmentation capabilities or “learn” from exposure to dynamic data sets. The fact that AI “learns” means its learning process can be subverted: it may be hacked and tampered with so that the model that emerges from what AI “learns” may fail to make accurate forecasts, or generate biased predictions, or malfunction in other ways an adversary intends.

Algorithms encounter inputs they have not been trained on and cannot recognize, and they must be dynamically retrained to identify such inputs correctly.61 AI is as dynamic and protean as the data that trains it, but AI cannot accurately predict “outside the box” of the data that trains it. AI that works right today may not work right tomorrow. Security incidents may manipulate data or algorithms. AI machines may be trained to perform “adversarial AI” to confuse and subvert the operations of other AI machines.62 AI thus faces continuous data quality challenges that necessitate checking and verifying data throughout the development process. Routine re-verifications may be viewed as “azimuth checks.” In land navigation, “azimuth” expresses direction, and each “azimuth check” re-verifies whether one’s route is on course to the destination. AI development needs its own “azimuth checks” to verify that its developers are performing the task at hand correctly and will create a model that forecasts accurately.63 AI’s dynamic intersections with security make “azimuth checks” of data and algorithms a necessary safeguard no matter “how noble in reason! how infinite in faculty!”64 the AI may seem to be.
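As a minimal sketch of what such an “azimuth check” might look like in an AI development pipeline (the threshold, placeholder hash, and function names are assumptions, not an established standard), a team could periodically re-verify both the integrity of its training data and the model’s accuracy on a held-out set:

    # Minimal sketch: an "azimuth check" that re-verifies the training data
    # and the model's accuracy before development proceeds.
    import hashlib
    import numpy as np

    EXPECTED_DATA_SHA256 = "<hash recorded when the data set was approved>"
    MIN_HOLDOUT_ACCURACY = 0.90     # assumed acceptance threshold

    def data_unchanged(path: str) -> bool:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() == EXPECTED_DATA_SHA256

    def model_on_course(model, X_holdout: np.ndarray, y_holdout: np.ndarray) -> bool:
        accuracy = (model.predict(X_holdout) == y_holdout).mean()
        return accuracy >= MIN_HOLDOUT_ACCURACY

    def azimuth_check(model, data_path: str, X_holdout, y_holdout) -> None:
        if not data_unchanged(data_path):
            raise RuntimeError("training data modified since last verification")
        if not model_on_course(model, X_holdout, y_holdout):
            raise RuntimeError("model accuracy drifted below acceptance threshold")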

Notes

1 See Miles Brundage et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation 17–18 (2018), https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/MaliciousUseofAI.pdf?ver=1553030594217.
2 See Kathleen Walch, Rethinking Weak vs. Strong AI, Forbes (Oct. 4, 2019), https://www.forbes.com/sites/cognitiveworld/2019/10/04/rethinking-weak-vs-strong-ai/#1f0a849a6da3; Roland L. Trope & Charles C. Palmer, AI-Controlled Vehicles: How Will We Frame Thy Fearful Symmetry?, in The Law of Artificial Intelligence and Smart Machines 127 (Theodore F. Claypoole ed., 2019).
3 Marc Donner, Statement at New York City Bar Association Webcast: Emergence of AI as Collaborator, as Creator: An Exploration of the Intersection of AI, IP, and Security (June 10, 2020) (recording available from the author).
4 Ziad Obermeyer & Ezekiel J. Emanuel, Predicting the Future—Big Data, Machine Learning, and Clinical Medicine, 375 New Eng. J. Med. 1216, 1217 (Sept. 29, 2016), https://www.nejm.org/doi/full/10.1056/NEJMp1606181.
5 Decision and Order at 4, In re Facebook, Inc., No. C-4365 (F.T.C. July 27, 2012).
6 Complaint for Civil Penalties, Injunction, and Other Relief at 4, United States v. Facebook, Inc., No. 1:19-cv-2184 (D.D.C. July 24, 2019), https://www.ftc.gov/system/files/documents/cases/182_3109_facebook_complaint_filed_7-24-19.pdf.
7 See Press Release, Fed. Trade Comm’n, FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook (July 24, 2019), https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions.
8 See Stipulated Order for Civil Penalty, Monetary Judgment, and Injunctive Relief at 8, United States v. Facebook, Inc., No. 1:19-cv-02184 (D.D.C. July 24, 2019), https://www.ftc.gov/system/files/documents/cases/182_3109_facebook_order_filed_7-24-19.pdf.
9 Andrew Smith, Using Artificial Intelligence and Algorithms, Fed. Trade Commission (Apr. 8, 2020, 9:58 AM), https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms.
10 Id.
11 Osonde Osoba & William Welser IV, An Intelligence in Our Image 17 (2017).
12 Id. at 7.
13 Apple Card Launches Today for All US Customers, Apple (Aug. 20, 2019), https://www.apple.com/newsroom/2019/08/apple-card-launches-today-for-all-us-customers/.
14 Taylor Telford, Apple Card Algorithm Sparks Gender Bias Allegations Against Goldman Sachs, Wash. Post (Nov. 11, 2019, 10:44 AM), https://www.washingtonpost.com/business/2019/11/11/apple-card-algorithm-sparks-gender-bias-allegations-against-goldman-sachs/.
15 Id. (emphasis added).
16 Id.
17 See, e.g., Shahien Nasiripour & Sridhar Natarajan, Apple Co-founder Says Goldman’s Apple Card Algo Discriminates, Spokesman-Rev. (Nov. 11, 2019), https://www.spokesman.com/stories/2019/nov/11/apple-co-founder-says-goldmans-apple-card-algo-dis/.
18 Neil Vigdor, Apple Card Investigated After Gender Discrimination Complaints, N.Y. Times (Nov. 10, 2019), https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html.
19 New York Regulator Probes Apple Card Algorithms for Gender Bias After Viral Tweets, Market-Watch (Nov. 11, 2019), https://www.marketwatch.com/story/new-york-regulator-probes-apple-card-algorithms-for-gender-bias-after-viral-tweets-2019-11-09.
20 Jeremy Horwitz, Goldman Explains Apple Card Algorithmic Rejections, Including Bankruptcies, VentureBeat (July 2, 2020), https://venturebeat.com/2020/07/02/goldman-explains-apple-card-algorithmic-rejections-including-bankruptcies/.
21 Will Knight, The Apple Card Didn’t “See” Gender—and That’s the Problem, Wired (Nov. 19, 2019), https://www.wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/.
22 Steve Wozniak (@stevewoz), Twitter (Nov. 9, 2019, 7:51 PM), https://twitter.com/stevewoz/status/1193330241478901760.
23 See, e.g., Goldman Denies Discriminatory Apple Card Practices, Says It Will Reassess Limits, PYMNTS.com (Nov. 12, 2019), https://www.pymnts.com/apple/2019/goldman-denies-discriminatory-apple-card-practices-will-reassess-limits/.
24 Nasiripour & Natarajan, supra note 17.
25 15 U.S.C. § 1691(a) (2018).
26 12 C.F.R. § 1002.4(a) (2020).
27 Official Interpretation of Paragraph 4(a), Consumer Fin. Prot. Bureau, https://www.consumerfinance.gov/policy-compliance/rulemaking/regulations/1002/4/ (last visited Oct. 3, 2020) (emphasis added).
28 See Rebecca Heilweil, Illinois Says You Should Know If AI Is Grading Your Online Job Interviews, Vox: Recode (Jan. 1, 2020, 9:50 AM), https://www.vox.com/recode/2020/1/1/21043000/artificial-intelligence-job-applications-illinios-video-interivew-act.
29 See, e.g., Minda Zetlin, AI Is Now Analyzing Candidates’ Facial Expressions During Video Job Interviews, Inc.com (Feb. 28, 2018), https://www.inc.com/minda-zetlin/ai-is-now-analyzing-candidates-facial-expressions-during-video-job-interviews.html.
30 See Bernard Marr, The Amazing Ways How Unilever Uses Artificial Intelligence to Recruit & Train Thousands of Employees, Forbes (Dec. 14, 2018, 12:07 AM), https://www.forbes.com/sites/bernardmarr/2018/12/14/the-amazing-ways-how-unilever-uses-artificial-intelligence-to-recruit-train-thousands-of-employees/#2f61f1f56274.
31 See id.
32 See Zetlin, supra note 29.
33 See Marr, supra note 30.
34 The Artificial Intelligence Video Interview Act, BillTrack50 (July 9, 2019), https://www.billtrack50.com/blog/internet-tech/the-artificial-intelligence-video-interview-act/.
35 Daniel Waltz et al., Illinois Employers Must Comply with Artificial Intelligence Video Interview Act, SHRM (Sept. 5, 2019), https://www.shrm.org/resourcesandtools/legal-and-compliance/state-and-local-updates/pages/illinois-artificial-intelligence-video-interview-act.aspx.
36 See Bill Status of HB2557, Ill. Gen. Assemb., https://www.ilga.gov/legislation/BillStatus.asp?DocNum=2557&GAID=15&DocTypeID=HB&SessionID=108&GA=101? (last visited Oct. 3, 2020).
37 See id.
38 See, e.g., Aaron Burstein, Employers Beware: The Illinois Artificial Intelligence Video Interview Act Is Now in Effect, Ad L. Access (Jan. 15, 2020), https://www.adlawaccess.com/2020/01/articles/employers-beware-the-illinois-artificial-intelligence-video-interview-act-is-now-in-effect/.
39 Artificial Intelligence Video Interview Act, 820 Ill. Comp. Stat. Ann. 42/5 (2020).
40 Id.
41 Id. § 5(1).
42 Id. § 5(2).
43 Id. § 5(3).
44 Alex Woodie, Hacking AI: Exposing Vulnerabilities in Machine Learning, Datanami (July 28, 2020), https://www.datanami.com/2020/07/28/hacking-ai-exposing-vulnerabilities-in-machine-learning/ (internal quotation marks omitted).
45 See What Is Positioning, Navigation and Timing (PNT)?, U.S. Dept Transp., https://www.transportation.gov/pnt/what-positioning-navigation-and-timing-pnt (last updated June 13, 2017).
46 Id.
47 Id.
48 Paul Tullis, GPS Is Easy to Hack, and the U.S. Has No Backup, Sci. Am. (Dec. 1, 2019), https://www.scientificamerican.com/article/gps-is-easy-to-hack-and-the-u-s-has-no-backup/.
49 Id.
50 Id.
51 See Exec. Order No. 13,905, 85 Fed. Reg. 9359 (Feb. 12, 2020).
52 Id. § 1.
53 Id. § 3.
54 Id. § 2(b) (emphasis added).
55 Id. § 4(a).
56 Id. § 2(d).
57 Id. § 4(a).
58 Id. § 4(e) (emphasis added).
59 See Profile of Responsible Use of Positioning, Navigation, and Timing Services, 85 Fed. Reg. 31743, 31745 (May 27, 2020).
60 See Comments Received for RFI on Profile of Responsible Use of Positioning, Navigation, and Timing Services, NIST, https://www.nist.gov/itl/pnt/comments-received-rfi-profile-responsible-use-positioning-navigation-and-timing-services (last visited Oct. 3, 2020).
61 See Sydney J. Freedberg Jr., Pentagon’s AI Problem Is “Dirty” Data: Lt. Gen. Shanahan, Breaking Def. (Nov. 13, 2019, 9:52 AM), https://breakingdefense.com/2019/11/exclusive-pentagons-ai-problem-is-dirty-data-lt-gen-shanahan/.
62 Id.
63 Mel Holohan, 34 Military Terms and Their Meanings, Stacker (June 7, 2019), https://thestacker.com/stories/913/35-military-terms-you-could-be-using-real-life.
64 Shakespeare, The Tragedy of Hamlet, Prince of Denmark act 2, sc. 2.