Opening remarks on “AI in the Workplace: New Crisis or Longstanding Challenge”
On Thursday 9/28, I had the opportunity to speak at a virtual roundtable convened by Congressman Bobby Scott on “AI in the Workplace: New Crisis or Longstanding Challenge?” The roundtable was a closed meeting, but we are allowed to share our opening remarks, so I am posting mine here.
Below is a lightly edited transcript:
Thank you, Representative Scott, for organizing this, and thank you all
for your time and attention. As mentioned, I’m a professor of
linguistics at the University of Washington and I work in
computational linguistics. Part of what I would like to do here today
is give you some insight into how this technology works and how it
already shows up in the world around us.
What is AI?
In fact, this is a marketing term. It’s a way to make certain kinds of
automation sound sophisticated, powerful, or magical, and as such it’s
a way to dodge accountability by making the machines sound like
autonomous thinking entities rather than tools that are created and
used by people and companies. It’s also the name of a subfield of
computer science concerned with making machines that “think like
humans,” but even there the name started as a marketing term in the
1950s, coined to attract research funding to that field.
I think that discussions of this technology become much clearer when
we replace the term AI with the word “automation”. Then we can ask:
- What is being automated?
- Who’s automating it and why?
- Who benefits from that automation?
- How well does the automation work in its use case that we’re considering?
- Who’s being harmed?
- Who has accountability for the functioning of the automated system?
- What existing regulations already apply to the activities where the automation is being used?
In order to make this a bit more concrete, I want to break down the
different kinds of systems that are being called AI these days. There
are different types of automation:
[Image: illustration of the Kempelen chess-playing automaton, from Racknitz 1789, via Wikimedia Commons]
1. One type is using computers to automate consequential
decisions. These are called automated decision systems, and they’re
used, for example, in setting bail, approving loans, screening
resumes, or allocating social benefits.
2. Another kind of automation is automating different kinds of
classification: things like image classification to get the camera to
focus on faces, or classifying web users for targeted advertising.
3. A third type of automation is automating the choice of
information to present to someone. These are called recommender
systems; they’re the automation behind, for example, the ordering of
the feed in social media or movie suggestions on Netflix.
4. A fourth type is automating access to human labor, or making human
labor conveniently available to buyers. Here, think Uber, Lyft, Amazon
Mechanical Turk, and similar services.
5. The fifth type I want to call out is the automation of translating
information from one format to another: automatic transcription,
finding words and characters in images (like automatically reading
license plates), machine translation, or something like image style
transfer (“make this photo of me look professional”).
6. Then finally there’s a type that’s been very much on everyone’s
mind recently: things like ChatGPT, which I call synthetic media
machines. These are systems that generate images with specific
content or in specific styles, or plausible-sounding text without any
commitment to what it says.
I want to say just a few more words about ChatGPT. It’s important
to understand that its only job is autocomplete, but it can keep
autocompleting very coherently to produce very long texts. It’s being
marketed as an information access system, but it is not effective for
that purpose. You might as well be asking questions of a Magic 8 Ball
for all the connection to reality or understanding that it has.
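To make the autocomplete point concrete, here is a minimal sketch of the generation loop, using a toy bigram model in place of a neural network. Everything here (the `corpus`, the `follows` table, the `autocomplete` function) is hypothetical and illustrative, not any real system’s internals, but the loop itself (predict a plausible next word, append it, repeat) is the same basic idea.

```python
import random
from collections import defaultdict

# Toy "training data": the model will only ever learn which word
# tends to follow which word in this text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": tally, for each word, the words observed to follow it.
follows = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word].append(nxt)

def autocomplete(prompt_word, n_words=8):
    """Generate text by repeatedly predicting a plausible next word."""
    out = [prompt_word]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:
            break  # no continuation was ever seen in training
        out.append(random.choice(options))  # plausible, not true
    return " ".join(out)

print(autocomplete("the"))
# e.g. "the cat sat on the mat the cat ate"
```

Run it a few times and it produces fluent-looking sequences, but nothing in the loop consults reality; it only replays patterns from the training data. Real systems do this with neural networks over billions of documents, which makes the output far more coherent, not any more grounded.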
A key thing to keep in mind here is that the output of these
systems doesn’t actually make sense; it’s that we are making sense of
the output. That makes them very hard to evaluate, because we have to
take that distance from our own cognition to do so.
Finally, the ability to create plausible-sounding text on just about
any topic is quite dangerous, because it looks like we have, or are
just about to have, robo-lawyers, robo-doctors, robo-tutors,
robo-therapists, etc., and we don’t.
Popping back up to that full range of automation, I want to point out
that these systems have some characteristics in common:
They are built from training data, using algorithms that capture the
patterns in that data and can reproduce them over new data at runtime,
to varying degrees of accuracy and varying degrees of
desirability. For example, automatic transcription captures the
patterns of how sounds map to written words in sufficient detail
that we can get first-pass automatic captioning in
Zoom. That’s very useful, though we can also see problematic biases:
things like, if you’ve got a less frequent name, it’s more likely to be
transcribed poorly. Another example is the infamous COMPAS algorithm
for predicting recidivism risk, which reproduces patterns of racial
discrimination in policing in terrible ways. Yet another: image
systems trained on large collections from the web tend to reproduce
patterns of sexualization in images of women, especially women of
color. And similarly, ChatGPT and systems like it will output hate
speech and more subtly biased language, again reproducing these
patterns.
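Since “captures the patterns in the training data and reproduces them at runtime” is the mechanism behind every one of these examples, here is a deliberately tiny sketch of it. The scenario, names, and data are all hypothetical (a made-up resume screener trained on a handful of past decisions); the point is only that whatever skew is in the training data comes straight back out at runtime.

```python
from collections import Counter

# Hypothetical "training data": past hiring decisions, with a skew
# against the less frequent name baked in.
training = [
    ("Alex", "hire"), ("Alex", "hire"), ("Alex", "no"),
    ("Xiomara", "no"), ("Xiomara", "no"),
]

# "Training": tally the observed outcomes for each name.
outcomes = {}
for name, decision in training:
    outcomes.setdefault(name, Counter())[decision] += 1

def screen(name):
    # "Runtime": predict whatever outcome was most common for this
    # name in the training data; names never seen get no prediction.
    seen = outcomes.get(name)
    return seen.most_common(1)[0][0] if seen else "unknown"

print(screen("Alex"))     # "hire": the majority pattern, reproduced
print(screen("Xiomara"))  # "no":   the historical skew, reproduced
```

Real systems use far more sophisticated statistics over far more features, but the failure mode is the same: the system has no notion of fairness or accuracy beyond the patterns it was given.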
There are other things these systems have in common: they work well
because of the effort of people, but they’re often designed to hide
that effort. They may promote mass surveillance, either by enabling it
or by providing motivation for it. And they tap into automation bias,
that is, our cultural tendency to assume that computers must be
objective, authoritative, and fair.
Finally, the hype around these systems really serves corporate
interests: because it makes the tech look powerful and valuable,
because it distracts from the real issues that I hope regulators will
be focusing on, and because it makes AI seem too exotic to be
regulatable. My hope is that our representatives are critical
consumers of information about this technology and do not fall for the
narrative that this is moving too fast for regulation to keep up. Your
job is to protect rights, and those aren’t changing so fast.