Owen Davis is a Siegel In-House Research Fellow and the author of Artificial Intelligence and Worker Power, which explores how AI is shifting power dynamics in workplace and bargaining contexts, including monitoring and surveillance, predictive analytics in wage bargaining, and algorithmic management systems that diminish worker autonomy. Owen received his PhD in economics from the New School for Social Research in 2023, where he studied labor market institutions and demographics.
Tell us a little bit about your career path. Prior to your time at Siegel, you worked as a journalist reporting on topics including education, finance and the economy. What drew you to labor economics and to your current research interests?
In my previous life as a journalist, I found myself writing for a financial and economics blog, where I spent a good deal of time mining academic research for content. I found that I enjoyed reading academic research, and I thought I might improve my journalism abilities if I spent some time in a master’s program in economics. I took some courses and soon discovered that I much preferred the practice of economics to that of journalism, so I stuck with it and went on to earn my PhD this past year.
I was drawn to labor economics as a field because it’s micro enough to be able to answer distinct questions in somewhat believable ways, but still deals with insoluble contradictions that I find fascinating. For example, we live in a free society, and yet we agree to spend a very large portion of our time giving up some of those freedoms in order to work. We are also complex and irreducible individuals who can’t be boiled down to a single number – and yet, the labor market gives us a single wage that to some degree defines us. At its best, labor economics is able to take these contradictions and open questions, put some theory around them, and bring some data to the task to see whether the theories hold up.
You recently posted a working paper about how the introduction of digital technologies affects employment relations and worker wellbeing, with a particular focus on AI and AI-enabled workplace technology. Tell us a bit more about what you found. Why is it important? What sets this work apart from other research being done in the field?
My overarching goal with this working paper is to add a new angle or perspective to the work around AI. A lot of the research so far focuses on automation – how AI might do the tasks that some workers currently do on the job, which occupations are most exposed, or which types of jobs might actually get a boost from AI assistance. And while I think those are extremely important questions, they leave aside many other dimensions of how AI might affect work. If we think about AI as a general purpose technology that can be applied to almost every field or industry, we can see that AI will have effects not just on what workers do or the tasks that they complete, but also how they’re managed, the design and flow of their jobs, the structures and hierarchies of the workplace, and so on.
For example, AI already has some foothold in the workplace in areas like monitoring workers, bargaining, recruiting job seekers, directing employees, and evaluating their output. And when AI has a role in the overall job context, there are implications for worker power. I use “worker power” to mean the various ways that workers capture some piece of the pie. This can happen directly through bargaining, like how we negotiate for jobs and salary, but also through more subtle pathways that affect how workers are compensated and what they do on the job.
In the paper I offer some toy models to formalize certain pathways where AI and workplace technology could affect worker power. Answering those questions tells us something about what might happen with inequality. When workers are displaced, or when there’s less demand for certain skills, or when workers are augmented – all of these have an impact on inequality. But even if a worker’s job is not changing due to automation, the way that they are recruited, hired, negotiated with, overseen, managed, monitored, etc, might change. And that could affect how the pie is split between the employer and worker with big impacts, or at least widespread impacts, on inequality.
What do you hope the impact of your research might be for the field, as well as stakeholder groups, such as workers, workplaces, and policy frameworks?
First, I hope that there is an impact on the academic literature directly. This paper is an attempt to set forth a research agenda around AI and worker power, and to reframe, or at least put a new perspective on, the conversation around AI and workers. My hope is that the models in my paper serve as explanatory and exploratory devices (rather than full-fledged new models of the labor market in its totality), which inspire other economists to develop the theoretical side further and bring data to bear on these models. There are many types of research that can follow these ideas.
As far as stakeholders – I hope to impact worker groups, nonprofits, and civil society folks who are concerned with workplace technology, data rights, and workers in general. I don’t expect them to care about the equations, but I do hope that the paper offers a theoretical framework that is approachable to a lay audience, and can be useful for framing some of these AI and work conversations in a way that doesn’t have to fall back entirely on the automation question.
Right now, there is a disconnect between academic research that focuses entirely on the automation risk, and the practitioners, advocates and nonprofits who experience the first- and second-order effects of automation and the inequality it’s bringing. They have documented the way gig workers are algorithmically managed or warehouse workers are constantly monitored, showing this is about much more than just automation. I hope this framework is useful in helping folks working in those spaces conceptualize the issues at play.
What are the biggest challenges you and economic researchers broadly face in approaching AI in the workplace? What do you wish more people understood? What tools do you wish existed? What gaps in knowledge exist?
This is going to be a somewhat prosaic answer, but we need more data around these questions. There are big federal government surveys that ask many, many employers whether they are using various types of AI technologies and machine learning tools in the production of goods and services. What those surveys are not asking about right now is workplace AI – AI in management and HR. We really don’t know how prevalent these tools are, and we haven’t done much work defining them in a way that would be relevant for survey research. It remains an open question.
We could also use more on-the-ground research in contexts where certain types of workplace AI are being used, to explore what the effects are, either qualitatively or quantitatively.
What is one question you would like to see investigated, and one that’s currently not possible given the data we collect?
If I had a magic wand, I would have some large corporation with many offices do a random rollout of some new AI-based monitoring tool, performance evaluation system, or hiring/recruitment/pay-setting tool – and then offer researchers access to data before and after. We could see what the effects are on employees – satisfaction, turnover, pay, and profitability, of course. Those are the sorts of studies that can provide really credible causal answers to the questions I’m raising, and I’m sure there will be more of that research in the future. As of now, such studies are few and far between, if they exist at all.
What do you wish the general public would understand better? What should an average person whose job is being affected by AI be doing or asking themselves?
It depends on which aspect of their job is being affected by AI. To go back to my framework – I want people to not just think about AI as affecting the tasks they’re doing (for example, automated writing or analyzing X-rays to make a diagnostic decision) but also how it’s touching their overall workplace experience (for example, monitoring output or deciding someone’s pay). The answers are different for those two questions – but still pretty tricky. A lot of research centers and nonprofits are working on these questions.
What matters here is employee voice. There is a lot to be gained from any venue that allows workers to exercise their voice, individually or collectively, to influence the way that tools are rolled out, designed, and deployed – and, even more ideally, to have some impact on the decision to incorporate tools in the first place. That can also be mutually beneficial to employers and workers: if technology is going to be brought in, it should be brought in in a way that workers understand, that they consent to, and that makes work better, more efficient, more meaningful, and more productive without sacrificing job quality, autonomy, or the dignity of the worker.
Unfortunately, there’s not an instruction manual for exercising employee voice. It’s relatively straightforward when there’s an institution like a union to channel worker voice, but in most other contexts and in most of the labor market, it does not exist. And there’s a great amount of variability in the tools that are available to workers to exercise their voice and to amplify their concerns.
What can employers do? How can employers be thinking about leveraging AI tools in their own workplaces? What types of considerations should they be thinking about before they bring an AI tool to bear on their community?
One consideration is the goals: what is the employer actually trying to achieve? Experimentation with AI might be a valid goal in and of itself. But when something that impinges on job quality is brought in without any consultation with workers, the pure experimentation motive might not go over so well.
Another consideration is that a lot of the employment relationship between workers and employers is not spelled out explicitly. It’s implicit, and relies on norms, traditions, and tacit understandings. It might not be clear what those norms are until they’re violated. And trust, once lost, is very hard to regain. This again underscores the importance of bringing some amount of worker consultation or voice into the selection and use of technologies that have major workplace impacts.
What are you reading/watching/listening to right now that you would recommend to readers, and why?
I just started reading a book called The Unaccountability Machine, by Dan Davies, a longtime writer, blogger, economist, and kind of jack of all trades. It’s a fascinating book that explores the way in which complex systems – such as markets and bureaucracies – develop black holes of accountability, where something bad can happen and yet no one can be found to pin the blame on. The example that comes up a lot is a flight getting delayed or canceled, and no one at the airport knows what’s going on. Everyone’s yelling at low-level customer service representatives, but they don’t know either. It’s just a vast and complex system where accountability gets lost in the ether. The way that Davies discusses it sheds a lot of light on the nature of contemporary economies and societies. He uses the framework of management cybernetics, which has some fascinating implications for how we think about society and economies. Highly recommend!