Megan Shahi reflects on why she’s urging tech companies to take concrete actions to protect democracy in 2024 and beyond
Megan Shahi, a Siegel Research Fellow, has had an unusual path to her current role as director of technology policy at the Center for American Progress (CAP). While there’s a steady stream of researchers and academics who leave the public and civil society sectors for careers in tech companies, Shahi went the opposite direction. We sat down with Shahi to find out what motivated her to make the leap.
In our conversation, Shahi shares recommendations from her new report about safeguarding democracy online in the 2024 election season, explains why she believes that it’s important to establish shared language and methodologies between Washington, D.C. and Silicon Valley, and explores the power of civil society institutions to effect change.
You’re new to the civil society sector. You had a career in government and in the tech sector before coming to the Center for American Progress. Tell us about those experiences.
Out of college, I worked at the U.S. Department of the Treasury. As the Obama Administration was wrapping up, I found that the work was interesting and important, but it felt divorced from the reality of where policy implementation happens—where the rubber really meets the road in doing good by the intended recipients of the policy.
When my position at Treasury ended, I decided to explore tech as a place where I could see the impact of policy on people. I landed at Facebook on a crisis response team.
In the two years that I spent doing crisis management, a number of global events occurred that put an intense spotlight on issues like integrity, trust, safety, privacy, and researcher access: the Cambridge Analytica scandal, the Christchurch shooting, the death of Molly Russell in the U.K., the Sri Lanka Easter bombings, and the Parkland shooting. As part of this role, I also served in Meta’s first-ever U.S. election “War Room” in 2018.
I had pretty much the opposite experience from the one I'd had in government, and I realized, "This is what happens to actual people when policies fall short, when products don't hold up, and when protocols are insufficient."
After two years, I realized that I wanted to add a policy perspective to this work, so I moved to Instagram's product policy team. My job was to sit down with engineers, data scientists, researchers, and product managers. I was the voice for transparency, accountability, and regulatory risk vis-à-vis engagement, growth, clicks, users, youth—all of that.
It was important work, but I felt that the things that I was advocating for at Instagram were almost always beaten out by other priorities. I moved to Twitter, which was at the forefront of setting tech policy precedent at the time. Ultimately, my entire team was laid off when Elon Musk took over the company in November 2022.
What did you learn about how the tech sector operates from your experiences at Facebook, Instagram, and Twitter?
Even though Twitter's policy team was more robust, my takeaway was the same as at Instagram and Facebook. At the most foundational level, the companies' incentives were frequently going to run counter to policy, regulation, transparency, and accountability to users. Naturally, the incentive of a for-profit organization is to make money. The companies are not evil; they are just that: companies. I don't blame the well-intentioned individuals working there at all either; I was one of them!
But these misaligned incentives underscore my underlying belief that these companies cannot successfully self-regulate. Tech companies are not going to make changes unless something forces their hand. Regulatory action, akin to the FDA's oversight of medicine or the regulatory controls on the finance industry, is prudent and needed for social media and the technology industry.
How did you decide to make the leap from tech to the civil society sector?
It's a question I get a lot because it's not a well-trodden path, but it is one I'm trying to evangelize through things like the Siegel Research Fellowship. There are so many people who go from public service or civil society institutions to tech, but not nearly as many who go the opposite direction.
When my team was eliminated at Twitter, I took a step back and reassessed my career trajectory. I had been in tech for several years and questioned how much additional value and impact I could add by continuing this work at another company. I considered roles at other tech firms, but I ultimately decided to take everything I'd learned and bring it to a place where I might be able to add more value and shape the trajectory of the industry. I could have gone to yet another social media platform or generative AI developer and recommended all of these things, but they already know them. I wouldn't be saying anything new or different that they hadn't already heard. I also knew that, at the core of it, they weren't going to implement most of it. I wasn't going to be the reason they changed.
In my current role at CAP, I’ve been able to take everything I’ve learned and impart it in a new way. I’m still thinking about the same issues—things like elections and mitigating risks of new technological advances like generative AI. But the difference is the how.
Tell us more about the difference. What are you able to do in your role now that wasn’t possible when you were working within industry?
I'm seeking to uplevel the conversation between D.C. and the proverbial Silicon Valley, this time on the side of policymakers and nonprofits imploring the companies to do better. It's a new feeling for me. I always joke that I'm so used to being told "no" that I understand the word in every language. I was always the least popular function when I worked in tech. In every room, no one really wanted to hear from me; I was usually telling them what they didn't want to hear. It feels strange that people are now so eager to hear about my experience.
But, it leads to really good conversations because people in D.C. typically don’t have that type of exposure to the tech sector. They’re floored when they hear that Elon Musk fired my whole team at Twitter. It’s almost like a parlor trick that I can pull out. Then it leads people to ask about my experience in tech and about what it’s like to advocate for these issues internally and now externally. That’s the value-add I’m trying to provide.
I now have a unique opportunity to help influence regulation and shape the strategy across the tech policy landscape here in D.C. Without breaking any NDAs, I can share my experience in tech and explain how to think about these problems and craft solutions that are pertinent and future-proof. In that way I can really be a bridge.
You recently published a new report, Protecting Democracy Online in 2024 and Beyond. In what ways does this report reflect that bridge-building role that you were just describing?
The report is the first big attempt I made at putting together the foundation of this bridge between D.C. and the tech sector. What I think is lacking, and what I'm trying to work on, is building a shared understanding of the problems and a shared way of talking about them.
In the policy world there’s a lot of well-founded anger and frustration at the tech companies. But most of the time, well-meaning groups and individuals are not using the same words, methodologies, or ways of talking about these problems or proposing solutions.
My report is written almost exactly how I would propose policies and solutions internally when I worked at Meta and X: here are the risks, and here’s what can happen if we don’t mitigate them. Even if the companies don’t follow a single recommendation from the report, I will not let them look me in the eye and say they don’t understand or that we aren’t speaking the same language. I’ve been on the other side of such reports and I know that they understand this one.
Can you talk us through the process of developing the recommendations that you lay out in the report?
My process was actually very similar to the one that I used when I worked in tech. There, it was focused on a particular product surface—hashtags, Reels, or feed rankings, for example. I might say, “To protect against misinformation risk, you could do X, Y, and Z. How you prioritize that is up to you. But here’s my recommendation.”
For the report, I zoomed out and asked, “What is the set of things that we can ask tech platforms to do to address these issues?” I started writing out everything I could think of. The list was 200 items long to start. Then I started bucketing them. What are the staffing-related things? What are the policy recommendations? Could these two or three things be combined? Is fact-checking its own section, or is it a part of accurate information work streams? What about auditing?
I sorted all of these recommendations to land on the five broad categories you see in the report today: first, policy, process, and protocol; second, transparency; third, staffing and personnel; fourth, external product changes; and fifth, researcher access.
What recommendations do you make in the report? How would a tech company implement some of these recommendations?
The recommendations related to policy, process, and protocol are really the bread and butter of what tech companies are doing. This is some stuff you may see on your apps, and some stuff that you probably will never see. It's deciding when and how to put a big mitigation into place when an election is coming. What are the boundaries around that? What is the press you're sending around that? How are you thinking this through in a way that's defensible to the public if something goes awry? My goal was really to create a menu of options for companies and platforms of all sizes. So, some options might apply to Discord, whereas others might apply to YouTube, for example.
There are specific callouts around staffing. In the Israel/Hamas crisis, we’re seeing that there have been slower responses to removing harmful, violent content and mitigating the spread of mis- and disinformation. That’s a direct impact of all the layoffs from last year and this year, so the recommendation is to provide adequate staffing and prevent more backsliding on that front.
The companies are already doing a lot around transparency, so the recommendation is to continue the strides that have been made over the last few years. They collect droves of data, and it is not terribly difficult for them to anonymize and publish that data for the sake of transparency and trust. Regulation, particularly in the European Union, is also advancing and requiring more of this type of transparency.
That’s also true of external product changes and researcher access. One of the things that I am watching is whether regulations that are coming down the pike in Europe end up raising the floor for tech companies in the United States. If tech companies are required to do certain things in Europe, there may be a subset of changes they make universally.
Finally, I’d be remiss to write about tech platform accountability and not consider generative AI developers, which is the reason it has its own section. I intentionally kept those recommendations high-level due to the nascent but rapidly developing nature of the technology and because external policy and enforcement documentation for these systems is severely lacking today.
What obstacles do you foresee to companies implementing these recommendations?
Staffing is tough because it requires resources, and we're seeing layoffs rather than hiring in the tech sector right now. I'm heartened that the pendulum seems to be swinging back a little bit, though. Hopefully, companies are realizing that the U.S. presidential election is less than a year away, along with many other important elections happening all over the world, and that they need to prepare for that.
But the truth is that tech companies have little incentive to make changes unless their hand is forced. So, part of the way we do it is to get Congress to pass legislation, even though that seems unlikely in the short-term.
What’s the best, realistic way of overcoming these obstacles to implementing the recommendations that you lay out in the report?
In lieu of governmental regulation, civil society organizations like the Center for American Progress, Center for Democracy and Technology, The Leadership Conference on Civil and Human Rights, and Accountable Tech must step up. It is incumbent on us to call out the harms, offer solutions to mitigate them, and hold companies accountable. I can spot the dog and pony show a mile away, partly because I used to help craft said dog and pony show. I’m hopeful we can implore them to do better and do more beyond that show.
We don’t need to be whistleblowers, necessarily. Whether we like it or not, we have to work with the companies to improve the tech landscape for everyone. I’m trying to push for change in a methodical, slow, calm manner.
Why is it important that government, civil society institutions, and ordinary people take action around these issues now?
This is a really critical moment. We have all of these important elections, both in the United States and globally, in addition to major conflicts occurring in real time around the world. It's critical to pay heightened attention to the role of technology and social media in safeguarding democracy. We're backsliding in many ways, and it's important that we clearly state the actions that need to be taken to promote election integrity online.
This report is my attempt at grabbing a megaphone to say that somewhere between 2.5 and 3 billion people around the world are going to vote next year. That is quite significant. And then you layer on the legacy problems of social media, which are detailed in the report. Then layer on new technologies like generative AI. We need to capitalize on this moment to push companies to think and do things differently. The stakes are very high.
What issues are you interested in tackling next? And how do you expect the Siegel Research Fellow community can help you in that work?
My next big project is to really dig into generative AI and think about what the needed mitigations should be for that nascent technology, specifically how developers should build systems to protect users in both first- and third-party use cases. I hope we as an industry can carry forward the hard-learned lessons from social media. Stay tuned for more on that!
I've already had some one-on-one calls with other Siegel Research Fellows and am looking forward to opportunities to connect as a cohort as well. The other fellows offer new and different perspectives and have different areas of expertise, which is hugely beneficial. I'm also particularly excited to tap into the brilliant academic minds in the cohort. Now I know that when I have a concern about equity in AI systems, or a curiosity about labor and workforce implications, or educational opportunities, I don't have to have an answer all on my own. I can discuss it with my fellow cohort members and we can all benefit from each other's expertise. I'm looking forward to deeper learning and collaboration!
More from Megan Shahi:
- Report: Protecting Democracy Online in 2024 and Beyond by Megan Shahi, published by Center for American Progress, September 2023
- Op-ed: Opinion: Election denialism nearly shattered our democracy. Meta’s allowing it anyway. by Megan Shahi, published by The Hill, November 2023
- Podcast: Safeguarding the 2024 Global Elections on Social Media, with guest Megan Shahi, hosted by Center for Strategic and International Studies, recorded September 2023
- Comment letter: Priorities for a National AI Strategy by Megan Shahi and Adam Conner, published by Center for American Progress, August 2023