Box CEO Aaron Levie finds a middle ground on tech policy during Trump’s second term

In our current political and media environment, the loudest voices are the ones farthest from the center. Tech hasn’t been spared, with some Silicon Valley leaders drifting right, not only out of ideology but also out of pragmatism. Box CEO Aaron Levie sits somewhere in the middle: not a MAGA-touting accelerationist like Marc Andreessen, nor a traditional progressive like Reid Hoffman. But he is clear-eyed about one thing: Donald Trump will likely preside over some of the most pivotal years in AI and innovation. And Levie sees reason for optimism.

Fast Company spoke to him about AI policy, AI and crypto “czar” David Sacks, Elon Musk’s DOGE, and AI safety concerns. The interview has been edited for length and clarity. 

I know you came out in support of Harris before the election, but now I’d like to find out what your assessment of Trump is so far, where technology is concerned.

During the Harris campaign, I felt like there was an opportunity, and there really needed to be a very strong kind of pro-technology, progress-type of push in the Democratic party. So I was trying to ensure that they saw key policy issues around AI and deregulation and how to drive more growth. And so that was my interest in the topic during the campaign cycle. 

But I just want America to succeed, and I want our tech position to be as strong as possible. And I think there’s a number of topics that kind of relate to that. There’s high-skill immigration, there’s AI policy, there are regulatory issues and topics that face the tech industry, especially more of the harder tech, more manufacturing-leaning parts of tech. And I think we have an opportunity as a country to ensure that we are building for not just the next couple of years, but the next couple of decades, and there’s a lot of key things that are going to happen right now around AI, robotics, autonomous vehicles, advanced manufacturing, new forms of energy; all of these things will intersect with either federal or state and local policy, and it’s critical to make sure that we’re heading in the right direction on those topics. 

As it relates to Trump, I think a number of things have been signaled that are actually extremely positive on those topics. I think we’re really early in seeing how they will manifest. Some of them were signaled on the campaign trail, some since he’s been in office, and to some extent I’m a little bit in a wait-and-see mode on how they all manifest and evolve. And you know, my views on them are extremely clear, and I’m hoping we lean in the right direction on a large number of those topics.

Let’s talk about AI first. I think you saw JD Vance give his Paris speech, and he did talk a bit about striking a balance between regulation and innovation, but he also seemed to scold Europe for what it has done with the AI Act. He accused them of going too far. Do you agree with him on that?

My rhetoric would probably be different, but I am worried that when it comes to policy conversations, in the U.S. and even globally, we have tilted more toward the precautionary views of AI as opposed to the productive, pro-progress views of AI. I thought it was compelling that the vice president talked about how AI is going to actually create (I’m going to now probably change some of the words) more of an abundance environment where we can actually use it to help jobs and create jobs, and we can use it to improve healthcare, and we can use it to drive manufacturing. And if you just take the word content of the speech and compare it to a speech that might have come from a U.S. politician a year or two ago, it leaned 80% more positive than negative, with the appropriate levels of calling out that there are things we do need to pay attention to.

Do you have thoughts about David Sacks and his appointment as AI and crypto czar?

I am much closer to the AI side and the AI czar; I don’t think about crypto that much. David’s a strong choice for that. David knows his way around Silicon Valley. He knows all the people doing all the important work at the leadership levels of AI labs and big tech. And so to have a conduit that can help marry what’s happening in Silicon Valley with the policy decisions happening in the government, I think he’s in a great position. I’m also very happy about Michael Kratsios on OSTP (Trump nominated Kratsios as director of the White House Office of Science and Technology Policy). I think he’s a very strong pick for that job. And so I think the administration has brought in sophisticated, thoughtful technologists and business leaders to help drive policy in the right direction.

You were also upbeat about Elon Musk and what he’s doing with DOGE, at least around the time of the election. Do you have any thoughts about that now? 

At a philosophical level, I think that the thing I get more excited about in the DOGE remit is how do we modernize the approach that many of these agencies are taking to regulation? How do we ensure that we’re driving more efficiency so we can have just better productivity in the government? I think that’s a very good thing. I think there’s an ability to modernize a lot of the technology as well in the government to get more efficiency and be able to drive just overall better results. When you have better data feeds coming in, when you have better ways of collaborating, when you get better insights, we can make better policy decisions, we can run the government better. 

As a big-name tech CEO, do you feel an obligation to express your views and let people know where you stand on these things?

For me, it’s a natural thing. But I do think we’re in an era right now where there’s almost no industry, and especially in tech, there’s no subindustry in tech that will not be impacted by policy decisions from the federal government. And so I do think we’re in an environment where you have to lean in to some extent on the policy conversation if you want any chance that it ends up going in a productive direction.

There are some things that, to me personally and from a business perspective, rise to higher or lower levels. One of my biggest areas is high-skill immigration. And so that’s an area that I unequivocally and emphatically view as mission-critical to get right, because it will lead to the next generation of companies for us to go build in the future: the next Apple, the next Google, the next OpenAI. You want the odds to be that that’s going to get created in America, and high-skill immigration is one of your ways of increasing those odds.

AI policy ranks very high because getting AI policy right or wrong could mean that either the U.S. is the home of AI or China is the home of AI. And as a U.S. company, I think it would be a disaster if we miss the window where we could have complete AI leadership.

On the high-skilled labor part of this, are there specific changes you’re in favor of, or that you think might have a chance of happening in the next four years?

We do need a faster way for people to get into the country that is more of a merit-oriented approach. On the campaign trail, Trump very clearly stated that he wants to stamp a green card to every diploma for individuals coming from outside the country to study here. And so I think there’s been some acknowledgment that we have an inefficient system. It’s a little bit too random at times, and it’s probably not serving us in the best way possible of clearly getting the best talent in the world to always come here, and that’s what we need. I will keep fighting and shouting from the rooftops that that’s a critical policy, whether it’s in the Trump administration or whatever administration comes next.

I know that your business depends on AI, and you’re probably looking down the road and thinking about how Box’s product can evolve as AI progresses. Do you have concerns about the safety risks of future AI models?

The labs need to operate responsibly. They need to test these models and ensure the safety and protection of how these models operate, and ensure that we are in a situation where AI can’t go rogue and complete actions on its own without the right kind of guardrails built into these systems. So, I’m in favor of everything that the market is currently doing. The part that I’m less in favor of is a situation where there would be incredibly extreme liability for the model providers to release new AI model updates without major government involvement. Because what that will do is dramatically slow down the pace of the industry, and the pace of the industry will move at the rate at which the government can evaluate and understand how AI works. And in any industry where that is happening, we see less competition, we see higher prices, we see less innovation.

Maybe there’s a time and a place for that to happen in AI, but we’re not there yet. AI right now is really early. And so, what we need is an environment where AI innovation is accelerating, where the models are getting better, where they’re getting cheaper, where they’re getting more capable. And what we need is a shared, industry-oriented way of establishing that we do need safe AI. These teams should be testing their models. We should have more best practices, more research, more red teaming of these technologies. But to me, I have not been compelled yet that we need the government to overwhelm the system with those reviews and those procedures. I may get to that point where I do believe that, and I’m actually glad that there are lots of people who say we need that. I think it should be a really healthy debate and dialogue.

https://www.fastcompany.com/91295411/can-tech-thrive-under-trump-aaron-levie-thinks-so
